This courseware is intended to help participants learn the basic principles of writing
applications for the subject. It surveys the various issues involved and discusses
techniques for dealing with them.
Participants have no doubt seen many books on the subject that are 800 to 1000
pages long. This courseware is much smaller, but we have tried to make the
coverage as broad as possible. It will always be our endeavor to improve the content
and coverage of the material.
The R&D team of professionals at TRENDZ has designed this courseware after
extensive research, taking into consideration present market trends and the
requirements of the participants. Keeping in mind the knowledge and understanding
level of the participants, the language used in this book is simple and direct. This is
the third released version on the subject by the R&D team.
This courseware offers only an introduction to the subject, and participants are
advised to refer to additional sources to improve their technical knowledge. In the
daily sessions, the technical coordinator will discuss more real-time applications
than are covered in the courseware.
The team has put in every effort to convey the best knowledge through this
courseware; if you have any suggestions, please advise us.
INDEX
Servlets
o Overview
o Life Cycle
o Request & Response
o HTTP Servlets
o Session Tracking
Model-View-Controller
What is a Design Pattern
Helpful Hints
JDBC
o Overview
o Types of Drivers
o Result Sets
o Transactions
JDBC OVERVIEW
The JDBC API is based on the X/Open SQL CLI, which is also the basis for ODBC.
JDBC provides a natural and easy-to-use mapping from the Java programming
language to the abstractions and concepts defined in the X/Open CLI and SQL
standards.
Since its introduction in January 1997, the JDBC API has become widely accepted
and implemented. The flexibility of the API allows for a broad range of
implementations.
PLATFORMS
The JDBC API is part of the Java platform, which includes the Java 2 Standard Edition
(J2SE) and the Java 2 Enterprise Edition (J2EE). The JDBC 3.0 API is divided into two
packages: java.sql and javax.sql. Both packages are included in the J2SE and J2EE
platforms.
GOALS:
The JDBC API is a mature technology, having first been specified in January 1997. In
its initial release, the JDBC API focused on providing a basic call-level interface to
SQL databases. The JDBC 2.1 specification and the 2.0 Optional Package
specifications then broadened the scope of the API to include support for more
advanced applications and for the features required by application servers to manage
use of the JDBC API on behalf of their applications. The overall goal of the JDBC 3.0
specification is to “round out” the API by filling in smaller areas of missing
functionality. The following list outlines the goals and design philosophy for the JDBC
API in general and the JDBC 3.0 API in particular:
Keep it simple
The JDBC API is intended to be a simple-to-use, straightforward interface
upon which more complex entities can be built. This goal is achieved by
defining many compact, single-purpose methods instead of a smaller number
of complex, multipurpose ones with control flag parameters.
The JDBC 3.0 API provides the migration path for JDBC drivers to the
Connector architecture. It should be possible for vendors whose products use
JDBC technology to move incrementally towards implementing the Connector
API. The expectation is that these implementers will write “resource manager
wrappers” around their existing data source implementations so that they can
be reused in a Connector framework.
CONNECTIONS:
A Connection object represents a connection to a data source via a JDBC technology-
enabled driver. The data source can be a DBMS, a legacy file system, or some other
source of data with a corresponding JDBC driver. A single application using the JDBC
API may maintain multiple connections. These connections may access multiple data
sources, or they may all access a single data source.
From the JDBC driver perspective, a Connection object represents a client session. It
has associated state information such as user ID, a set of SQL statements and result
sets being used in that session, and what transaction semantics are in effect. To
obtain a connection, the application may interact with either the DriverManager
class working with one or more Driver implementations, or a DataSource object.
The sections that follow describe the various types of JDBC drivers and the use of
the Driver interface, the DriverManager class, and the basic DataSource interface,
including DataSource implementations that support connection pooling and
distributed transactions.
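As a minimal sketch of obtaining a connection through the DriverManager (the URL, user name, and password below are placeholders for a hypothetical driver, not a real product):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionSketch {
    // Hypothetical connection details; substitute those of a real driver.
    static final String URL = "jdbc:acme:bookdb";

    static Connection open() throws SQLException {
        // DriverManager asks each registered driver whether it accepts URL.
        return DriverManager.getConnection(URL, "user", "password");
    }

    public static void main(String[] args) {
        try (Connection conn = open()) {
            System.out.println("connected");
        } catch (SQLException e) {
            // Expected in this sketch: no acme driver is actually registered.
            System.out.println("No suitable driver for " + URL);
        }
    }
}
```

Closing the Connection (here via try-with-resources) ends the client session and releases the statements and result sets associated with it.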
Types of Drivers
There are many possible implementations of JDBC drivers. These
implementations are categorized as follows:
Type 1 — Drivers that implement the JDBC API as a mapping to another data
access API, such as ODBC. Drivers of this type are generally dependent on a
native library, which limits their portability. The JDBC-ODBC Bridge driver is
an example of a Type 1 driver.
Type 2 — Drivers that are written partly in the Java programming language
and partly in native code. These drivers use a native client library specific to
the data source to which they connect. Again, because of the native code,
their portability is limited.
Type 3 — Drivers that use a pure Java client and communicate with a
middleware server using a database-independent protocol. The middleware
server then communicates the client’s requests to the data source.
Type 4 — Drivers that are pure Java and implement the network protocol for a
specific data source. The client connects directly to the data source.
The DriverManager class invokes Driver methods when it wishes to interact with a
registered driver. The Driver interface also includes the method acceptsURL(). The
DriverManager can use this method to determine which of its registered drivers it
should use for a given Uniform Resource Locator (URL).
URL Syntax
The recommended JDBC URL syntax is structured as follows:
jdbc:<subprotocol>:<subname>
When the subname refers to a data source on a network, the recommended
convention for the subname is:
//hostname:port/subsubname
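As a small illustration (this splitting helper is not part of the JDBC API), a hypothetical URL such as jdbc:odbc:books breaks down into its subprotocol and subname:

```java
public class JdbcUrlParts {
    // Split a JDBC URL of the form jdbc:<subprotocol>:<subname> into its parts.
    static String[] parts(String url) {
        return url.split(":", 3); // ["jdbc", subprotocol, subname]
    }

    public static void main(String[] args) {
        String[] p = parts("jdbc:odbc:books");
        System.out.println(p[1] + " / " + p[2]); // prints "odbc / books"
    }
}
```

The subprotocol (odbc here) identifies the driver to use; the subname identifies the particular data source.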
DriverManager Class
The DriverManager class works with the Driver interface to manage the set of drivers
available to a JDBC client. When the client requests a connection and provides a URL,
the DriverManager is responsible for finding a driver that recognizes the URL and for
using it to connect to the corresponding data source.
A driver can be loaded, and thereby registered with the DriverManager, by calling:
Class.forName("acme.db.Driver");
The DriverPropertyInfo class provides information on the properties that the JDBC
driver can understand.
SQLPermission Class
The SQLPermission class represents a set of permissions that a codebase may be
granted.
Currently the only permission defined is setLog. The SecurityManager will check for
the setLog permission when an Applet calls either the DriverManager method
setLogWriter or setLogStream. If the codebase does not have the setLog permission,
a java.lang.SecurityException exception will be thrown.
STATEMENTS
This section describes the Statement interface and its subclasses PreparedStatement
and CallableStatement. It also describes related topics, including escape syntax,
performance hints, and auto-generated keys.
Statement Interface
The Statement interface defines methods for executing SQL statements that
do not contain parameter markers. The PreparedStatement interface adds
methods for setting input parameters, and the CallableStatement interface
adds methods for retrieving output parameter values returned from stored
procedures.
Creating Statements
Statement objects are created by a Connection object.
Each Connection object can create multiple Statement objects that may be
used concurrently by the program.
A Connection can also create a Statement object that returns result sets that are
scrollable, that are insensitive to changes made while the ResultSet object is open,
that can be updated, and that do not close the ResultSet objects when a commit
operation is implicitly or explicitly performed. The arguments to createStatement()
select among the available ResultSet types, concurrency levels, and holdability
settings. The following shows the creation of a scrollable, insensitive, updatable
result set that stays open after the method commit is called.
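A sketch of such a call (conn is assumed to be an already-open Connection; the constants are standard java.sql.ResultSet fields):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ScrollableStatementSketch {
    // Creates a scrollable, insensitive, updatable statement whose cursors
    // stay open across commits. Needs a live Connection, so it is shown
    // here but not invoked in main.
    static Statement scrollable(Connection conn) throws SQLException {
        return conn.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_UPDATABLE,
                ResultSet.HOLD_CURSORS_OVER_COMMIT);
    }

    public static void main(String[] args) {
        // No database here; just confirm the type constants are distinct.
        System.out.println(ResultSet.TYPE_SCROLL_INSENSITIVE != ResultSet.TYPE_FORWARD_ONLY);
    }
}
```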
String sql;
...
When the SQL string being executed returns a ResultSet object, the method
getUpdateCount() returns -1. If the SQL string being executed returns an
update count, the method getResultSet() returns null.
Closing a Statement object will close and invalidate any instances of ResultSet
produced by that Statement object. The resources held by the ResultSet
object may not be released until garbage collection runs again, so it is a good
practice to explicitly close ResultSet objects when they are no longer needed.
PreparedStatement Interface
The PreparedStatement interface extends Statement, adding the ability to set
values for parameter markers contained within the statement.
Setting Parameters
The PreparedStatement interface defines setter methods that are used to
substitute values for each of the parameter markers in the precompiled SQL
string. The names of the methods follow the pattern "set<Type>".
The values set for the parameter markers of a PreparedStatement object are
not reset when it is executed. The method clearParameters() can be called to
explicitly clear the values that have been set. Setting a parameter with a
different value will replace the previous value with the new one.
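A sketch of the setter pattern (the booklist table follows the example used later in this courseware; its price column and the connection itself are assumptions, so the update method is shown but not invoked):

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PreparedSketch {
    static final String SQL = "UPDATE booklist SET price = ? WHERE isbn = ?";

    // Requires a live Connection; shown but not invoked below.
    static int updatePrice(Connection conn, String isbn, BigDecimal price)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setBigDecimal(1, price); // substitute the first marker
            ps.setString(2, isbn);      // substitute the second marker
            return ps.executeUpdate();  // number of rows affected
        }
    }

    public static void main(String[] args) {
        // Count the parameter markers in the precompiled SQL string.
        System.out.println(SQL.chars().filter(c -> c == '?').count());
    }
}
```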
Type Conversions
The data type specified in a PreparedStatement setter method is a data type
in the Java programming language. The JDBC driver is responsible for
mapping this to the corresponding JDBC type (one of the SQL types defined in
java.sql.Types) so that it is the appropriate type to be sent to the data source.
A parameter may be set to JDBC NULL by supplying the target JDBC type to setNull():
ps.setNull(2, java.sql.Types.VARCHAR);
When the statement to be executed is not known until runtime, result set metadata
(rsmd below) can be used to discover the type and label of each column:
int colType;
String colLabel;
for (int i = 1; i <= colCount; i++) {
colType = rsmd.getColumnType(i);
colLabel = rsmd.getColumnLabel(i);
...
}
If the statement being executed does not return a ResultSet object, the
method executeQuery() throws an SQLException.
CallableStatement Interface
The CallableStatement interface extends PreparedStatement with methods for
executing and retrieving results from stored procedures.
A CallableStatement object is created by calling the prepareCall method, using
the JDBC escape syntax for stored procedures:
CallableStatement cstmt =
conn.prepareCall("{? = call validate(?, ?)}");
Setting Parameters
CallableStatement objects may take three types of parameters: IN, OUT, and
INOUT. The parameter can be specified as either an ordinal parameter or a
named parameter. A value must be set for each parameter marker in the
statement.
It is not possible to combine setting parameters with ordinals and with names
in the same statement. If ordinals and names are used for parameters in the
same statement, an SQLException is thrown.
Note: In some cases it may not be possible to provide only some of the
parameters for a procedure. For example, if the procedure name is
overloaded, the data source determines which procedure to call based on the
number of parameters. Enough parameters must be provided to allow the
data source to resolve any ambiguity.
IN Parameters
IN parameters are assigned values using the setter methods.
cstmt.setString(1, "October");
cstmt.setDate(2, date);
OUT Parameters
The method registerOutParameter() must be called to set the type for each
OUT parameter before a CallableStatement object is executed. When the
stored procedure returns from execution, it will use these types to set the
values for any OUT parameters.
The values of OUT parameters can be retrieved using the appropriate getter
methods defined in the CallableStatement interface. The following shows the
execution of a stored procedure with two OUT parameters, a string and a
float, and the retrieval of the OUT parameter values.
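A sketch of that example (the procedure name getTestData and its parameter types are assumptions; a live Connection is required, so the method is shown but not invoked):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

public class OutParamSketch {
    static void readOutParams(Connection conn) throws SQLException {
        try (CallableStatement cstmt = conn.prepareCall("{call getTestData(?, ?)}")) {
            // Register the JDBC type of each OUT parameter before executing.
            cstmt.registerOutParameter(1, Types.VARCHAR);
            cstmt.registerOutParameter(2, Types.FLOAT);
            cstmt.execute();
            // Retrieve the values the stored procedure set.
            String s = cstmt.getString(1);
            float f = cstmt.getFloat(2);
            System.out.println(s + " " + f);
        }
    }

    public static void main(String[] args) {
        // The java.sql.Types codes used above are plain int constants.
        System.out.println(Types.VARCHAR + " " + Types.FLOAT);
    }
}
```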
INOUT Parameters
Parameters that are both input and output parameters must be both set by
using the appropriate setter method and also registered by calling the
registerOutParameter() method. The type implied by the setter method and
the type supplied to the method registerOutParameter() must be the same.
RESULT SETS
The ResultSet interface provides methods for retrieving and manipulating the results
of executed queries.
ResultSet Types
The type of a ResultSet object determines the level of its functionality in two
main areas: (1) the ways in which the cursor can be manipulated, (2) how
concurrent changes made to the underlying data source are reflected by the
ResultSet object. The latter is called the sensitivity of the ResultSet object.
1. TYPE_FORWARD_ONLY
• The result set is not scrollable; its cursor moves forward only, from
before the first row to after the last row.
• The rows contained in the result set depend on how the underlying
database materializes the results. That is, it contains the rows that
satisfy the query at either the time the query is executed or as the
rows are retrieved.
2. TYPE_SCROLL_INSENSITIVE
• The result set is scrollable; its cursor can move both forward and
backward relative to the current position, and it can move to an
absolute position.
• The result set is insensitive to changes made to the underlying data
source while it is open. It contains the rows that satisfy the query at
either the time the query is executed or as the rows are retrieved.
3. TYPE_SCROLL_SENSITIVE
• The result set is scrollable; its cursor can move both forward and
backward relative to the current position, and it can move to an
absolute position.
• The result set reflects changes made to the underlying data source
while the result set remains open.
If the driver does not support the type supplied to the methods
createStatement(), prepareStatement(), or prepareCall(), it generates an
SQLWarning on the Connection object that is creating the statement. When
the statement is executed, the driver returns a ResultSet object of a type that
most closely matches the requested type. An application can find out the type
of a ResultSet object by calling the method ResultSet.getType().
ResultSet Concurrency
The concurrency of a ResultSet object determines what level of update
functionality is supported. The two concurrency levels are CONCUR_READ_ONLY
and CONCUR_UPDATABLE.
If the driver does not support the concurrency level supplied to the methods
createStatement(), prepareStatement(), or prepareCall(), it generates an
SQLWarning on the Connection object that is creating the statement.
If the driver cannot return a ResultSet object at the requested type and
concurrency, it determines the appropriate type before determining the
concurrency.
ResultSet Holdability
Calling the method Connection.commit() can close the ResultSet objects that
have been created during the current transaction. In some cases, however,
this may not be the desired behaviour. The ResultSet property holdability
gives the application control over whether ResultSet objects (cursors) are
closed when a commit operation is implicitly or explicitly performed.
1. HOLD_CURSORS_OVER_COMMIT
• ResultSet objects (cursors) are not closed; they are held open when a
commit operation is implicitly or explicitly performed.
2. CLOSE_CURSORS_AT_COMMIT
• ResultSet objects (cursors) are closed when a commit operation is
implicitly or explicitly performed. Closing cursors at commit can result
in better performance for some applications.
The following shows the creation of a scrollable, insensitive, read-only result
set with a cursor that is not holdable.
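A sketch of that creation, using the booklist query described below (a live Connection is required, so the method is shown but not invoked):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HoldabilitySketch {
    static ResultSet booklist(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.CLOSE_CURSORS_AT_COMMIT);
        return stmt.executeQuery("SELECT author, title, isbn FROM booklist");
    }

    public static void main(String[] args) {
        // The holdability constants are plain ints on java.sql.ResultSet.
        System.out.println(ResultSet.CLOSE_CURSORS_AT_COMMIT);
    }
}
```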
For each book in the table booklist, the ResultSet object will contain a row
consisting of three columns, author, title, and isbn. The following sections
detail how these rows and columns can be retrieved.
Cursor Movement
A ResultSet object maintains a cursor, which points to its current row of data.
When a ResultSet object is first created, the cursor is positioned before the
first row. The following methods can be used to move the cursor:
• next() — moves the cursor forward one row. Returns true if the cursor
is now positioned on a row and false if the cursor is positioned after
the last row.
• previous() — moves the cursor backwards one row. Returns true if
the cursor is now positioned on a row and false if the cursor is
positioned before the first row.
• first() — moves the cursor to the first row in the ResultSet object.
Returns true if the cursor is now positioned on the first row and false if
the ResultSet object does not contain any rows.
• last() — moves the cursor to the last row in the ResultSet object.
Returns true if the cursor is now positioned on the last row and false if
the ResultSet object does not contain any rows.
• beforeFirst() — positions the cursor at the start of the ResultSet
object, before the first row. If the ResultSet object does not contain
any rows, this method has no effect.
• afterLast() — positions the cursor at the end of the ResultSet object,
after the last row. If the ResultSet object does not contain any rows,
this method has no effect.
• relative(int row)— moves the cursor relative to its current position.
If row is 0 (zero), the cursor is unchanged. If row is positive, the
cursor is moved forward row rows. If the cursor is less than the
specified number of rows from the last row, the cursor is positioned
after the last row. If row is negative, the cursor is moved backward
row rows. If the cursor is less than row rows from the first row, the
cursor is positioned before the first row.
• absolute(int row) — moves the cursor to the row number specified.
If row is positive, the cursor is moved row rows from the beginning of
the ResultSet object. The first row is 1, the second 2, and so on. If row
is greater than the number of rows in the ResultSet object, the cursor
is positioned after the last row.
If row is negative, the cursor is moved row rows from the end of the
ResultSet object. The last row is -1, the penultimate -2, and so on. If
the absolute value of row is greater than the number of rows in the
ResultSet object, the cursor is positioned before the first row.
Retrieving Values
The ResultSet interface provides methods for retrieving the values of columns
from the row where the cursor is currently positioned.
Two getter methods exist for each JDBC type: one that takes the column
index as its first parameter and one that takes the column name or label.
The columns are numbered from left to right, as they appear in the select list
of the query, starting at 1.
Column names supplied to getter methods are case insensitive. If a select list
contains the same column more than once, the first instance of the column
will be returned.
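The cursor movement and getter methods combine into the usual iteration pattern (rs is assumed to be an open ResultSet over booklist, so the method is shown but not invoked):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class IterationSketch {
    static void printBooks(ResultSet rs) throws SQLException {
        while (rs.next()) {                        // advance row by row
            String title = rs.getString("title");  // retrieve by column label
            String isbn = rs.getString(3);         // or by 1-based column index
            System.out.println(title + " " + isbn);
        }
    }

    public static void main(String[] args) {
        System.out.println("pattern only; requires a live ResultSet");
    }
}
```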
The index of the first instance of a column name can be retrieved using the
method findColumn(). If the specified column is not found, the method
findColumn() throws an SQLException.
ResultSet rs = stmt.executeQuery(sqlstring);
int colIdx = rs.findColumn("ISBN");
ResultSetMetaData
When the ResultSet method getMetaData is called on a ResultSet object, it
returns a ResultSetMetaData object describing the columns of that ResultSet
object. In cases where the SQL statement being executed is unknown until
runtime, the ResultSetMetaData can be used to determine which of the getter
methods should be used to retrieve the data.
ResultSet rs = stmt.executeQuery(sqlString);
ResultSetMetaData rsmd = rs.getMetaData();
int colType [] = new int[rsmd.getColumnCount()];
for (int idx = 0, col = 1; idx < colType.length; idx++, col++)
colType[idx] = rsmd.getColumnType(col);
When the column value in the database is JDBC NULL, it may be returned to
the Java application as null, 0, or false, depending on the type of the column
value. Column values that map to Java Object types are returned as a Java
null; those that map to numeric types are returned as 0; those that map to a
Java Boolean are returned as false. Therefore, it may be necessary to call the
wasNull() method to determine whether the last value retrieved was a JDBC
NULL.
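A sketch of the wasNull() check (the price column is a made-up numeric column; a live ResultSet is required, so the method is shown but not invoked):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class WasNullSketch {
    // getInt returns 0 for JDBC NULL, so wasNull() disambiguates a real 0
    // from a NULL column value.
    static Integer priceOrNull(ResultSet rs) throws SQLException {
        int price = rs.getInt("price");
        return rs.wasNull() ? null : price;
    }

    public static void main(String[] args) {
        System.out.println("sketch only; requires a live ResultSet");
    }
}
```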
Updating a Row
Updating a row in a ResultSet object is a two-phase process. First, the new
value for each column being updated is set, and then the change is applied to
the row. The row in the underlying data source is not updated until the
second phase is completed.
The ResultSet interface contains two update methods for each JDBC type, one
specifying the column to be updated as an index and one specifying the
column name as it appears in the select list.
The method updateRow() is used to apply all column changes to the current
row. The changes are not made to the row until updateRow() has been called.
The method cancelRowUpdates() can be used to back out changes made to the
row before the method updateRow() is called. The following code shows the
current row being updated to change the value of the column "author" to
"Karnitkar, Yaswant":
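A sketch of that two-phase update (rs is assumed to be an updatable, open ResultSet positioned on the row to change, so the method is shown but not invoked):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class UpdateRowSketch {
    static void renameAuthor(ResultSet rs) throws SQLException {
        rs.updateString("author", "Karnitkar, Yaswant"); // phase 1: set the new value
        rs.updateRow();                                  // phase 2: apply it to the row
    }

    public static void main(String[] args) {
        System.out.println("sketch only; requires an updatable ResultSet");
    }
}
```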
A ResultSet object may be able to use the method rowUpdated to detect rows
that have had the method updateRow called on them. The method
DatabaseMetaData.updatesAreDetected(int type) returns true if a ResultSet
object of the specified type can determine if a row is updated using the
method rowUpdated() and false otherwise.
Deleting a Row
A row in a ResultSet object can be deleted using the method deleteRow().
The following shows a row in a ResultSet object being deleted:
rs.absolute(4);
rs.deleteRow();
Inserting a Row
New rows may be inserted using the ResultSet interface. New rows are
constructed in a special insert row. The steps to insert a new row are:
1. Move the cursor to the insert row by calling moveToInsertRow().
2. Set a value for each column of the new row using the appropriate
update methods.
3. Call insertRow() to insert the row into the ResultSet object and the
underlying database.
The following shows these steps used to insert a new row into the table booklist.
Each column in the insert row that does not allow null as a value and does not
have a default value must be given a value using the appropriate update
method. If this is not the case, the method insertRow() will throw an
SQLException.
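A sketch of those steps (the column values are placeholders; rs is assumed to be an updatable, open ResultSet over booklist, so the method is shown but not invoked):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class InsertRowSketch {
    static void addBook(ResultSet rs) throws SQLException {
        rs.moveToInsertRow();                      // 1. move to the insert row
        rs.updateString("author", "Author, A.");   // 2. set every required column
        rs.updateString("title", "A Sample Title");
        rs.updateString("isbn", "0-000-00000-0");
        rs.insertRow();                            // 3. apply the new row
        rs.moveToCurrentRow();                     // return to the remembered position
    }

    public static void main(String[] args) {
        System.out.println("sketch only; requires an updatable ResultSet");
    }
}
```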
TRANSACTIONS
Transactions are used to provide data integrity, correct application semantics,
and a consistent view of data during concurrent access. All JDBC compliant
drivers are required to support transactions. This section covers:
Auto-commit mode
Transaction isolation levels
Savepoints
The Connection method setTransactionIsolation() can be used to change the
transaction isolation level for a connection. A newly set isolation level remains in
effect for the remainder of the session or until the next invocation of the
setTransactionIsolation() method.
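A sketch of setting the isolation level (a live Connection is required, so the method is shown but not invoked):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationSketch {
    static void useSerializable(Connection conn) throws SQLException {
        conn.setAutoCommit(false); // group subsequent statements into one transaction
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    }

    public static void main(String[] args) {
        // The isolation levels are plain int constants on java.sql.Connection.
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);
    }
}
```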
Performance Considerations
As the transaction isolation level increases, more locking and other DBMS
overhead is required to ensure the correct semantics. This in turn lowers the
degree of concurrent access that can be supported. As a result, applications
may see decreased performance when they use a higher transaction isolation
level. For this reason, the transaction manager, whether it is the application
itself or part of the application server, should weigh the need for data
consistency against the requirements for performance when determining
which transaction isolation level is appropriate.
Savepoints
Savepoints provide finer-grained control of transactions by marking
intermediate points within a transaction. Once a savepoint has been set, the
transaction can be rolled back to that savepoint without affecting preceding
work. The DatabaseMetaData.supportsSavepoints method can be used to
determine whether a JDBC API implementation supports savepoints.
The following code inserts a row into a table, sets the savepoint svpt1, and
then inserts a second row. When the transaction is later rolled back to svpt1,
the second insertion is undone, but the first insertion remains intact. In other
words, when the transaction is committed, only the row containing 'FIRST' will
be added to TAB1.
Statement stmt = conn.createStatement();
int rows = stmt.executeUpdate("INSERT INTO TAB1 (COL1) VALUES "
+ "('FIRST')");
// set savepoint
Savepoint svpt1 = conn.setSavepoint("SAVEPOINT_1");
rows = stmt.executeUpdate("INSERT INTO TAB1 (COL1) VALUES "
+ "('SECOND')");
...
// roll back to the savepoint; the second insertion is undone
conn.rollback(svpt1);
...
conn.commit();
Releasing a Savepoint
The method Connection.releaseSavepoint takes a Savepoint object as a
parameter and removes it from the current transaction.
Overview
Life Cycle
HTTP Servlets
Session Tracking
SERVLETS OVERVIEW
What is a Servlet?
A Servlet is a Java technology-based Web component, managed by a container, that
generates dynamic content. Like other Java technology-based components, Servlets
are platform-independent Java classes that are compiled to platform-neutral byte
code that can be loaded dynamically into and run by a Java technology-enabled Web
server. Containers, sometimes called Servlet engines, are Web server extensions that
provide Servlet functionality. Servlets interact with Web clients via a
request/response paradigm implemented by the Servlet container.
J2SE is the minimum version of the underlying Java platform with which Servlet
containers must be built.
An Example
The following is a typical sequence of events:
1. A client (e.g., a Web browser) accesses a Web server and makes an HTTP
request.
2. The request is received by the Web server and handed off to the Servlet
container. The Servlet container can be running in the same process as the
host Web server, in a different process on the same host, or on a different
host from the Web server for which it processes requests.
3. The Servlet container determines which Servlet to invoke based on the
configuration of its Servlets, and calls it with objects representing the request
and response.
4. The Servlet uses the request object to find out who the remote user is, what
HTTP POST parameters may have been sent as part of this request, and other
relevant data. The Servlet performs whatever logic it was programmed with,
and generates data to send back to the client. It sends this data back to the
client via the response object.
5. Once the Servlet has finished processing the request, the Servlet container
ensures that the response is properly flushed, and returns control back to the
host Web server.
Servlets have the following advantages over other server extension mechanisms:
• They are generally much faster than CGI scripts because a different process
model is used.
• They use a standard API that is supported by many Web servers.
• They have all the advantages of the Java programming language, including
ease of development and platform independence.
• They can access the large set of APIs available for the Java platform.
Servlet Interface:
The Servlet interface is the central abstraction of the Java Servlet API. All Servlets
implement this interface either directly, or more commonly, by extending a class that
implements the interface. The two classes in the Java Servlet API that implement the
Servlet interface are GenericServlet and HttpServlet. For most purposes, Developers
will extend HttpServlet to implement their Servlets.
The handling of concurrent requests to a Web application generally requires that the
Web Developer design Servlets that can deal with multiple threads executing within
the service method at a particular time.
Generally the Web container handles concurrent requests to the same Servlet by
concurrent execution of the service method on different threads.
Number of Instances
The Servlet declaration, which is part of the deployment descriptor of the Web
application containing the Servlet (see “Deployment Descriptor”), controls how
the Servlet container provides instances of the Servlet.
For a Servlet not hosted in a distributed environment (the default), the Servlet
container must use only one instance per Servlet declaration. However, for a Servlet
implementing the SingleThreadModel interface, the Servlet container may instantiate
multiple instances to handle a heavy request load and serialize requests to a
particular instance.
In the case where a Servlet was deployed as part of an application marked in the
deployment descriptor as distributable, a container may have only one instance per
Servlet declaration per Java Virtual Machine (JVM). However, if the Servlet in a
distributable application implements the SingleThreadModel interface, the container
may instantiate multiple instances of that Servlet in each JVM of the container.
Instead of implementing this interface, it is recommended that a developer take
other means to resolve thread safety issues, such as avoiding the use of instance
variables or synchronizing the block of code that accesses shared resources. The
SingleThreadModel interface is deprecated in this version of the specification.
Life Cycle
A Servlet is managed through a well-defined life cycle that defines how it is loaded
and instantiated, is initialized, handles requests from clients, and is taken out of
service. This life cycle is expressed in the API by the init, service, and destroy
methods of the javax.servlet.Servlet interface that all Servlets must implement
directly or indirectly through the GenericServlet or HttpServlet abstract classes.
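As a sketch of these life-cycle hooks (compiling and running this requires the javax.servlet API on the classpath and a Servlet container; the greeting init parameter is a made-up example):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LifeCycleServlet extends HttpServlet {
    private String greeting; // one-time state built during initialization

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config); // chain so getServletConfig() keeps working
        // Read a name-value initialization parameter from the deployment descriptor.
        greeting = config.getInitParameter("greeting");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // HttpServlet's service() dispatches HTTP GET requests here.
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>" + greeting + "</body></html>");
    }

    @Override
    public void destroy() {
        greeting = null; // release resources before the instance is discarded
    }
}
```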
The Servlet container is responsible for loading and instantiating Servlets. The
loading and instantiation can occur when the container is started, or delayed until the
container determines the Servlet is needed to service a request.
When the Servlet engine is started, the Servlet container must locate the Servlet
class. The Servlet container loads the Servlet class using normal Java class loading
facilities. The loading may be from a local file system, a remote file system, or other
network services. After loading the Servlet class, the container instantiates it for
use.
Initialization
After the Servlet object is instantiated, the container must initialize the Servlet
before it can handle requests from clients. Initialization is provided so that a Servlet
can read persistent configuration data, initialize costly resources (such as JDBC™ API
based connections), and perform other one-time activities. The container initializes
the Servlet instance by calling the init method of the Servlet interface with a unique
(per Servlet declaration) object implementing the ServletConfig interface. This
configuration object allows the Servlet to access name-value initialization parameters
from the Web application’s configuration information. The configuration object also
gives the Servlet access to an object (implementing the ServletContext interface)
that describes the Servlet’s runtime environment. See Chapter SRV.3, “Servlet
Context” for more information about the ServletContext interface.
Request Handling
After a Servlet is properly initialized, the Servlet container may use it to handle client
requests. Requests are represented by objects of type ServletRequest. The Servlet
fills out the response to a request by calling methods of a provided object of type
ServletResponse. These objects are passed as parameters to the service method of
the Servlet interface.
In the case of an HTTP request, the objects provided by the container are of types
HttpServletRequest and HttpServletResponse.
Note that a Servlet instance placed into service by a Servlet container may handle no
requests during its lifetime.
Multithreading Issues
A Servlet container may send concurrent requests through the service method of the
Servlet. To handle the requests, the Servlet developer must make adequate
provisions for concurrent processing with multiple threads in the service method.
For Servlets not implementing the SingleThreadModel interface, if the service method
(or methods such as doGet() or doPost() which are dispatched to the service method
of the HttpServlet abstract class) has been defined with the synchronized keyword,
the Servlet container cannot use the instance pool approach, but must serialize
requests through it. It is strongly recommended that Developers not synchronize the
service method (or methods dispatched to it) in these circumstances because of
detrimental effects on performance.
Thread Safety
Implementations of the request and response objects are not guaranteed to be
thread safe. This means that they should only be used within the scope of the
request handling thread.
References to the request and response objects should not be handed to objects
executing in other threads, as the resulting behavior may be nondeterministic. If a
thread created by the application uses container-managed objects, such as the
request or response object, those objects must be accessed only within the Servlet's
service life cycle, and such a thread should itself have a life cycle within the life cycle
of the Servlet's service method, because accessing those objects after the service
method ends may cause nondeterministic problems. Be aware that the request and
response objects are not thread safe. If those objects are accessed in multiple
threads, the access should be synchronized or done through a wrapper that adds
thread safety, for instance by synchronizing the calls to the methods that access
request attributes, or by using a local output stream for the response object within
a thread.
End of Service
The Servlet container is not required to keep a Servlet loaded for any particular
period of time. A Servlet instance may be kept active in a Servlet container for a
period of milliseconds, for the lifetime of the Servlet container (which could be a
number of days, months, or years), or any amount of time in between.
When the Servlet container determines that a Servlet should be removed from
service, it calls the destroy method of the Servlet interface to allow the Servlet to
release any resources it is using and save any persistent state. For example, the
container may do this when it wants to conserve memory resources, or when it is
being shut down.
Before the Servlet container calls the destroy method, it must allow any threads that
are currently running in the service method of the Servlet to complete execution, or
exceed a server-defined time limit.
Once the destroy method is called on a Servlet instance, the container may not route
other requests to that instance of the Servlet. If the container needs to enable the
Servlet again, it must do so with a new instance of the Servlet’s class.
After the destroy method completes, the Servlet container must release the Servlet
instance so that it is eligible for garbage collection.
Servlet Context:
ServletContext Interface
The ServletContext interface defines a Servlet’s view of the Web application within
which the Servlet is running. The Container Provider is responsible for providing an
implementation of the ServletContext interface in the Servlet container. Using the
ServletContext object, a Servlet can log events, obtain URL references to resources,
and set and store attributes that other Servlets in the context can access.
Servlets in a container that were not deployed as part of a Web application are
implicitly part of a “default” Web application and have a default ServletContext. In a
distributed container, the default ServletContext is non-distributable and must only
exist in one JVM.
Initialization Parameters
The following methods of the ServletContext interface allow the Servlet access to
context initialization parameters associated with a Web application as specified by
the Application Developer in the deployment descriptor:
• getInitParameter
• getInitParameterNames
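For instance, a Servlet can read context-wide parameters declared with <context-param> in the deployment descriptor; this is a sketch, and the parameter name jdbcURL is illustrative:

```java
import java.util.Enumeration;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServlet;

public class StartupServlet extends HttpServlet {
    public void init() {
        ServletContext ctx = getServletContext();
        // Look up a single named context initialization parameter.
        String url = ctx.getInitParameter("jdbcURL");
        // Or walk all parameter names declared for this Web application.
        for (Enumeration e = ctx.getInitParameterNames(); e.hasMoreElements();) {
            String name = (String) e.nextElement();
            log(name + " = " + ctx.getInitParameter(name));
        }
    }
}
```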
Context Attributes
A Servlet can bind an object attribute into the context by name. Any attribute bound
into a context is available to any other Servlet that is part of the same Web
application. The following methods of ServletContext interface allow access to this
functionality:
• setAttribute()
• getAttribute()
• getAttributeNames()
• removeAttribute()
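A sketch of binding and reading a context attribute (the attribute name hitCounter is illustrative):

```java
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServlet;

public class CounterServlet extends HttpServlet {
    protected void count() {
        ServletContext ctx = getServletContext();
        // Bind an object into the context; every Servlet in the same
        // Web application can now see it under this name.
        Integer hits = (Integer) ctx.getAttribute("hitCounter");
        int next = (hits == null) ? 1 : hits.intValue() + 1;
        ctx.setAttribute("hitCounter", new Integer(next));
        // ctx.removeAttribute("hitCounter") would unbind it again.
    }
}
```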
Resources
The ServletContext interface provides direct access only to the hierarchy of static
content documents that are part of the Web application, including HTML, GIF, and
JPEG files, via the following methods of the ServletContext interface:
• getResource()
• getResourceAsStream()
Both methods take a String argument giving the path of the resource relative to the
root of the context. This hierarchy of documents may exist in the server’s file system,
in a Web application archive file, on a remote server, or at some other location.
THE REQUEST:
The request object encapsulates all information from the client request. In the HTTP
protocol, this information is transmitted from the client to the server in the HTTP
headers and the message body of the request.
The parameters are stored as a set of name-value pairs. Multiple parameter values
can exist for any given parameter name. The following methods of the
ServletRequest interface are available to access parameters:
• getParameter()
• getParameterNames()
• getParameterValues()
• getParameterMap()
Data from the query string and the POST body are aggregated into the request
parameter set. Query string data is presented before POST body data. For example, if
a request is made with a query string of a=hello and a post body of
a=goodbye&a=world, the resulting parameter set would be ordered a=(hello,
goodbye, world).
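Continuing that example, the difference between getParameter() and getParameterValues() can be sketched as follows (request is the ServletRequest passed to the service method):

```java
// Query string: a=hello    POST body: a=goodbye&a=world
String first = request.getParameter("a");        // "hello" (first value only)
String[] all  = request.getParameterValues("a"); // {"hello", "goodbye", "world"}
```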
Path parameters that are part of a GET request are not exposed. They must be
parsed from the String values returned by the getRequestURI() method or the
getPathInfo() method.
Attributes
Attributes are objects associated with a request. Attributes may be set by the
container to express information that otherwise could not be expressed via the API,
or may be set by a Servlet to communicate information to another Servlet (via the
RequestDispatcher). Attributes are accessed with the following methods of the
ServletRequest interface:
• getAttribute()
• getAttributeNames()
• setAttribute()
Attribute names beginning with the prefixes “java.” and “javax.” are reserved for
definition by this specification. Similarly, attribute names beginning with the prefixes
“sun.” and “com.sun.” are reserved for definition by Sun Microsystems. It is
suggested that all attributes placed in the attribute set be named in accordance with
the reverse domain name convention suggested by the Java Language Specification
for package naming.
Headers
A Servlet can access the headers of an HTTP request through the following methods
of the HttpServletRequest interface:
• getHeader()
• getHeaders()
• getHeaderNames()
The getHeader() method returns a header given the name of the header. There can
be multiple headers with the same name, e.g. Cache-Control headers, in an HTTP
request. If there are multiple headers with the same name, the getHeader() method
returns the first header in the request. The getHeaders() method allows access to all
the header values associated with a particular header name, returning an
Enumeration of String objects.
Headers may contain String representations of int or Date data. The following
convenience methods of the HttpServletRequest interface provide access to header
data in one of these formats:
• getIntHeader()
• getDateHeader()
Path Elements
The request path that leads to a Servlet is composed of a context path, a Servlet
path, and extra path information, available through the following methods of the
HttpServletRequest interface:
• getContextPath()
• getServletPath()
• getPathInfo()
It is important to note that, except for URL encoding differences between the request
URI and the path parts, the following equation is always true:
requestURI = contextPath + servletPath + pathInfo
Cookies
The getCookies() method of the HttpServletRequest interface returns an array of
Cookie objects present in the request.
THE RESPONSE:
The response object encapsulates all information to be returned from the server to
the client. In the HTTP protocol, this information is transmitted from the server to the
client either in HTTP headers or in the message body of the response.
Buffering
A Servlet container is allowed, but not required, to buffer output going to the client
for efficiency purposes. Typically servers that do buffering make it the default, but
allow Servlets to specify buffering parameters.
The following methods in the ServletResponse interface allow a Servlet to access and
set buffering information:
• getBufferSize()
• setBufferSize()
• isCommitted()
• reset()
• resetBuffer()
• flushBuffer()
The getBufferSize() method returns the size of the underlying buffer being used. If
no buffering is being used, this method must return the int value of 0 (zero).
The Servlet can request a preferred buffer size by using the setBufferSize() method.
The buffer assigned is not required to be the size requested by the Servlet, but must
be at least as large as the size requested. This allows the container to reuse a set of
fixed size buffers, providing a larger buffer than requested if appropriate. The
method must be called before any content is written using a ServletOutputStream or
Writer. If any content has been written or the response object has been committed,
this method must throw an IllegalStateException.
The isCommitted() method returns a boolean value indicating whether any response
bytes have been returned to the client. The flushBuffer() method forces content in
the buffer to be written to the client.
The reset method clears data in the buffer when the response is not committed.
Headers and status codes set by the Servlet prior to the reset call must be cleared as
well. The resetBuffer() method clears content in the buffer if the response is not
committed without clearing the headers and status code.
When using a buffer, the container must immediately flush the contents of a filled
buffer to the client. If this is the first data sent to the client, the response is
considered to be committed.
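A sketch of how these buffering methods fit together inside a service method (response is the ServletResponse passed in):

```java
// Must be called before any content is written,
// or it throws IllegalStateException.
response.setBufferSize(8 * 1024);   // container may give a larger buffer

java.io.PrintWriter out = response.getWriter();
out.println("tentative content");

if (!response.isCommitted()) {
    response.resetBuffer();         // discard body; headers/status survive
    out.println("final content");
}
response.flushBuffer();             // force write; the response is now committed
```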
Convenience Methods
The following convenience methods exist in the HttpServletResponse interface:
• sendRedirect()
• sendError()
The sendRedirect() method will set the appropriate headers and content body to
redirect the client to a different URL. It is legal to call this method with a relative URL
path; however, the underlying container must translate the relative path to a fully
qualified URL for transmission back to the client. If a partial URL is given and, for
whatever reason, cannot be converted into a valid URL, then this method must throw
an IllegalArgumentException.
The sendError() method will set the appropriate headers and content body for an
error message to return to the client. An optional String argument can be provided to
the sendError() method, which can be used in the content body of the error.
These methods will have the side effect of committing the response, if it has not
already been committed, and terminating it. The Servlet should make no further
output to the client after these methods are called. If data is written to the response
after these methods are called, the data is ignored.
If data has been written to the response buffer, but not returned to the client (i.e.
the response is not committed), the data in the response buffer must be cleared and
replaced with the data set by these methods. If the response is committed, these
methods must throw an IllegalStateException.
Filtering
Filters are Java components that allow on-the-fly transformation of payload and
header information, both in the request going into a resource and in the response
coming from a resource.
This section describes the Java Servlet classes and methods that provide a
lightweight framework for filtering active and static content. It describes how filters
are configured in a Web application, and the conventions and semantics of their
implementation.
What is a filter?
A filter is a reusable piece of code that can transform the content of HTTP requests,
responses, and header information. Filters do not generally create a response or
respond to a request as Servlets do; rather they modify or adapt the requests for a
resource, and modify or adapt responses from a resource. Filters can act on dynamic
or static content. Dynamic and static content are referred to as Web resources.
Among the types of functionality available to the developer needing to use filters are
the following: authentication, logging and auditing, image conversion, data
compression, encryption, tokenizing of streams, XSL/T transformations, and
MIME-type chaining.
Main Concepts
The main concepts of this filtering model are described in this section. The
application developer creates a filter by implementing the javax.Servlet.Filter
interface and providing a public constructor taking no arguments. The class is
packaged in the Web Archive along with the static content and Servlets that make up
the Web application. A filter is declared using the <filter> element in the deployment
descriptor. A filter or collection of filters can be configured for invocation by defining
<filter-mapping> elements in the deployment descriptor. This is done by mapping
filters to a particular Servlet by the Servlet’s logical name, or mapping to a group of
Servlets and static content resources by mapping a filter to a URL pattern.
Filter Lifecycle
After deployment of the Web application, and before a request causes the container
to access a Web resource, the container must locate the list of filters that must be
applied to the Web resource as described below. The container must ensure that it
has instantiated a filter of the appropriate class for each filter in the list, and called
its init(FilterConfig config) method. The filter may throw an exception to indicate that
it cannot function properly. If the exception is of type UnavailableException, the
container may examine the isPermanent attribute of the exception and may choose
to retry the filter at some later time.
When the container receives an incoming request, it takes the first filter instance in
the list and calls its doFilter() method, passing in the ServletRequest and
ServletResponse, and a reference to the FilterChain object it will use. The doFilter()
method of a filter will typically be implemented following this pattern, or some subset
of it: examine the request, optionally wrap the request or response objects, invoke
the next entity in the chain with chain.doFilter(), and examine the response after the
chain returns.
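A minimal filter following that pattern might look like the sketch below (the class name and log messages are illustrative):

```java
import java.io.IOException;
import javax.servlet.*;

public class AuditFilter implements Filter {
    private FilterConfig config;

    public void init(FilterConfig config) {
        this.config = config;
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Examine (or wrap) the request before the target resource runs.
        config.getServletContext().log("before: " + req.getRemoteAddr());
        // Pass control to the next filter in the chain, or to the resource.
        chain.doFilter(req, res);
        // Examine (or adapt) the response on the way back out.
        config.getServletContext().log("after");
    }

    public void destroy() {
        config = null;
    }
}
```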
Before a filter instance can be removed from service by the container, the container
must first call the destroy method on the filter, to enable the filter to release any
resources and perform other cleanup operations.
Optionally, the programmer can specify icons, a textual description, and a display
name for tool manipulation. The container must instantiate exactly one instance of
the Java class defining the filter per filter declaration in the deployment descriptor.
Hence, two instances of the same filter class will be instantiated by the container if
the developer makes two filter declarations for the same filter class. Here is an
example of a filter declaration:
<filter>
<filter-name>Image Filter</filter-name>
<filter-class>com.acme.ImageFilter</filter-class>
</filter>
Once a filter has been declared in the deployment descriptor, the assembler uses the
<filter-mapping> element to define the Servlets and static resources in the Web
application to which the filter is to be applied. Filters can be associated with a Servlet
using the <servlet-name> element. For example, the following maps the Image
Filter filter to the ImageServlet Servlet:
<filter-mapping>
<filter-name>Image Filter</filter-name>
<servlet-name>ImageServlet</servlet-name>
</filter-mapping>
Filters can also be associated with a group of Servlets and static content using the
<url-pattern> style of mapping:
<filter-mapping>
<filter-name>Logging Filter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Here the Logging Filter is applied to all the Servlets and static content pages in the
Web application, because every request URI matches the ‘/*’ URL pattern. When
processing a <filter-mapping> element using the <url-pattern> style, the container
must determine whether the <url-pattern> matches the request URI. The order the
container uses in building the chain of filters to be applied for a particular request
URI is as follows:
1. First, the <url-pattern> matching filter mappings, in the same order that
these elements appear in the deployment descriptor.
2. Next, the <servlet-name> matching filter mappings, in the same order that
these elements appear in the deployment descriptor.
This requirement means that the container, when receiving an incoming request,
processes the request as follows:
• If there are filters matched by Servlet name and the Web resource has a
<servlet-name>, the container builds the chain of filters matching in the
order declared in the deployment descriptor. The last filter in this chain
corresponds to the last <servlet-name> matching filter and is the filter that
invokes the target Web resource.
• If there are filters using <url-pattern> matching and the <url-pattern>
matches the request URI, the container builds the chain of <url-pattern>
matched filters, in the same order as declared in the deployment descriptor.
The last filter in this chain is the last <url-pattern> matching filter in the
deployment descriptor for this request URI, and it is the filter that invokes the
first filter in the <servlet-name> matching chain, or invokes the target Web
resource if there are none.
It is expected that high performance Web containers will cache filter chains so that
they do not need to compute them on a per-request basis.
Sessions
The Hypertext Transfer Protocol (HTTP) is by design a stateless protocol. To build
effective Web applications, it is imperative that requests from a particular client be
associated with each other. Many strategies for session tracking have evolved over
time, but all are difficult or troublesome for the programmer to use directly. This
specification defines a simple HttpSession interface that allows a Servlet container to
use any of several approaches to track a user’s session without involving the
Application Developer in the nuances of any one approach.
Cookies
Session tracking through HTTP cookies is the most widely used session tracking
mechanism and is required to be supported by all Servlet containers.
cookie to the client. The client will then return the cookie on each subsequent
request to the server, unambiguously associating the request with a session. The
name of the session tracking cookie must be JSESSIONID.
URL Rewriting
URL rewriting is the lowest common denominator of session tracking. When a client
will not accept a cookie, the server may use URL rewriting as the basis for session
tracking. URL rewriting involves adding data (a session ID) to the URL path; this
data is interpreted by the container to associate the request with a session. The
session ID must be encoded as a path parameter in the URL string. The name of the
parameter must be jsessionid. Here is an example of a URL containing encoded path
information:
http://www.myserver.com/catalog/index.html;jsessionid=1234
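Servlets should not build such URLs by hand; the encodeURL() method of HttpServletResponse appends the session ID only when the container determines that cookies are not being used (a sketch; the path is illustrative):

```java
// The container adds ";jsessionid=..." to the link only if needed.
String link = response.encodeURL("/catalog/index.html");
out.println("<a href=\"" + link + "\">Catalog</a>");
```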
Creating a Session
A session is considered “new” when it is only a prospective session and has not been
established. Because HTTP is a request-response based protocol, an HTTP session is
considered to be new until a client “joins” it. A client joins a session when session
tracking information has been returned to the server indicating that a session has
been established. Until the client joins a session, it cannot be assumed that the next
request from the client will be recognized as part of a session.
These conditions define the situation where the Servlet container has no mechanism
by which to associate a request with a previous request.
A Servlet developer must design the application to handle situations where a client
has not, cannot, or will not join a session.
Session Scope
HttpSession objects must be scoped at the application (or Servlet context) level. The
underlying mechanism, such as the cookie used to establish the session, can be the
same for different contexts, but the object referenced, including the attributes in that
object, must never be shared between contexts by the container.
Session Timeouts
In the HTTP protocol, there is no explicit termination signal when a client is no longer
active. This means that the only mechanism that can be used to indicate when a
client is no longer active is a timeout period. The default timeout period for sessions
is defined by the Servlet container and can be obtained via the
getMaxInactiveInterval() method of the HttpSession interface. This timeout can be
changed by the Developer using the setMaxInactiveInterval() method of the
HttpSession interface.
The timeout periods used by these methods are defined in seconds. By definition, if
the timeout period for a session is set to -1, the session will never expire. The
session invalidation will not take effect until all Servlets using that session have
exited the service method. Once the session invalidation is initiated, a new request
must not be able to see that session.
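A sketch of typical session use inside doGet() (the attribute name cart is illustrative):

```java
import javax.servlet.http.HttpSession;

HttpSession session = request.getSession(true);  // create one if none exists
if (session.isNew()) {
    // The client has not yet joined the session by returning its ID.
}
session.setAttribute("cart", new java.util.ArrayList());
session.setMaxInactiveInterval(30 * 60);         // expire after 30 idle minutes
int seconds = session.getMaxInactiveInterval();  // timeout is in seconds
// session.invalidate() would end the session explicitly.
```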
Dispatching Requests
When building a Web application, it is often useful to forward processing of a request
to another Servlet, or to include the output of another Servlet in the response. The
RequestDispatcher interface provides a mechanism to accomplish this.
Obtaining a RequestDispatcher
An object implementing the RequestDispatcher interface may be obtained from the
ServletContext via the following methods:
• getRequestDispatcher()
• getNamedDispatcher()
A RequestDispatcher may also be obtained via the getRequestDispatcher() method
of the ServletRequest interface. The behavior of this method is similar to the method
of the same name in the ServletContext, except that the Servlet container uses
information in the request object to transform the given path, relative to the current
Servlet, into a complete path.
The Container Provider should ensure that the dispatch of the request to a target
Servlet occurs in the same thread of the same JVM as the original request.
The path elements of the request object exposed to the target Servlet must reflect
the path used to obtain the RequestDispatcher.
The only exception to this is if the RequestDispatcher was obtained via the
getNamedDispatcher() method. In this case, the path elements of the request object
must reflect those of the original request.
Before the forward method of the RequestDispatcher interface returns, the response
content must be sent and committed, and closed by the Servlet container.
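A sketch of both dispatching styles (the paths and the Servlet name are illustrative):

```java
import javax.servlet.RequestDispatcher;

// By path, relative to the context root:
RequestDispatcher rd =
        getServletContext().getRequestDispatcher("/reports/summary.jsp");
rd.forward(request, response);    // on return the response is committed and closed

// By logical name from the deployment descriptor:
RequestDispatcher named = getServletContext().getNamedDispatcher("ReportServlet");
named.include(request, response); // its output is merged into this response
```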
Remote Method Invocation (RMI)
RPC, however, does not translate well into distributed object systems, where
communication between program-level objects residing in different address spaces is
needed. In order to match the semantics of object invocation, distributed object
systems require Remote Method Invocation, or RMI. In such systems, a local
surrogate (stub) object manages the invocation on a remote object.
The Java remote method invocation system described in this specification has been
specifically designed to operate in the Java environment. The Java language’s RMI
system assumes the homogeneous environment of the Java Virtual Machine, and the
system can therefore take advantage of the Java object model whenever possible.
System Goals
The goals for supporting distributed objects in the Java language are:
Underlying all these goals is a general requirement that the RMI model be both
simple (easy to use) and natural (fits well in the language).
The first two chapters in this specification describe the distributed object model for
the Java language and the system overview. The remaining chapters describe the
RMI client and server visible APIs which are part of JDK 1.2.
RMI can load class bytecodes for objects that are passed as parameters or return
values, because RMI allows a caller to pass pure Java objects to remote objects. RMI
provides the necessary mechanisms for loading an object’s code as well as for
transmitting its data.
The illustration below depicts an RMI distributed application that uses the registry to
obtain references to a remote object. The server calls the registry to associate a
name with a remote object. The client looks up the remote object by its name in the
server’s registry and then invokes a method on it. The illustration also shows that the
RMI system uses an existing web server to load Java class bytecodes from server to
client and from client to server, for objects when needed. RMI can load class
bytecodes using any URL protocol (e.g. HTTP, FTP, file, etc.) that is supported by the
Java system.
Definition of Terms
In the Java distributed object model, a remote object is one whose methods can be
invoked from another Java Virtual Machine, potentially on a different host. An object
of this type is described by one or more remote interfaces, which are Java interfaces
that declare the methods of the remote objects.
The Java distributed object model differs from the Java object model in these ways:
• Clients of remote objects interact with remote interfaces, never with the
implementation classes of those interfaces.
• Non-remote arguments to, and results from, a remote method invocation are
passed by copy rather than by reference, because references to objects are
useful only within a single virtual machine.
• A remote object is passed by reference, not by copying the actual remote
implementation.
• The semantics of some of the methods defined by class java.lang.Object are
specialized for remote objects.
• Since the failure modes of invoking remote objects are inherently more
complicated than the failure modes of invoking local objects, clients must deal
with additional exceptions that can occur during a remote method invocation.
• The class can define methods that do not appear in the remote interface, but
those methods can only be used locally and are not available remotely.
Referential Integrity
If two references to an object are passed from one VM to another VM in parameters
(or in the return value) in a single remote method call and those references refer to
the same object in the sending VM, those references will refer to a single copy of the
object in the receiving VM. More generally stated: within a single remote method
call, the RMI system maintains referential integrity among the objects passed as
parameters or as a return value in the call.
Class Annotation
When an object is sent from one VM to another in a remote method call, the RMI
system annotates the class descriptor in the call stream with information (the URL)
of the class so that the class can be loaded at the receiver. It is a requirement that
classes be downloaded on demand during remote method invocation.
Parameter Transmission
Parameters in an RMI call are written to a stream that is a subclass of the class
java.io.ObjectOutputStream in order to serialize the parameters to the destination
of the remote call. The ObjectOutputStream subclass overrides the replaceObject
method to replace each remote object with its corresponding stub class. Parameters
that are objects are written to the stream using the ObjectOutputStream’s
writeObject() method.
For a client to invoke a method on a remote object, that client must first obtain a
reference to the object. A reference to a remote object is usually obtained as a
parameter or return value in a method call. The RMI system provides a simple
bootstrap name server from which to obtain remote objects on given hosts. The
java.rmi.Naming class provides Uniform Resource Locator (URL) based methods to
look up, bind, rebind, unbind, and list the name-object pairings maintained on a
particular host and port.
The stub hides the serialization of parameters and the network-level communication
in order to present a simple invocation mechanism to the caller.
In the remote VM, each remote object may have a corresponding skeleton (in JDK
1.2-only environments, skeletons are not required). The skeleton is responsible for
dispatching the call to the actual remote object implementation. When a skeleton
receives an incoming method invocation, it unmarshals the arguments, invokes the
method on the actual remote object implementation, and marshals the result (or an
exception) back to the caller.
In JDK 1.2, an additional stub protocol was introduced that eliminates the need for
skeletons in JDK 1.2-only environments. Instead, generic code is used to carry out
the duties performed by skeletons in JDK 1.1. The rmic compiler generates stubs and
skeletons.
When no client references a remote object, the RMI runtime refers to it using a weak
reference. The weak reference allows the Java Virtual Machine’s garbage collector to
discard the object if no other local references to the object exist. The distributed
garbage collection algorithm interacts with the local Java Virtual Machine’s garbage
collector in the usual ways, by holding normal or weak references to objects.
Note that if a network partition exists between a client and a remote server object, it
is possible that premature collection of the remote object will occur (since the
transport might believe that the client crashed). Because of the possibility of
premature collection, remote references cannot guarantee referential integrity; in
other words, it is always possible that a remote reference may in fact not refer to an
existing object. An attempt to use such a reference will generate a RemoteException,
which must be handled by the application.
import java.rmi.*;
public interface RmiInter extends Remote
{
public double getSqrt(double d) throws RemoteException;
}
import java.rmi.*;
import java.rmi.server.*;
public class ServerImpl extends UnicastRemoteObject implements RmiInter
{
public ServerImpl() throws RemoteException
{
System.out.println("Object created");
}
public double getSqrt(double d) throws RemoteException
{
return Math.sqrt(d);
}
public static void main(String args[]) throws Exception
{
ServerImpl si=new ServerImpl();
Naming.rebind("server",si);
System.out.println("Object bound to the registry");
}
}
Client Application
import java.io.*;
import java.rmi.*;
public class Client
{
public static void main(String args[]) throws Exception
{
RmiInter ri=(RmiInter) Naming.lookup("rmi://localhost:1099/server");
DataInputStream dis=new DataInputStream(System.in);
System.out.println("Enter a double number to know its sqrt");
String num=dis.readLine();
double d=Double.parseDouble(num);
System.out.println("The sqrt is " + ri.getSqrt(d));
}
}
Remote Callbacks
We have seen earlier how a client can get a reference to a remote object as result of
a method invocation. A client also can be a remote object. In some situations, a
server may need to make a remote call to a client.
CallClientInter.java
import java.rmi.*;
public interface CallClientInter extends Remote
{
public void msgPopup(String msg) throws RemoteException;
}
CallServerInter.java
import java.rmi.*;
public interface CallServerInter extends Remote
{
public String sayHello(CallClientInter cci) throws RemoteException;
}
CallClientImpl.java
import java.applet.*;
import java.awt.*;
import java.io.Serializable;
import java.rmi.*;
import java.rmi.server.*;
public class CallClientImpl extends Applet implements CallClientInter, Serializable
{
String msg=" ";
Frame f=new Frame();
Label l1=new Label("");
public void init()
{
f.add(l1);
try
{
// Export this applet so the server can call back into it.
UnicastRemoteObject.exportObject(this);
String host="rmi://"+getCodeBase().getHost()+"/HelloServer";
CallServerInter csi=(CallServerInter) Naming.lookup(host);
msg=csi.sayHello((CallClientInter) this);
}
catch(Exception e)
{
e.printStackTrace();
}
}
public void paint(Graphics g)
{
g.drawString(msg,50,50);
}
public void msgPopup(String s) throws RemoteException
{
l1.setText(s);
f.setSize(100,100);
f.setVisible(true);
}
}
CallClientImpl.html
CallServerImpl.java
import java.util.Date;
import java.rmi.*;
import java.rmi.server.*;
public class CallServerImpl extends UnicastRemoteObject implements
CallServerInter
{
public CallServerImpl() throws RemoteException
{
System.out.println("Object created");
}
// Remote method: call back into the client, then return a greeting
// (completion of the truncated example).
public String sayHello(CallClientInter cci) throws RemoteException
{
cci.msgPopup("Hello at "+new Date());
return "Hello from server";
}
public static void main(String args[]) throws Exception
{
Naming.rebind("HelloServer",new CallServerImpl());
System.out.println("Server bound");
}
}
When parameters and return values for a remote method invocation are
unmarshalled to become live objects in the receiving VM, class definitions are
required for all of the types of objects in the stream. The unmarshalling process first
attempts to resolve classes by name in its local class loading context (the context
class loader of the current thread). RMI also provides a facility for dynamically
loading the class definitions for the actual types of objects passed as parameters and
return values for remote method invocations from network locations specified by the
transmitting endpoint. This includes the dynamic downloading of remote stub classes
corresponding to particular remote object implementation classes (and used to
contain remote references) as well as any other type that is passed by value in RMI
calls, such as the subclass of a declared parameter type, that is not already available
in the class loading context of the unmarshalling side.
To support dynamic class loading, the RMI runtime uses special subclasses of
java.io.ObjectOutputStream and java.io.ObjectInputStream for the marshal
streams that it uses for marshalling and unmarshalling RMI parameters and return
values. These subclasses override the annotateClass method of
ObjectOutputStream and the resolveClass method of ObjectInputStream to
communicate information about where to locate class files containing the definitions
for classes corresponding to the class descriptors in the stream.
For every class descriptor written to an RMI marshal stream, the annotateClass
method adds to the stream the result of calling
java.rmi.server.RMIClassLoader.getClassAnnotation for the class object, which may
be null or may be a String object
representing the codebase URL path (a space-separated list of URLs) from which the
remote endpoint should download the class definition file for the given class.
For every class descriptor read from an RMI marshal stream, the resolveClass
method reads a single object from the stream. If the object is a String (and the value
of the java.rmi.server.useCodebaseOnly property is not “true”), then
resolveClass returns the result of calling RMIClassLoader.loadClass with the
annotated String object as the first parameter and the name of the desired class in
the class descriptor as the second parameter. Otherwise, resolveClass returns the
result of calling RMIClassLoader.loadClass with the name of the desired class as the
only parameter.
When a client requests a reference to a remote object, the registry returns the
stub to the client. The client looks for the class definition of the stub in its local
classpath (by default); if it is found there, the client loads the class, otherwise it
downloads it from the codebase specified by the codebase property.
Based upon the above, five potential configurations can be set up to distribute
classes.
a. Closed
There is no dynamic loading of classes. JVM loads the classes from local
classpath only.
d. Bootstrapped Client
On the client, all the classes are loaded from the codebase specified by the server.
e. Bootstrapped Server
On the server, all the classes are loaded from the codebase specified by the client.
Example:
BootInter.java
import java.rmi.*;
public interface BootInter extends Remote
{
    public String sayHello() throws RemoteException;
}
BootInterImpl.java
import java.rmi.*;
import java.rmi.server.*;
public class BootInterImpl extends UnicastRemoteObject implements BootInter
{
    public BootInterImpl() throws RemoteException
    {
        System.out.println("Remote object created");
    }
    public String sayHello() throws RemoteException
    {
        return "Hello from Server";
    }
}
BootServer.java
import java.rmi.*;
import java.rmi.server.*;
import java.util.*;
public class BootServer
{
    public static void main(String args[]) throws Exception
    {
        Properties p = System.getProperties();
        String s = p.getProperty("java.rmi.server.codebase");
        Class c = RMIClassLoader.loadClass(s, "BootInterImpl");
        Naming.rebind("bootserver", (Remote) c.newInstance());
        System.out.println("Object Created");
    }
}
Client.java
import java.rmi.*;
public class Client
{
    public Client()
    {
        try
        {
            BootInter bi = (BootInter) Naming.lookup("rmi://localhost:1099/bootserver");
            System.out.println(bi.sayHello());
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
BootClient.java
import java.rmi.*;
import java.rmi.server.*;
import java.util.*;
public class BootClient
{
    public static void main(String args[]) throws Exception
    {
        Properties p = System.getProperties();
        String s = p.getProperty("java.rmi.server.codebase");
        Class c = RMIClassLoader.loadClass(s, "Client");
        c.newInstance();
    }
}
Object Activation
Object activation allows remote objects to be executed on an as-needed basis: when
an 'activatable' remote object is accessed (via a method invocation) and that remote
object is not currently executing, the system initiates the object's execution inside an
appropriate JVM. RMI uses lazy activation, in which the activation of an object is
deferred until a client first uses it, that is, until the first method invocation.
To understand the actual semantics of using activation model, let us understand few
terms.
Activator
It facilitates remote object activation by keeping track of all the information needed
to activate an object and is responsible for starting the instances of JVMs on the
server.
Activation Group
Activation Group creates instances of objects in its group, and informs its monitor
about the various active and passive states.
Activation Monitor
Every Activation Group has an Activation Monitor that keeps track of each object's
state in the group and of the group's state as a whole.
Activation System
The Activation System provides a means for registering groups and activatable
objects to be activated within those groups.
Java Server Pages (JSP)
The Internet was once full of Web sites hosting static pages or simple forms at best.
Now it's an interactive environment for transacting daily business, from shopping to
trading stocks to interacting with suppliers, in a personalized and dynamic setting.
Today, the tools and products to build dynamic, Web-based applications are still
maturing. Traditionally, companies used CGI applications to generate dynamic
content for Web pages. But that solution hasn't scaled well to support complex
functionality and growing numbers of concurrent users. Java Server Pages
technology provides a highly scalable method for creating dynamic content for the
Web. As part of the Java family of APIs, JSP technology shares the “Write Once, Run
Anywhere” benefits of the Java platform, with easy access to a broad range of Java
APIs. JSP technology enables a tiered development methodology that lets
organizations leverage internal programming expertise to create applications that are
fast to deploy and easy to maintain.
The Web-based client architecture may have three or more layers. This multi-tier
architecture provides many benefits over a traditional (two-tiered) client/server
architecture.
The organizations that are building and maintaining these applications also have
stringent requirements when selecting the architectures, products, and tools for
creating Web-based applications.
JSP technology has evolved from the powerful servlet technology. (Servlets are Java
technology-based, server-side applications.) JSP extends the servlet technology in
many ways, making it easier and faster to build, deploy, and maintain server-side
applications that communicate with Web-based clients. The following sections
describe where JSP technology fits in the Java family of products, how JSP can
simplify the creation and maintenance of dynamic pages, and how these pages fit
into more complex, multi-tier applications.
What JSP pages do, however, is enable a different, more efficient development
methodology and simplify ongoing maintenance. This is because JSP technology truly
separates the page design and static content from the logic used to generate the
dynamic content.
Some of the earlier methods include CGI programs, the mod_perl plug-in for the
Apache Web Server, and Microsoft Active Server Pages (ASP). The JSP technology
surpasses these previous methods in two fundamental areas:
Portability - Pages built with JSP technology are portable across platforms and
servers, and work with portable, reusable components.
Easier Maintenance and Development - Because the page design is truly
separate from the application logic, JSP enables tiered development and
simplifies ongoing maintenance.
The Java Server Pages (JSP) technology provides a simplified, fast way to create web
pages that display dynamically generated content. JSP technology was designed to
make it easier and faster to build web-based applications that work with a wide
variety of web servers, application servers, browsers and development tools.
The following sections provide an overview of the JSP technology, describing the
background in which it was developed and the overall goals for the technology. They
also describe the key components of a Java technology-based JSP page, in the
context of a simple example.
Applications that can make use of browser-based clients have several advantages
over traditional client/server based applications. These include nearly unlimited client
access and greatly simplified application deployment and management. (To update
an application, a developer only needs to change one server-based program, not
thousands of client-installed applications.) As a result, the software industry is
moving quickly toward building multi-tiered applications using browser-based clients.
An early solution to this problem was the CGI-BIN interface; developers wrote
individual programs to this interface, and web-based applications called the
programs through the web server. This solution has significant scalability problems --
each new CGI request launches a new process on the server. If multiple users access
the program concurrently, these processes consume all of the web server's available
resources and performance grinds to a halt.
Individual web server vendors have tried to simplify web application development by
providing "plug-ins" and APIs for their servers. These solutions are web-server
specific, and don't address the problem across multiple vendor solutions. For
example, Microsoft's Active Server Pages (ASP) technology makes it easier to
create dynamic content on a web page, but works only with Microsoft IIS or Personal
Web Server.
Other solutions exist, but they are not necessarily easy for the average page
designer to deploy. Technologies such as Java Servlets, for example, make it easier
to write server-based code using the Java programming language for interactive
applications. A Java Servlet is a Java technology-based program that runs on the
server (as opposed to an applet, which runs on the browser). Developers can write
Servlets that take an HTTP request from the web browser, generate the response
dynamically (possibly querying databases to fulfill the request) and then send a
response containing an HTML or XML document to the browser.
Using this approach, the entire page must be composed in the Java Servlet. If a
developer or web master wanted to tune the appearance of the page, they would
have to edit and recompile the Java Servlet, even if the logic were already working.
With this approach, generating pages with dynamic content still requires application
development expertise.
The Java Server Pages (JSP) technology was designed to fit this need. The JSP
specification is the result of extensive industry cooperation between vendors of web
servers, application servers, transactional systems, and development tools. Sun
Microsystems developed the specification to integrate with and leverage existing
expertise and tools support for the Java programming environment, such as Java
Servlets and JavaBeans. The result is a new approach to developing web-based
applications that extends powerful capabilities to page designers using component-
based application logic.
JSP technology speeds the development of dynamic web pages in a number of ways:
Because the native scripting language for JSP pages is based on the Java
programming language, and because all JSP pages are compiled into Java
Servlets, JSP pages have all of the benefits of Java technology, including
robust memory management and security.
As part of the Java platform, JSP shares the Write Once, Run Anywhere
characteristics of the Java programming language. As more vendors add JSP
support to their products, you can use servers and tools of your choice,
changing tools or servers without affecting current applications.
When integrated with the Java 2 Platform, Enterprise Edition (J2EE) and
Enterprise JavaBeans technology, JSP pages will provide enterprise-class
scalability and performance necessary for deploying web-based applications
across the virtual enterprise.
The JSP technology is best described using an example. The following JSP page is
very simple; it prints the day of the month and the year, and welcomes you with
either "Good Morning" or "Good Afternoon," depending on the time of day.
<HTML>
<%@ page language="java" import="java.util.*,com.wombat.JSP.*" %>
<H1>Welcome</H1>
<P>Today is </P>
<jsp:useBean id="clock" class="calendar.jspCalendar" />
<UL>
<LI>Day: <%= clock.getDayOfMonth() %>
<LI>Year: <%= clock.getYear() %>
</UL>
<% if (Calendar.getInstance().get(Calendar.AM_PM) == Calendar.AM) { %>
Good Morning
<% } else { %>
Good Afternoon
<% } %>
</HTML>
The page includes the following components:
A JSP directive passes information to the JSP engine. In this case, the first
line indicates the location of some Java programming language extensions to
be accessible from this page. Directives are enclosed in <%@ and %>
markers.
Fixed template data: Any tags that the JSP engine does not recognize are
passed through to the results page. Typically, these are HTML or XML tags.
This includes the unordered list (UL) and H1 tags in the example above.
JSP actions, or tags: These are typically implemented as standard tags or
customized tags, and have an XML tag syntax. In the example, the
jsp:useBean tag instantiates the Clock JavaBean on the server.
An expression: The JSP engine evaluates anything between <%= and %>
markers. In the list items above, the values of the Day and Year attributes of
the Clock bean are returned as strings and inserted into the output of the JSP
file. In the example above, the first list item will be the day of the month, and
the second item the year.
A scriptlet is a small script that performs functions not supported by tags or
ties everything together. The native scripting language for JSP 1.0 software is
based on the Java programming language. The scriptlet in the above sample
determines whether it is AM or PM and greets the user accordingly (for
daytime users, at any rate).
The example may be trivial, but the technology is not. Businesses can encapsulate
critical processing in server-side Beans, and web developers can easily access that
information, using familiar syntax and tools. Java-based scriptlets provide a flexible
way to perform other functions, without requiring extensive scripting.
JSP Directives
JSP pages use JSP directives to pass instructions to the JSP engine. These may
include the following:
Scripting Elements
JSP pages can include small scripts, called scriptlets, in a page. A scriptlet is
a code fragment executed at request-processing time. Scriptlets may be combined
with static elements on the page (as in the example above) to create a dynamically
generated page.
Scripts are delineated within <% and %> markers. The scripting language engine, in
our example the Java virtual machine on the host, will evaluate anything within those
markers.
The JSP specification supports all of the usual script elements, including expressions
and declarations.
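As a sketch of the idea, a scriptlet and an expression might be combined with template text like this (the visits variable and the surrounding markup are purely illustrative):

```jsp
<%-- hypothetical fragment: a scriptlet declares a variable,
     and an expression prints it into the page output --%>
<% int visits = 3; %>
<P>You have visited this page <%= visits %> times.</P>
```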
JSP pages are typically compiled into Java Servlets. Java Servlets are a standard
Java extension. The page developer has access to the complete Java application
environment, with all of the scalability and portability of the Java technology-enabled
family.
When a JSP page is first called, if a compiled version does not yet exist, the page is
compiled into a Java Servlet class and stored in server memory. This enables very fast responses for
subsequent calls to that page. (This avoids the CGI-bin problem of spawning new
processes for each HTTP request, or the runtime parsing required by server-side
includes.)
An Application
In a simple implementation, the browser directly invokes a JSP page, which itself
generates the requested content (perhaps invoking JDBC to get information directly
from a database). The JSP page can call JDBC components to generate results, and
creates standard HTML that it sends back to the browser as a result.
This model basically replaces the CGI-BIN concept with a JSP page (compiled as a
Java Servlet). This method has the following advantages:
This architecture works well for many applications, but it does not scale for a large
number of simultaneous Web-based clients accessing scarce enterprise resources,
since each must establish or share a connection to the content resource in question.
For example, if the JSP page accesses a database, it may generate many connections
to the database, which can affect the database performance.
In a more advanced model, a servlet processes the request and packages the results
into a result bean and invokes the JSP page. The JSP page accesses the dynamic
content from the bean and sends the results (as HTML) to the browser.
This approach creates more reusable components that can be shared between
applications, and may be implemented as part of a larger application. It still has
scalability issues in terms of handling connections to enterprise resources, such as
databases.
JSP Tags
Most JSP processing will be implemented through JSP-specific XML-based tags. JSP
1.0 includes a number of standard tags, referred to as the core tags. These include:
The advantage of tags is that they are easy to use and share between applications.
The real power of a tag-based syntax comes with the development of custom tag
libraries, in which tool vendors or others can create and distribute tags for specific
purposes.
This section describes the standard actions of JavaServer Pages. Standard actions
are represented using XML elements with a prefix of jsp (though that prefix can be
redefined in the XML syntax). A translation error will result if the jsp prefix is used
for an element that is not a standard action.
<jsp:useBean>
A jsp:useBean action associates an instance of a Java programming language object,
defined within a given scope and available under a given id, with a newly declared
scripting variable of the same id. When a <jsp:useBean> action is used in a
scriptless page, or in a scriptless context (as in the body of an action so indicated),
no Java scripting variable is created; instead, an EL variable is created.
The jsp:useBean action is quite flexible; its exact semantics depends on the
attributes given. The basic semantic tries to find an existing object using id and
scope. If the object is not found it will attempt to create the object using the other
attributes.
It is also possible to use this action to give a local name to an object defined
elsewhere, as in another JSP page or in a servlet. This can be done by using the type
attribute and not providing class or beanName attributes.
At least one of type and class must be present, and it is not valid to provide both
class and beanName. If type and class are present, class must be assignable to type
(in the Java platform sense). For it not to be assignable is a translation time error.
The attribute beanName specifies the name of a Bean, as specified in the JavaBeans
specification. It is used as an argument to the instantiate method in the
java.beans.Beans class. It must be of the form a.b.c, which may be either a class, or
the name of a resource of the form a/b/c.ser that will be resolved in the current
ClassLoader. If this is not true, a request-time exception, as indicated in the
semantics of the instantiate method will be raised. The value of this attribute can be
a request-time attribute expression.
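This lookup can be observed directly with java.beans.Beans.instantiate. The sketch below uses java.util.ArrayList purely as a stand-in bean class:

```java
import java.beans.Beans;

public class InstantiateDemo {
    public static void main(String[] args) throws Exception {
        // Beans.instantiate(loader, "a.b.c") first looks for a serialized
        // resource a/b/c.ser on the ClassLoader; if none exists, it loads
        // the class a.b.c and calls its public no-args constructor. This is
        // the same lookup jsp:useBean performs for its beanName attribute.
        Object bean = Beans.instantiate(null, "java.util.ArrayList");
        System.out.println(bean.getClass().getName());
    }
}
```

Since no ArrayList.ser resource exists, the call falls through to the no-args constructor and the program prints the class name of the newly created bean.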
The id Attribute
The id="name" attribute/value tuple in a jsp:useBean action has special meaning to
a JSP container, at page translation time and at client request processing time. In
particular:
The name must be unique within the translation unit, and identifies the
particular element in which it appears to the JSP container and page.
Duplicate ids found in the same translation unit shall result in a fatal
translation error.
The JSP container will associate an object (a JavaBean component) with the
named value; the object can be accessed via that name in various contexts through
the PageContext object described later in this specification. The name is also used
to expose a variable (name) in the page's scripting language environment.
The scope of the scripting language variable is dependent upon the scoping
rules and capabilities of the scripting language used in the page.
Note that this implies that the name value syntax must comply with the variable
naming syntax rules of the scripting language used in the page. The JSP specification
provides details for the case where the language attribute is java.
If the object is found, the variable’s value is initialized with a reference to the
located object, cast to the specified type. If the cast fails, a
java.lang.ClassCastException shall occur. This completes the processing of this
jsp:useBean action.
If the object was found and the jsp:useBean action had a non-empty body, the
body is ignored. This completes the processing of this jsp:useBean action.
If the object is not found in the specified scope and neither class nor
beanName are given, a java.lang.InstantiationException shall occur. This
completes the processing of this jsp:useBean action.
If the object is not found in the specified scope, and the class specified names
a non-abstract class that defines a public no-args constructor, then the class
is instantiated. The new object reference is associated with the scripting
variable and with the specified name in the specified scope using the
appropriate scope-dependent association mechanism (see PageContext).
After this, the body-processing step below is performed. If the object is not
found, and the class is abstract, an interface, or no public no-args constructor
is defined therein, then a java.lang.InstantiationException shall occur. This
completes the processing of this jsp:useBean action.
If the object is not found in the specified scope, and beanName is given, then
the method instantiate of java.beans.Beans will be invoked with the
ClassLoader of the servlet object and the beanName as arguments. If the
method succeeds, the new object reference is associated with the
scripting variable and with the specified name in the specified scope using the
appropriate scope-dependent association mechanism (see PageContext). After
this, the body-processing step below is performed.
If the jsp:useBean action has a non-empty body, the body is processed. The
variable is initialized and available within the scope of the body. The text of
the body is treated as elsewhere. Any template text will be passed through to
the out stream. Scriptlets and action tags will be evaluated. A common use of
a non-empty body is to complete initializing the created instance. In that case
the body will likely contain jsp:setProperty actions and scriptlets that are
evaluated. This completes the processing of this useBean action.
Examples
In the following example, a Bean with name connection of type
mycom.myapp.Connection is available after actions on this element, either because it
was already created and found, or because it is newly created.
In the final example, the object should have been present in the session. If so, it is
given the local name wombat with WombatType. A ClassCastException may be raised
if the object is of the wrong class, and an InstantiationException may be raised if the
object is not defined.
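Reconstructed from the descriptions above (the class and type names come from the surrounding text), the two examples would look roughly like this:

```jsp
<%-- first example: find or create a bean named connection --%>
<jsp:useBean id="connection" class="mycom.myapp.Connection" />

<%-- final example: give a local name to an object expected in the session --%>
<jsp:useBean id="wombat" type="WombatType" scope="session" />
```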
Syntax
This action may or may not have a body. If the action has a body, the body will be
invoked if the Bean denoted by the action is newly created. Typically, the body will
contain either scriptlets or jsp:setProperty tags that will be used to modify the newly
created object, but the contents of the body are not restricted.
<jsp:setProperty>
The jsp:setProperty action sets the values of properties in a bean. The name
attribute that denotes the bean must be defined before this action appears. There
are two variants of the jsp:setProperty action. Both variants set the values of one or
more properties in the bean based on the type of the properties. The usual bean
introspection is done to discover what properties are present, and, for each, its
name, whether it is simple or indexed, its type, and the setter and getter methods.
Introspection also indicates if a given property type has a PropertyEditor class.
Properties in a Bean can be set from one or more parameters in the request object,
from a String constant, or from a computed request-time expression. Simple and
indexed properties can be set using jsp:setProperty.
When assigning values to indexed properties the value must be an array; the rules
described in the previous paragraph apply to the actions. A conversion failure leads
to an error, whether at translation time or request time.
Examples
The following two actions set a value from the request parameter values.
<jsp:setProperty name="request" property="*" />
<jsp:setProperty name="user" property="user" param="username" />
<jsp:getProperty>
The <jsp:getProperty> action places the value of a bean instance property,
converted to a String, into the implicit out object, from which the value can be
displayed as output. The bean instance must be defined as indicated in the name
attribute before this point in the page (usually via a jsp:useBean action). The
conversion to String is done as in the println methods, i.e. the toString
method of the object is used for Object instances, and the primitive types are
converted directly.
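For instance, assuming a user bean has already been introduced on the page (the bean class and property name here are illustrative), its name property could be written into the output as:

```jsp
<jsp:useBean id="user" class="mycom.myapp.User" />
<jsp:getProperty name="user" property="name" />
```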
jsp:setProperty Attributes
Name - The name of a bean instance defined by a <jsp:useBean> action or some
other action. The bean instance must contain the property to be set. The defining
action must appear before the <jsp:setProperty> action in the same file.
Property - The name of the property whose value will be set. If property is set
to * then the tag will iterate over the current ServletRequest parameters, matching
parameter names and value type(s) to property names and setter method type(s),
setting each matched property to the value of the matching parameter. If a
parameter has a value of "", the corresponding property is not modified.
Param - The name of the request parameter whose value is given to a bean property.
The name of the request parameter usually comes from a web form. If param is
omitted, the request parameter name is assumed to be the same as the bean
property name. If the param is not set in the Request object, or if it has the value
"", the jsp:setProperty action has no effect.
Value - The value to assign to the given property. This attribute can accept a
request-time attribute expression as a value. An action may not have both param
and value attributes.
The value of the name attribute in jsp:setProperty and jsp:getProperty will refer to
an object that is obtained from the pageContext object through its findAttribute
method. The object named by the name must have been “introduced” to the JSP
processor using either the jsp:useBean action or a custom action with an associated
VariableInfo entry for this name. If the object was not introduced in this manner, the
container implementation is recommended (but not required) to raise a translation
error.
Note – A consequence of the previous paragraph is that objects that are stored in,
say, the session by a front component are not automatically visible to jsp:setProperty
and jsp:getProperty actions in that page unless a jsp:useBean action, or
some other action, makes them visible.
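A sketch of the pattern the note describes: a jsp:useBean with session scope introduces a front component's object so the property actions can see it (the Cart type and total property are hypothetical):

```jsp
<%-- cart was placed in the session by a servlet; the type attribute
     makes it visible to the property actions on this page --%>
<jsp:useBean id="cart" type="mycom.myapp.Cart" scope="session" />
<jsp:getProperty name="cart" property="total" />
```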
If the JSP processor can ascertain that there is an alternate way guaranteed to
access the same object, it can use that information. For example it may use a
scripting variable, but it must guarantee that no intervening code has invalidated the
copy held by the scripting variable. The truth is always the value held by the
pageContext object.
<jsp:include>
A <jsp:include .../> action provides for the inclusion of static and dynamic resources
in the same context as the current page. Inclusion is into the current value of out.
The resource is specified using a relativeURLspec that is interpreted in the context of
the web application (i.e. it is mapped).
The page attribute of both the jsp:include and the jsp:forward actions are interpreted
relative to the current JSP page, while the file attribute in an include directive is
interpreted relative to the current JSP file. See below for some examples of
combinations of this.
An included page cannot change the response status code or set headers. This
precludes invoking methods like setCookie. Attempts to invoke these methods will be
ignored. The constraint is equivalent to the one imposed on the include method of
the RequestDispatcher class.
A jsp:include action may have jsp:param subelements that can provide values for
some parameters in the request to be used for the inclusion. Request processing
resumes in the calling JSP page, once the inclusion is completed.
The flush attribute controls flushing. If the page output is buffered and the flush
attribute is given a true value, the buffer is flushed prior to the inclusion; otherwise
the buffer is not flushed. The default value for the flush attribute is false.
Examples
<jsp:include page="/templates/copyright.html"/>
The above example is a simple inclusion of an object. The path is interpreted in the
context of the Web Application. It is likely a static object, but it could be mapped
into, for instance, a servlet via web.xml. For an example of a more complex set of
inclusions, consider the following four situations built using four JSP files: A.jsp,
C.jsp, dir/B.jsp and dir/C.jsp:
A.jsp says <%@ include file="dir/B.jsp"%> and dir/B.jsp says <%@ include
file="C.jsp"%>. In this case the relative specification C.jsp resolves to
dir/C.jsp.
A.jsp says <jsp:include page="dir/B.jsp"/> and dir/B.jsp says <jsp:include
page="C.jsp" />. In this case the relative specification C.jsp resolves to dir/
C.jsp.
A.jsp says <jsp:include page="dir/B.jsp"/> and dir/B.jsp says <%@ include
file="C.jsp" %>. In this case the relative specification C.jsp resolves to
dir/C.jsp.
A.jsp says <%@ include file="dir/B.jsp"%> and dir/B.jsp says <jsp:include
page="C.jsp"/>. In this case the relative specification C.jsp resolves to C.jsp.
<jsp:forward>
A <jsp:forward page="urlSpec" /> action allows the runtime dispatch of the current
request to a static resource, a JSP page, or a Java servlet class in the same context
as the current page. The URL is a relative urlSpec, as described in the JSP
specification.
Relative paths are interpreted relative to the current JSP page. The page attribute
accepts a request-time attribute value (which must evaluate to a String that is a
relative URL specification). The flush attribute is an optional boolean attribute; if its
value is true, the buffer is flushed. The default value is false.
A jsp:forward effectively terminates the execution of the current page.
The request object will be adjusted according to the value of the page attribute.
A jsp:forward action may have jsp:param subelements that can provide values for
some parameters in the request to be used for the forwarding.
If the page output is buffered and the buffer was flushed, an attempt to forward the
request will result in an IllegalStateException. If the page output was unbuffered and
anything has been written to it, an attempt to forward the request will result in an
IllegalStateException.
Examples
The following action might be used to forward to a static page based on some
dynamic condition.
<% String whereTo = "/Trendz/" + someValue; %>
<jsp:forward page='<%= whereTo %>' />
Syntax
<jsp:forward page="relativeURLspec" />
and
<jsp:forward page="urlSpec">
{ <jsp:param .... /> }*
</jsp:forward>
<jsp:param>
The jsp:param element is used to provide key/value information. This element is
used in the jsp:include, jsp:forward, and jsp:params elements. A translation error
shall occur if the element is used elsewhere.
When doing jsp:include or jsp:forward, the included page or forwarded page will see
the original request object, with the original parameters augmented with the new
parameters, with new values taking precedence over existing values when applicable.
The scope of the new parameters is the jsp:include or jsp:forward call; i.e., in the
case of a jsp:include the new parameters (and values) will not apply after the
include. This is the same behavior as in the ServletRequest include and forward
methods.
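The precedence rule can be sketched in plain Java. This is an illustrative model only, not container code; the map layout mirrors the String-name-to-String-array shape of ServletRequest.getParameterValues, and the class and method names are invented for the example:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ParamPrecedence {
    // Illustrative model of how jsp:param values augment the original
    // request parameters: new values come first, originals are kept.
    static Map<String, String[]> augment(Map<String, String[]> original,
                                         Map<String, String[]> added) {
        Map<String, String[]> merged = new HashMap<>(original);
        added.forEach((name, newValues) ->
            merged.merge(name, newValues, (oldValues, incoming) -> {
                // Map.merge calls this as (existing value, supplied value);
                // put the newly supplied values first, then the old ones.
                String[] all = Arrays.copyOf(incoming, incoming.length + oldValues.length);
                System.arraycopy(oldValues, 0, all, incoming.length, oldValues.length);
                return all;
            }));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String[]> request = new HashMap<>();
        request.put("city", new String[] { "Oslo" });
        Map<String, String[]> params = new HashMap<>();
        params.put("city", new String[] { "Paris" });
        String[] values = augment(request, params).get("city");
        System.out.println(values[0]);     // the new value takes precedence
        System.out.println(values.length); // the original value is still present
    }
}
```

After the include or forward returns, the container discards the augmented view and the original parameters apply again.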
Syntax
This action has two mandatory attributes: name and value. name indicates the name
of the parameter, and value, which may be a request-time expression, indicates its
value.
<jsp:plugin>
The plugin action enables a JSP page author to generate HTML that contains the
appropriate client browser dependent constructs (OBJECT or EMBED) that will result
in the download of the Java Plugin software (if required) and subsequent execution of
the Applet or JavaBeans component specified therein.
Examples
<jsp:plugin type="applet" code="MyApplet.class" codebase="/html" >
<jsp:params>
<jsp:param name="Trendz" value="People Committed to Quality"/>
</jsp:params>
</jsp:plugin>
In addition to the standard actions, JSP v1.1 technology supports the development of
reusable modules called custom actions. A custom action is invoked by using a
custom tag in a JSP page. A tag library is a collection of custom tags.
Some examples of tasks that can be performed by custom actions include form
processing, accessing databases and other enterprise services such as email and
directories, and flow control. Before the availability of custom actions, JavaBeans
components in conjunction with scriptlets were the main mechanism for performing
such processing. The disadvantage of using this approach is that it makes JSP pages
more complex and difficult to maintain.
Custom actions alleviate this problem by bringing the benefits of another level of
componentization to JSP pages. Custom actions encapsulate recurring tasks so that
they can be reused across more than one application and increase productivity by
encouraging division of labor between library developers and library users. JSP tag
libraries are created by developers who are proficient at the Java programming
language and expert in accessing data and other services. JSP tag libraries are used
by Web application designers who can focus on presentation issues rather than being
concerned with how to access databases and other enterprise services.
Custom actions have several distinctive features:
• They can be customized via attributes passed from the calling page.
• They have access to all the objects available to JSP pages.
• They can modify the response generated by the calling page.
• They can communicate with each other. You can create and initialize a
JavaBeans component, create a variable that refers to that bean in one tag,
and then use the bean in another tag.
• They can be nested within one another, allowing for complex interactions
within a JSP page.
The uri attribute refers to a URI that uniquely identifies the tag library. This URI can
be relative or absolute. If it is relative it must be mapped to an absolute location in
the taglib element of a Web application deployment descriptor, the configuration file
associated with Web applications developed according to the Java Servlet and
JavaServer Pages specifications. The prefix attribute defines the prefix that
distinguishes tags provided by a given tag library from those provided by other tag
libraries.
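A page that uses a tag library declares it with the taglib directive; for example (the URI shown here is illustrative, and the tlt prefix matches the snippets below):

```
<%@ taglib uri="/WEB-INF/tlt.tld" prefix="tlt" %>
```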
JSP custom actions are expressed using XML syntax. They have a start tag and end
tag, and possibly a body:
<tlt:tag>
body
</tlt:tag>
A tag with no body can be expressed as follows:
<tlt:tag />
Simple Tags
The following simple tag invokes an action that creates a greeting:
<tlt:greeting />
Tag attributes can be set from one or more parameters in the request object or from
a String constant. The only types of attributes that can be set from request
parameter values and String constants are those listed in Table 1; the conversion
applied is that shown in the table. When assigning values to indexed attributes the
value must be an array; the rules just described apply to the elements.
The following tag has an attribute named date, which accepts a String value obtained
by evaluating the variable today:
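A sketch of such a tag, reusing the greeting tag from the earlier example (the date attribute is hypothetical, shown only to illustrate a request-time attribute value):

```
<tlt:greeting date='<%= today %>' />
```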
By working with a consortium of industry leaders, Sun has ensured that the JSP
specification is open and portable. You should be able to author JSP pages anywhere
and deploy them anywhere, using any client and server platforms. Over time, tool
vendors and others will extend the functionality of the platform by providing
customized tag libraries for specialized functions.
JNDI
• Naming & Directory Service
• Installing OpenLDAP
• Accessing Naming Service
• Accessing Directory Service
Java Transaction API
• Transaction Service
• Bean managed Transaction
Java Mail API
• Java Mail & JAF
• Sample Application
EJB
• Overview
• Remote & Home Interfaces
• Entity Beans with CMP and BMP
• Session Bean as Stateless and Stateful
• EJB 2.0 Features
• EJB QL
• Message-Driven Bean
• EJB & WebService
J2EE Design Pattern
• Model-view-Controller
• What is Design Pattern
• Helpful Hints
JAVA 2 EDITIONS
J2EE Architecture
The J2EE platform uses a multi-tiered distributed application model for enterprise
applications.
• Application logic is divided into "components" according to function, and the
various application components that make up a J2EE application are installed on
different machines depending on the tier in the multi-tiered J2EE environment to
which the application component belongs.
• J2EE multi-tiered applications are generally considered to be three-tiered
applications because they are distributed over three different locations:
o Client machines
o The J2EE server machine
o The database or legacy machines at the back end
• Three-tiered applications that run in this way extend the standard two-tiered
client and server model by placing a multithreaded application server between
the client application and back-end storage.
Each naming service has its own rules for making valid names. For example, the
rules for valid filenames in Linux are different from the rules in Windows.
Objects in some naming services cannot be stored directly inside the naming service.
Instead, the name service stores pointers or references to objects. A reference
contains an address, that is, specific information on how to access the object itself.
A Context
In a naming service, obviously you have more than one name-to-object binding. The
set of bindings is called a context. There are two types of contexts: root and
subcontext. A root context is the base name of an object. In a file system, the root
context is the base from which all other directories and files are stored. In the Unix
file system, the root context is /. Under Windows it is normally C:\.
A subcontext is a name that adds another level to the root context. For example, a
directory, such as ‘usr’ under / in a Unix file system, is a subcontext. In the Unix
system, this subcontext is called a subdirectory. That is, in a directory, /usr, the
directory usr is a subcontext of /. In another example, a DNS domain, like COM or
NET, is a context. A DNS domain named relative to another DNS domain is a
subcontext. For example, in the DNS domain brainysoftware.com, the DNS domain
brainysoftware is a subcontext of COM.
Going back to the Unix file system, it's not just a naming service but also a directory
service. Each file can have attributes like owner and date. In real world applications,
a directory object in a directory server can be used to represent anything: a printer,
a computer, a network, or even a person in an organization.
Attributes
An attribute of a directory object is a property of the object. For example, a person
can have the following attributes: Last name, First name, User name, email address,
Telephone number, and so on. A printer can have attributes like Resolution, Color,
and Speed.
An attribute has an identifier, which is a unique name in that object. Each attribute
can have one or more values. For instance, a person object can have an attribute
called LastName. The LastName is the identifier of an attribute. An attribute value is
the content of the attribute. For example, the LastName attribute can have a value
like "Martin".
A directory service also provides a search capability: you supply a query in which
you specify the attributes that the object or objects must have. The query is called
a search filter. This style of searching is sometimes called reverse lookup or
content-based searching. The directory service searches for and returns the objects
that satisfy the search filter.
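For example, an LDAP search filter that matches every person entry whose surname is Jensen looks like this:

```
(&(objectclass=person)(sn=Jensen))
```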
LDAP
Directory services are very common these days, and a plethora of directory service
implementations already exists.
Accessing a directory service and manipulating its objects used to be complex and
difficult. The traditional standard is X.500, a set of directory recommendations
specified by the International Telecommunication Union. X.500 was enormous and
complex, which is why the Lightweight Directory Access Protocol (LDAP) emerged
as a simpler alternative.
You probably already have an LDAP-aware client installed on your computer. Many
email clients can access an LDAP directory for email addresses, including Outlook,
Eudora, Netscape Communicator, QuickMail Pro, and Mulberry.
The LDAP naming convention orders components from right to left, delimited by a
comma. LDAP arranges all directory objects in a tree, called a Directory Information
Tree (DIT). Within the DIT, an organization object, for example, might contain group
objects that might in turn contain person objects. When directory objects are
arranged in this way, they play the role of naming contexts in addition to being
attribute containers.
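The right-to-left ordering can be seen with the JDK's own javax.naming.ldap.LdapName class, where index 0 is the rightmost (most significant) component:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;

public class DnOrder {
    public static void main(String[] args) throws InvalidNameException {
        // Components are written left to right but ordered right to left:
        // the country is the most significant component and gets index 0.
        LdapName dn = new LdapName("cn=Babs Jensen,o=University of Michigan,c=US");
        System.out.println(dn.size());  // 3
        System.out.println(dn.get(0));  // c=US
        System.out.println(dn.get(2));  // cn=Babs Jensen
    }
}
```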
Note that the term binding in LDAP is different from its generic directory services
meaning. Binding here refers to the authentication that a user is required to perform
before accessing an entry in the directory.
A number of universities in the US also provide LDAP services to search for students
or staff members. For a list of university public LDAP services, see eMailman's Public
LDAP Servers.
The most popular LDAP server today is iPlanet's Directory Server. Others include
Novell's NDS eDirectory, Critical Path's Global Directory Server, Computer Associates'
eTrust Directory, Siemens' DirX, and Oracle's Oracle Internet Directory. Deciding
which one is best for your situation is often tricky.
NetworkWorld Fusion published a good article last year comparing the performance
of many LDAP servers. If it's to be believed, iPlanet is the best performer and the
fastest; the article concludes that iPlanet's Directory Server is the best choice for
commercial use.
If you only need an LDAP server for testing, you probably want to use something
else. Downloads for the latest version of iPlanet's Directory Server (version 5.0 beta)
range from 53 MB to 78 MB, depending on your operating system. For the project in
this article, I chose the much slimmer LDAP server from OpenLDAP. Even though not
the fastest, their free product is only a 1.52 MB download. OpenLDAP's products are
only available for Linux; but once you have seeded it with entries, you can use this
article's project code to access any LDAP server on any operating system.
Installing OpenLDAP
You can download OpenLDAP from the project's site. The LDAP server is called slapd
(a stand-alone LDAP server). The latest version of slapd is 2.0.7. Other programs
downloadable from the Web site are the replication server, some libraries, and a
variety of tools.
To install slapd, you first need to download openldap-2_0_7.tgz into the /usr/local/
directory of a Linux system. You can use another directory, but you'll need to make
some adjustments to the following instructions.
Next, run
./configure
Then run the following commands:
make depend
make
make test
If everything goes smoothly, you are now ready to install, for which you'll need
root access. Run
su root -c 'make install'
Configuring slapd
If the installation went as expected, you are now ready to configure slapd. The
configuration file is called slapd.conf and can be found in the /usr/local/etc/openldap/
directory. Open this file with your favorite text editor and look for the following lines:
database ldbm
suffix "dc=<MY-DOMAIN>,dc=<COM>"
rootdn "cn=Manager,dc=<MY-DOMAIN>,dc=<COM>"
rootpw secret
directory /usr/local/var/openldap-ldbm
You need to edit the <MY-DOMAIN> and the <COM> parts to reflect your domain
name. Using the correct names ensures that your LDAP server can be accessed from
the Internet.
For example, for the brainysoftware.com domain, the configuration lines will look
like this:
database ldbm
suffix "dc=brainysoftware,dc=com"
rootdn "cn=Manager,dc=brainysoftware,dc=com"
rootpw secret
directory /usr/local/var/openldap-ldbm
If your domain contains additional components -- like sandal.jepit.edu.au -- do
something like
database ldbm
suffix "dc=sandal,dc=jepit,dc=edu,dc=au"
rootdn "cn=Manager,dc=sandal,dc=jepit,dc=edu,dc=au"
rootpw secret
directory /usr/local/var/openldap-ldbm
The fourth line (rootpw secret) contains the root password that you need to supply
to the server to make changes to the entries and perform certain other functions.
Running slapd
Running slapd requires root access, so run
su root -c /usr/local/libexec/slapd
or
/usr/local/libexec/slapd
if you're already logged in as root.
To check that the server is running and configured correctly, you can search it with
ldapsearch. By default, ldapsearch is installed as /usr/local/bin/ldapsearch. Use the
following command:
ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
A directory schema specifies, among other things, the types of objects that a
directory may have and the attributes that are mandatory and optional to that
object. A directory schema also contains attribute type definitions, object class
definitions, and other information, which a server uses to determine how to match a
filter or attribute value assertion against the attributes of an entry, and whether to
permit add and modify operations.
The LDAP v3 schema is based on the X.500 standard for common objects found in a
network like countries, localities, organizations, users/persons, groups and devices.
All LDAP entries in the directory are typed. Each entry belongs to object classes that
identify the type of data represented by the entry. The object class specifies the
mandatory and optional attributes that can be associated with an entry of that class.
The object classes for all objects in the directory form a class hierarchy. The classes
top and alias are at the root of the hierarchy. For example, the organizationalPerson
object class is a subclass of the Person object class, which in turn is a subclass of
top. When creating a new LDAP entry, you must always specify all of the object
classes to which the new entry belongs. Because many directories do not support
object class subclassing, you also should always include all of the superclasses of the
entry.
An object class can be one of three types:
• Structural. Indicates the attributes that the entry may have and where each
entry may occur in the DIT.
• Auxiliary. Indicates the attributes that the entry may have.
• Abstract. Indicates a "partial" specification in the object class hierarchy; only
structural and auxiliary subclasses may appear as entries in the directory.
For example, for an organizationalPerson object, you should list in its object classes
the organizationalPerson, person, and top classes. The organizationalPerson, person,
and top objects are listed as the following entries in the core.schema file.
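Abridged, the top and person definitions in OpenLDAP's core.schema read as follows; the organizationalPerson definition follows the same pattern, with SUP person and a longer MAY list (omitted here for brevity):

```
objectclass ( 2.5.6.0 NAME 'top'
    DESC 'top of the superclass chain'
    ABSTRACT
    MUST objectClass )

objectclass ( 2.5.6.6 NAME 'person'
    DESC 'RFC2256: a person'
    SUP top STRUCTURAL
    MUST ( sn $ cn )
    MAY ( userPassword $ telephoneNumber $ seeAlso $ description ) )
```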
LDAP v3 specifies that each directory entry may contain an operational attribute that
identifies its subschema subentry. A subschema subentry contains the schema
definitions for the object classes and attribute type definitions used by entries in a
particular part of the directory tree. If a particular entry does not have a subschema
subentry, then the subschema subentry of the root DSE, which is named by the
empty DN, is used. For more information about the schema, refer to RFCs 2252 and
2256.
Adding Entries
Adding entries to the server is the first thing you should do. To add entries to slapd,
you use ldapadd, which reads the content of an ldif file, checks the validity of its
entries, and adds the entries to the server if the entries are correct.
To add entries to the LDAP server, you need to pass the domain name and the
password for the root user. For example, with the following command you pass the
domain name (sendal.jepit.edu.au) and the password (secret) and the example.ldif
containing the entries to be added.
ldapadd -x -D "cn=Manager,dc=sendal,dc=jepit,dc=edu,dc=au" -w secret -f
example.ldif
The argument list of ldapadd can be displayed by typing ldapadd with no arguments.
In an LDIF file, a value that begins with a space or contains other special characters
is base-64 encoded and written after a double colon, for example:
cn:: IGJlZ2lucyB3aXRoIGEgc3BhY2U=
Blank lines separate multiple entries within the same LDIF file.
Here is an example of an LDIF file containing three entries.
dn: cn=Barbara J Jensen, o=University of Michigan, c=US
cn: Barbara J Jensen
cn: Babs Jensen
objectclass: person
sn: Jensen
dn: cn=Bjorn J Jensen, o=University of Michigan, c=US
cn: Bjorn J Jensen
cn: Bjorn Jensen
objectclass: person
sn: Jensen
dn: cn=Jennifer J Jensen, o=University of Michigan, c=US
cn: Jennifer J Jensen
cn: Jennifer Jensen
objectclass: person
sn: Jensen
jpegPhoto:: /9j/4AAQSkZJRgABAAAAAQABAAD/2wBDABALD
A4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQ
ERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVG ...
Notice that the jpegPhoto in Jennifer Jensen's entry is encoded in base 64.
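A double colon in LDIF (as in the cn:: and jpegPhoto:: lines above) marks a base-64 encoded value; such a value can be decoded with the JDK's java.util.Base64 class:

```java
import java.util.Base64;

public class LdifDecode {
    public static void main(String[] args) {
        // The encoded cn value from the LDIF example above
        String encoded = "IGJlZ2lucyB3aXRoIGEgc3BhY2U=";
        String decoded = new String(Base64.getDecoder().decode(encoded));
        // Brackets make the leading space visible
        System.out.println("[" + decoded + "]"); // [ begins with a space]
    }
}
```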
JNDI is organized into the following packages:
• javax.naming
• javax.naming.directory
• javax.naming.event
• javax.naming.ldap
• javax.naming.spi
For the project in this article you only need the javax.naming and
javax.naming.directory packages.
JNDI is included in version 1.3 of Java 2 SDK. If you are using this version, you are
in luck. For users of JDK 1.1 and Java 2 SDK version 1.2, the JNDI can be
downloaded and installed separately. In the Java 2 SDK, version 1.3, you can find
service providers for the following services:
• LDAP
• CORBA Common Object Service (COS) Name Service
• Java Remote Method Invocation (RMI) Registry.
If you are using an older version of Java, you must first download the JNDI as a
Standard Extension on the JDK 1.1 and Java 2 SDK, version 1.2.
You must also download one or more service providers. These service providers act
like JDBC drivers for database access.
To obtain the initial context, you call the InitialContext() constructor, passing all the
necessary environment information in a Hashtable object:
Hashtable env = new Hashtable();
Into the Hashtable, you then put the service provider's initial context factory. For
example, if you are using the file system service provider from Sun, this is the line
of code you need.
env.put(Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.fscontext.RefFSContextFactory");
The file system service provider is available as a separate download from Sun. If
you are using a different service provider, replace put()'s second argument.
Another important environment property that you need to get the initial context is
the PROVIDER_URL. This property is assigned the location of the initial context. This
could be a URL on the Internet or it could just be a directory in a file system. For
instance, if you decide that your initial context when accessing a Unix file system is
the /usr/local directory, then you need the following line of code.
env.put(Context.PROVIDER_URL, "file:/usr/local");
Or, on a Windows system, if you want the C:\data directory to be the initial context,
your code would look like the following.
env.put(Context.PROVIDER_URL, "file:C:\\data");
And, optionally, you can also put the user credentials such as the username and
password.
env.put(Context.SECURITY_PRINCIPAL, "james");
env.put(Context.SECURITY_CREDENTIALS, "secret");
Having the environment information ready, you can now create the initial context.
Context ctx = new InitialContext(env);
If the object is created successfully, you can use the resulting Context object to
access the naming service. The lookup method of the Context interface can be used
to retrieve an object by passing its name.
For example, the following code prepares an environment Hashtable object, creates
an initial context, and retrieves the info.txt file.
import java.util.Hashtable;
import javax.naming.*;
import java.io.File;

public class LookupFile {
  public static void main(String[] args) {
    Hashtable env = new Hashtable();
    env.put(Context.INITIAL_CONTEXT_FACTORY,
        "com.sun.jndi.fscontext.RefFSContextFactory");
    env.put(Context.PROVIDER_URL, "file:/usr/local");
    try {
      Context ctx = new InitialContext(env);
      File f = (File) ctx.lookup("info.txt");
    }
    catch (NamingException e) {
      System.out.println(e.toString());
    }
  }
}
The Object returned by the lookup method is cast to a File object. If the object were
a Printer, you would cast it to Printer in the same way.
Some of the code is in a try-catch wrapper because many methods in the JNDI
packages can throw a NamingException.
• bind() -- Binds an object to a name. After the binding, you can retrieve the
object by looking up the name.
• rebind() -- Adds or replaces a binding. If the name is already bound to an
object, it will be unbound and bound with the new object specified as the
argument of this method.
• unbind() -- Removes a binding.
• list() -- Enumerates the names bound in the named context, along with the
class names of objects bound to them.
Every naming method in the Context interface has two overloads: one that accepts a
Name argument and one that accepts a java.lang.String name. Name is an interface
that represents a generic name; an ordered sequence of zero or more components.
The overloads that accept Name are useful for applications that need to manipulate
names, that is, composing them, comparing components, and so on.
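The Name interface can be explored with javax.naming.CompositeName, a concrete implementation shipped with the JDK:

```java
import javax.naming.CompositeName;
import javax.naming.InvalidNameException;
import javax.naming.Name;

public class NameDemo {
    public static void main(String[] args) throws InvalidNameException {
        // A composite name is an ordered sequence of components,
        // indexed left to right starting from 0
        Name name = new CompositeName("usr/local/info.txt");
        System.out.println(name.size());        // 3
        System.out.println(name.get(0));        // usr
        System.out.println(name.getSuffix(1));  // local/info.txt
    }
}
```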
env.put(Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.ldap.LdapCtxFactory");
If you are using a service provider from another vendor, just replace the second
argument to put(). Next, you supply the location of the service. For example, the
following specifies a location of an LDAP server at ldap://sendal.jepit.edu.au:389
(389 is the default port for the LDAP service).
env.put(Context.PROVIDER_URL,
"ldap://sendal.jepit.edu.au:389");
You can then acquire an initial context by passing the environment Hashtable.
However, unlike accessing a naming system, you use the DirContext interface instead
of the Context interface.
Having a DirContext object, you can access the directory service using the methods
of the DirContext interface; the important methods of which include getAttributes,
getSchema and search.
For convenience, the person object in the core.schema file is re-presented here.
The person object has two mandatory attributes: sn and cn, and four optional
attributes:
• userPassword
• telephoneNumber
• seeAlso
• description
This reads the example.ldif file and inserts its content as entries into the server; the
example.ldif file contains the entries to be added.
Make sure that you have installed the correct service provider and your CLASSPATH
variable contains the path to the JNDI packages.
The Code
The code for the white pages service is given in Listing 1. The Java code allows you
to access the LDAP server and search for a person or persons by passing a surname.
The code starts by preparing an environment Hashtable object and setting the
necessary properties for the environment.
env.put(Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL,
"ldap://sendal.jepit.edu.au:389");
And then, as explained above, you need a DirContext object as the initial context,
which is done by calling the InitialDirContext constructor, passing the environment
Hashtable:
DirContext ctx = new InitialDirContext(env);
Once you have a DirContext object, you can use it to access the LDAP service. To
start searching, use the search method by passing a SearchControls object.
Then, display the search result, i.e., the attributes of all the person objects that
match the search criteria.
For each person object found, you use the getAttributes method to retrieve the
object's attributes. This method returns the Attributes object. You can then use the
get method of the Attributes object to obtain the value of an attribute by passing the
attribute name.
attributes.get( attributeName );
The part of the code that displays the attribute names of the person objects found
is sketched below, assuming the search results are held in a NamingEnumeration
named results.
while (results.hasMore()) {
SearchResult sr = (SearchResult) results.next();
Attributes attributes = sr.getAttributes();
System.out.println(attributes.get("cn"));
} // end of while
If you run the code in Listing 1, the attributes of the matching person entries are
printed to the console.
JTA/XA Transactions
A transaction managed and coordinated by the J2EE platform is a JTA or XA
transaction. A J2EE product is required to support JTA transactions according to the
transaction requirements defined in the J2EE specification. There are two ways to
begin a JTA transaction. A component can begin a JTA transaction explicitly using the
JTA javax.transaction.UserTransaction interface or it can also be started implicitly or
automatically by the EJB container if an EJB bean uses container managed
transaction specification. The main benefit of using JTA transactions is the ability to
seamlessly combine multiple application components and RDBMS/EIS accesses into
a single transaction with little coding effort. For example, if a component X
begins a JTA transaction and invokes a method of component Y, the transaction will
be propagated transparently from component X to Y by the platform. Enterprise
beans using container-managed transaction demarcation will not need to begin or
commit transactions programmatically as the EJB container itself handles the
demarcation automatically. It is always recommended to access an RDBMS or EIS
within the scope of a JTA transaction. JTA allows applications to access transaction
management independent of any specific implementation by specifying standard Java
interfaces between a transaction manager, the transactional application, the J2EE
server, and the resource managers.
A web component quite commonly needs to access enterprise information systems
under the scope of a JTA transaction. The code snippet below illustrates the use of
the JTA interface to specify transactions within a web component:
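A sketch of such a snippet, assuming the component looks up the container-provided UserTransaction at its standard JNDI name (this code only runs inside a J2EE container):

```
Context ctx = new InitialContext();
UserTransaction ut = (UserTransaction)
    ctx.lookup("java:comp/UserTransaction");
ut.begin();
// perform JDBC/EIS work within the transaction
ut.commit();
```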
It is important to keep in mind that a web component like a servlet may only start a
transaction in its service method. Moreover, a transaction started by a servlet or a
JSP page must be completed before the service method returns. Transactions cannot
span across web requests. The following guidelines are recommended for handling
interactions in web components between JTA transactions, threads, and JDBC
connections.
• JTA transactions should start and complete only from the thread in which the
service method is called. Additional threads created in the servlet should not
attempt to start any JTA transaction.
• JDBC connections may be acquired and released by a thread other than the
service method thread, but should not be shared between threads.
• JDBC Connection objects should not be stored in static fields.
• For web components implementing the SingleThreadModel, JDBC Connection
objects may be stored in class instance fields.
• For web components (Servlets) not implementing the SingleThreadModel,
JDBC Connection objects should not be stored in class instance fields and
should be acquired and released within the same invocation of the service
method.
When EJB A invokes EJB B, the two application servers cooperate to propagate the
transaction context from A to B. This transaction context propagation is transparent
to the application. At commit time, the two application servers use a distributed
two-phase commit protocol (if the capability exists) to ensure the atomicity of the
database updates and the transaction.
UserTransaction ut = ejbContext.getUserTransaction();
ut.begin();
// Transactional work is done here
ut.commit();
The following example illustrates a business method of a typical session bean that
performs a bean-managed transaction involving both a database connection and a
JMS connection.
javax.sql.DataSource ds;
java.sql.Connection dcon;
java.sql.Statement stmt;
javax.jms.QueueConnectionFactory qcf;
javax.jms.QueueConnection qcon;
javax.jms.Queue q;
javax.jms.QueueSession qsession;
javax.jms.QueueSender qsender;
javax.jms.Message message;
InitialContext initCtx = new InitialContext();
// obtain db conn object and set it up for transactions
ds = (javax.sql.DataSource)
initCtx.lookup("java:comp/env/jdbc/Database");
dcon = ds.getConnection();
stmt = dcon.createStatement();
// obtain jms conn object and set up session for transactions
qcf = (javax.jms.QueueConnectionFactory)
initCtx.lookup("java:comp/env/jms/qConnFactory");
qcon = qcf.createQueueConnection();
qsession = qcon.createQueueSession(true,0);
q = (javax.jms.Queue)
initCtx.lookup("java:comp/env/jms/jmsQueue");
qsender = qsession.createSender(q);
message = qsession.createTextMessage();
message.setText("some message");
//
// Now do a transaction that involves the two connections.
//
UserTransaction ut = ejbContext.getUserTransaction();
// start the transaction
ut.begin();
// Do database updates and send message. The Container
// automatically enlists dcon and qsession with the
// transaction.
stmt.executeQuery(...);
stmt.executeUpdate(...);
stmt.executeUpdate(...);
qsender.send(message);
// commit the transaction
ut.commit();
// release connections
stmt.close();
qsender.close();
qsession.close();
dcon.close();
qcon.close();
}
...
}
The following scenario illustrates a stateful session bean that retains transaction
context across three client calls, invoked in the order {method1, method2,
method3}.
It is possible for an enterprise bean to open and close a database connection in each
business method rather than hold the connection open until the end of transaction. If
the client in the following example executes the sequence of methods {method1,
method2, method2, method3}, all the database updates done by the multiple
invocations of method2 are performed in the scope of the same transaction. This is
the transaction started in method1 and committed in method3.
An enterprise bean with bean-managed transaction demarcation need not and should
not use the getRollbackOnly() and setRollbackOnly() methods of the EJBContext
interface since, if necessary, it can obtain the status of a transaction by using the
getStatus() method of the javax.transaction.UserTransaction interface. It can also
roll back a transaction using the rollback() method of the same interface if required.
What does it mean for the client to pass an instance of Money to the server? At a
minimum, it means that the server is able to call public methods on the instance of
Money.
One way to do this would be to implicitly make Money into a server as well. For
example, imagine that the client sends the following two pieces of information
whenever it passes an instance as an argument:
The RMI runtime layer in the server can use this information to construct a stub for
the instance of Money, so that whenever the Account server calls a method on what
it thinks of as the instance of Money, the method call is relayed over the wire, as
shown in Figure 10-2.
• You can't access fields on the objects that have been passed as
arguments.
• Stubs work by implementing an interface. They implement the methods in the
interface by simply relaying the method invocation across the network. That
is, the stub methods take all their arguments and simply marshall them for
transport across the wire. Accessing a public field is really just dereferencing
a pointer--there is no method invocation and hence, there isn't a method call
to forward over the wire.
• It can result in unacceptable performance due to network latency.
Even in our simple case, the instance of Account is going to need to call
getCents( ) on the instance of Money. This means that a simple call to
makeDeposit( ) really involves at least two distinct networked method calls:
makeDeposit( ) from the client and getCents( ) from the server.
• It makes the application much more vulnerable to partial failure.
Let's say that the server is busy and doesn't get around to handling the
request for 30 seconds. If the client crashes in the interim, or if the network
goes down, the server cannot process the request at all. Until all data has
been requested and sent, the application is particularly vulnerable to partial
failures.
This last point is an interesting one. Any time you have an application that requires a
long-lasting and durable connection between client and server, you build in a point of
failure. The longer the connection needs to last, or the higher the communication
bandwidth the connection requires, the more likely the application is to occasionally
break down.
TIP: The original design of the Web, with its stateless connections,
serves as a good example of a distributed application that can tolerate
almost any transient network failure.
These three reasons imply that what is really needed is a way to copy objects and
send them over the wire. That is, instead of turning arguments into implicit servers,
arguments need to be completely copied so that no further network calls are needed
to complete the remote method invocation. Put another way, we want the result of
makeWithdrawal( ) to involve creating a copy of the instance of Money on the server
side. The runtime structure should resemble Figure 10-3.
server.makeWithdrawal(amount);
....
server.makeDeposit(amount);
The client has no way of knowing whether the server still has a copy of amount. After
all, the server may have used it and then thrown the copy away once it was done.
This means that the client has to marshall amount and send it over the wire to the
server.
The RMI runtime can demarshall amount, which is the instance of Money the client
sent. However, even if it has the previous object, it has no way (unless equals() has
been overridden) to tell whether the instance it just demarshalled is equal to the
previous object.
More generally, if the object being copied isn't immutable, then the server might
change it. In this case, even if the two objects are currently equal, the RMI runtime
has no way to tell if the two copies will always be equal and can potentially be
replaced by a single copy. To see why, consider our Printer example again. At the end
of Chapter 3, we considered a list of possible feature requests that could be made.
One of them was the following:
Now consider what happens when the user actually wants to print two copies of the
same document. The client application could call:
server.printDocument(document);
twice with the "same" instance of DocumentDescription. And it would be an error for
the RMI runtime to create only one instance of DocumentDescription on the server
side. Even though the "same" object is passed into the server twice, it is passed as
parts of distinct requests and therefore as different objects.
TIP: This is true even if the runtime can tell that the two instances of
DocumentDescription are equal when it finishes demarshalling. An
implementation of a printer may well have a notion of a job queue that
holds instances of DocumentDescription. So our client makes the first
call, and the copy of document is placed in the queue (say, at number
5), but not edited because the document hasn't been printed yet. Then
our client makes the second call. At this point, the two copies of
document are equal. However, we don't want to place the same object
in the printer queue twice. We want to place distinct copies in the
printer queue.
Thus, we come to the following conclusion: network latency, and the desire to avoid
vulnerability to partial failures, forces us to have a deep copy mechanism for most
arguments to a remote method invocation. This copying mechanism has to make
deep copies, and it cannot perform any validation to eliminate "extra" copies across
methods.
Using Serialization
Serialization is a mechanism built into the core Java libraries for writing a graph of
objects into a stream of data. This stream of data can then be programmatically
manipulated, and reversing the process can make a deep copy of the objects. This
reversal is often called deserialization.
As a Persistence mechanism
If the stream being used is FileOutputStream, then the data will automatically
be written to a file.
As a Copy mechanism
If the stream being used is ByteArrayOutputStream, then the data will be
written to a byte array in memory. This byte array can then be used to create
duplicates of the original objects.
As a Communication mechanism
If the stream being used comes from a socket, then the data will
automatically be sent over the wire to the receiving socket, at which point
another program will decide what to do.
The important thing to note is that the use of serialization is independent of the
serialization algorithm itself. If we have a serializable class, we can save it to a file or
make a copy of it simply by changing the way we use the output of the serialization
mechanism.
ObjectOutputStream
ObjectOutputStream, defined in the java.io package, is a stream that implements the
"writing-out" part of the serialization algorithm. (RMI actually uses a subclass of
ObjectOutputStream to customize its behavior.) The methods implemented by
ObjectOutputStream can be grouped into three categories: methods that write
information to the stream, methods used to control the stream's behavior, and
methods used to customize the serialization algorithm.
For the most part, these methods should seem familiar. writeFloat( ), for example,
works exactly as you would expect after reading Chapter 1 -- it takes a floating-point
number and encodes the number as four bytes. There are, however, two new
methods here: writeObject() and defaultWriteObject().
Of course, this works seamlessly with the other methods for writing data. That is, if
you wanted to write two floats, a String, and an object to a file, you could do so with
the following code snippet:
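The original listing is not reproduced here, but a minimal sketch of the idea might look like the following (the particular values and the temp-file name are placeholders of my own):

```java
import java.io.*;

public class WriteDemo {
    // Writes two floats, a String, and an object to a file, then reads
    // them back in the same order. The values here are arbitrary.
    public static Object[] roundTrip() throws IOException, ClassNotFoundException {
        File file = File.createTempFile("demo", ".ser");
        file.deleteOnExit();
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeFloat(1.5f);
            out.writeFloat(2.5f);
            out.writeObject("a string");
            out.writeObject(Integer.valueOf(42));
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return new Object[] {
                in.readFloat(), in.readFloat(), in.readObject(), in.readObject()
            };
        }
    }
}
```

Note that the values must be read back in exactly the order in which they were written; the stream itself carries no field names for primitive data.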
Write the nonstatic and nontransient fields of the current class to this
stream. This may only be called from the writeObject method of the
class being serialized. It will throw the NotActiveException if it is called
otherwise.
That is, defaultWriteObject() is a method that works only when it is called from
another specific method at a particular time. Since defaultWriteObject() is useful only
when you are customizing the information stored for a particular class, this turns out
to be a reasonable restriction. We'll talk more about defaultWriteObject() later in the
chapter, when we discuss how to make a class serializable.
These methods are more important to people who tailor the serialization algorithm to
a particular use or develop their own implementation of serialization. As such, they
require a deeper understanding of the serialization algorithm. We'll discuss these
methods in more detail later, after we've gone over the actual algorithm used by the
serialization mechanism.
ObjectInputStream
ObjectInputStream, defined in the java.io package, implements the "reading-in" part
of the serialization algorithm. It is the companion to ObjectOutputStream--objects
serialized using ObjectOutputStream can be deserialized using ObjectInputStream.
Like ObjectOutputStream, the methods implemented by ObjectInputStream can be
grouped into three categories: methods that read information from the stream,
methods that are used to control the stream's behavior, and methods that are used
to customize the serialization algorithm.
This code is the exact inverse of the code we used for serializing the object in the first
place. If we wanted to make a deep copy of a serializable object, we could first
serialize the object and then deserialize it, as in the following code example:
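The example itself did not survive reproduction; a sketch of the standard idiom (the class and method names are my own) is:

```java
import java.io.*;

public class DeepCopy {
    // Deep-copy a serializable object by serializing it into a byte
    // array in memory and then deserializing those same bytes.
    public static Object deepCopy(Serializable original)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream memory = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(memory)) {
            out.writeObject(original);   // serialize to the memory stream
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(memory.toByteArray()))) {
            return in.readObject();      // deserialize from the same bytes
        }
    }
}
```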
This code simply places an output stream into memory, serializes the object to the
memory stream, creates an input stream based on the same piece of memory, and
runs the deserializer on the input stream. The end result is a deep copy of the object
with which we started.
The three new methods are also straightforward. skipBytes( ) skips the indicated
number of bytes in the stream, blocking until all the information has been read. And
the two readFully( ) methods perform a batch read into a byte array, also blocking
until all the data has been read in.
These methods are more important to people who tailor the serialization algorithm to
a particular use or develop their own implementation of serialization. Like before,
they also require a deeper understanding of the serialization algorithm, so I'll hold off
on discussing them right now.
There are four basic things you must do when you are making a class serializable.
They are:
Reasonable people may wonder about the utility of an empty interface. Rather than
define an empty interface, and require class definitions to implement it, why not just
simply make every object serializable? The main reason not to do this is that there
are some classes that don't have an obvious serialization. Consider, for example, an
instance of File, which represents a file. Suppose it was
created using the following line of code:
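The line in question would be something like this (the path comes from the discussion below; the wrapper class is mine):

```java
import java.io.File;

public class FileDemo {
    // A File instance merely wraps a path name; it holds none of the
    // file's contents, which is why its serialization is ambiguous.
    public static File make() {
        return new File("C:\\temp\\foo");
    }
}
```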
It's not at all clear what should be written out when this is serialized. The problem is
that the file itself has a different lifecycle than the serialized data. The file might be
edited, or deleted entirely, while the serialized information remains unchanged. Or
the serialized information might be used to restart the application on another
machine, where "C:\\temp\\foo" is the name of an entirely different file.
The serialization mechanism has a nice default behavior -- if all the instance-level,
locally defined variables have values that are either serializable objects or primitive
data types, then the serialization mechanism will work without any further effort on
our part. For example, our implementations of Account, such as Account_Impl, would
present no problems for the default serialization mechanism:
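The original listing is elided; a hedged sketch of what such a pair of classes might look like (the field and method names are assumptions based on the surrounding text) is:

```java
import java.io.*;

public class AccountDemo {
    // Sketches of the book's Money and Account_Impl classes.
    static class Money implements Serializable {
        private int _cents;
        Money(int cents) { _cents = cents; }
        int getCents() { return _cents; }
        public boolean equals(Object o) {
            return (o instanceof Money) && ((Money) o)._cents == _cents;
        }
        public int hashCode() { return _cents; }
    }

    static class Account_Impl implements Serializable {
        private Money _balance;   // refers to a serializable class, so
                                  // default serialization just works
        Account_Impl(Money balance) { _balance = balance; }
        Money getBalance() { return _balance; }
    }

    // Round-trip an account through the default serialization mechanism.
    public static Account_Impl copy(Account_Impl account) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(account);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Account_Impl) in.readObject();
        }
    }
}
```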
While _balance doesn't have a primitive type, it does refer to an instance of Money,
which is a serializable class.
If, however, some of the fields don't have primitive types, and don't refer to
serializable classes, more work may be necessary. Consider, for example, the
implementation of ArrayList from the java.util package. An ArrayList really has only
two pieces of state: the number of elements it contains (size) and the array used to
hold its elements (elementData).
But hidden in here is a huge problem: ArrayList is a generic container class whose
state is stored as an array of objects. While arrays are first-class objects in Java,
they aren't serializable objects. This means that ArrayList can't just implement the
Serializable interface. It has to provide extra information to help the serialization
mechanism handle its nonserializable fields. There are three basic solutions to this
problem: declare the problematic fields transient, implement writeObject( ) and
readObject( ), or declare serialPersistentFields.
The first option is to declare a variable transient. This tells the default
serialization mechanism to ignore the variable. In other words, the serialization
mechanism simply skips over the transient variables. In the case of
ArrayList, the default serialization mechanism would attempt to write out size, but
ignore elementData entirely.
When the serialization mechanism starts to write out an object, it will check to see
whether the class implements writeObject(). If so, the serialization mechanism will
not use the default mechanism and will not write out any of the instance variables.
Instead, it will call writeObject() and depend on the method to store out all the
important state. Here is ArrayList's implementation of writeObject():
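The listing did not survive reproduction, and rather than quote the java.util source, here is a simplified container in the same spirit (a sketch of my own, not the real ArrayList code): write the default fields, then the array length, then each element in turn.

```java
import java.io.*;

public class ListDemo {
    static class SimpleList implements Serializable {
        private transient Object[] elementData;
        private int size;

        SimpleList(int capacity) { elementData = new Object[capacity]; }
        void add(Object o) { elementData[size++] = o; }
        Object get(int i) { return elementData[i]; }
        int size() { return size; }

        private void writeObject(ObjectOutputStream s) throws IOException {
            s.defaultWriteObject();            // nontransient state: size
            s.writeInt(elementData.length);    // then elementData.length
            for (int i = 0; i < elementData.length; i++)
                s.writeObject(elementData[i]); // then each element in turn
        }

        private void readObject(ObjectInputStream s)
                throws IOException, ClassNotFoundException {
            s.defaultReadObject();             // restores size
            elementData = new Object[s.readInt()];
            for (int i = 0; i < elementData.length; i++)
                elementData[i] = s.readObject();
        }
    }

    public static SimpleList copy(SimpleList list) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(list);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (SimpleList) in.readObject();
        }
    }
}
```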
The first thing this does is call defaultWriteObject(). defaultWriteObject() invokes the
default serialization mechanism, which serializes all the nontransient, nonstatic
instance variables. Next, the method writes out elementData.length and then calls the
stream's writeObject( ) for each element of elementData.
If you adopt a unit-testing methodology, then any serializable class should pass the
following three tests:
Similar constraints hold for classes that implement the Externalizable interface.
Declaring serialPersistentFields
The final option that can be used is to explicitly declare which fields should be stored
by the serialization mechanism. This is done using a special static final variable called
serialPersistentFields, as shown in the following code snippet:
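The snippet itself is elided; a sketch of such a declaration, continuing the ArrayList-style example (the class and field names are mine), would be:

```java
import java.io.*;

public class PersistentFieldsDemo {
    static class PartialList implements Serializable {
        private int size = 3;
        private Object[] elementData = new Object[10];

        // Only the field named "size" is written by the serialization
        // mechanism; elementData is skipped entirely.
        private static final ObjectStreamField[] serialPersistentFields =
            { new ObjectStreamField("size", int.class) };

        int getSize() { return size; }
        Object[] getElementData() { return elementData; }
    }

    public static PartialList copy(PartialList original) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (PartialList) in.readObject();
        }
    }
}
```

Note that after deserialization elementData is null: its field initializer is not re-run, and the stream carries no value for it.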
This line of code declares that the field named size, which is of type int, is a serial
persistent field and will be written to the output stream by the serialization
TIP: What if you try to do both? That is, suppose you declare some
variables to be transient, and then also provide a definition for
serialPersistentFields? The answer is that the transient keyword is
ignored; the definition of serialPersistentFields is definitive.
So far, we've talked only about instance-level state. What about class-level state?
Suppose you have important information stored in a static variable? Static variables
won't get saved by serialization unless you add special code to do so. In our context
(shipping objects over the wire between clients and servers), static variables are
usually a bad idea anyway.
If the superclass doesn't implement Serializable, you will need to store its state.
There are two different ways to approach this. You can use serialPersistentFields to
tell the serialization mechanism about some of the superclass instance variables, or
you can use writeObject( )/readObject( ) to handle the superclass state explicitly.
Both of these, unfortunately, require you to know a fair amount about the
superclass. If you're getting the .class files from another source, you should be
aware that versioning issues can cause some really nasty problems. If you subclass a
class, and that class's internal representation of instance-level state changes, you
may not be able to load in your serialized data. While you can sometimes work
around this by using a sufficiently convoluted readObject( ) method, this may not be
a solvable problem. We'll return to this later. However, be aware that the ultimate
solution may be to just implement the Externalizable interface instead, which we'll
talk about later.
However, since serialization will supply the instance variables with correct values
from an active instance immediately after instantiating the object, the only way this
problem could arise is if the constructors actually do something with their
arguments--besides setting variable values.
If all the constructors take arguments and actually execute initialization code as part
of the constructor, then you may need to refactor a bit. The usual solution is to
move the local initialization code into a new method (usually named something like
initialize() ), which is then called from the original constructor:
public MyObject(arglist) {
// set local variables from arglist
// perform local initialization
}
to something that looks like:
private MyObject( ) {
// zero-argument constructor, invoked by serialization
// and never by any other piece of code.
// Note that it doesn't call initialize( ).
}
public MyObject(arglist) {
// set local variables from arglist
initialize( );
}
private void initialize( ) {
// perform local initialization
}
The same problem occurs with hashCode(). Note that Object implements hashCode()
by returning the memory address of the instance. Hence, no two instances ever have
the same hashCode( ) using Object's implementation. If two objects are equal,
however, then they should have the same hashcode. So if you need to override
equals( ), you probably need to override hashCode( ) as well.
We will make this into a serializable class by following the steps outlined in the
previous section.
Of these, four are primitive types that serialization can handle without any problem.
However, _actualDocument is a problem. InputStream is not a serializable class. And
the contents of _actualDocument are very important; _actualDocument contains the
document we want to print. There is no point in serializing an instance of
DocumentDescription unless we somehow serialize _actualDocument as well.
If we have fields that serialization cannot handle, and they must be serialized, then
our only option is to implement readObject( ) and writeObject( ). For
DocumentDescription, we declare _actualDocument to be transient and then implement
readObject( ) and writeObject( ) as follows:
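The listing itself is elided; a sketch of how such a pair of methods might look (the field names _actualDocument and _length come from the surrounding text, everything else is an assumption):

```java
import java.io.*;

public class DocumentDescriptionDemo {
    static class DocumentDescription implements Serializable {
        private transient InputStream _actualDocument;
        private int _length;

        DocumentDescription(byte[] document) {
            _actualDocument = new ByteArrayInputStream(document);
            _length = document.length;
        }

        InputStream getDocument() { return _actualDocument; }

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();          // writes _length first
            byte[] buffer = new byte[_length]; // drain the stream into a buffer
            int read = 0;
            while (read < _length) {
                int n = _actualDocument.read(buffer, read, _length - read);
                if (n < 0) throw new EOFException("document truncated");
                read += n;
            }
            out.write(buffer);                 // then the raw document bytes
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();            // restores _length first
            byte[] buffer = new byte[_length];
            in.readFully(buffer);              // then rebuild the stream
            _actualDocument = new ByteArrayInputStream(buffer);
        }
    }

    public static byte[] roundTrip(byte[] document) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new DocumentDescription(document));
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            DocumentDescription copy = (DocumentDescription) in.readObject();
            byte[] result = new byte[document.length];
            new DataInputStream(copy.getDocument()).readFully(result);
            return result;
        }
    }
}
```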
This code is a little ugly. We're using serialization, but we're still forced to think about
how to encode some of our state when we're sending it out of the stream. In fact,
the code for writeObject() and readObject() is remarkably similar to the marshalling
code we implemented directly for the socket-based version of the printer server. This
is, unfortunately, often the case. Serialization's default implementation handles
simple objects very well. But, every now and then, you will want to send a
nonserializable object over the wire, or improve the serialization algorithm for
efficiency. Doing so amounts to writing the same code you write if you implement all
the socket handling yourself, as in our socket-based version of the printer server.
TIP: There is also an order dependency here. The first value written
must be the first value read. Since we start writing by calling
defaultWriteObject( ), we have to start reading by calling defaultReadObject( ).
On the bright side, this means we'll have an accurate
value for _length before we try to read _actualDocument from the
stream.
the algorithm and protocol, so you can understand how the various hooks for
customizing serialization work and how they fit into the context of an RMI
application.
Figure 10-4.
Inheritance diagram
After writing out the associated class information, the serialization mechanism stores
out the following information for each instance:
And so on until:
• Data associated with the instance, interpreted as an instance of the most-
derived class.
So what really happens is that the type of the instance is stored out, and then all the
serializable state is stored in discrete chunks that correspond to the class structure.
But there's a question still remaining: what do we mean by "a description of the
most-derived class?" This is either a reference to a class description that has already
been recorded (e.g., an earlier location in the stream) or the following information:
• The version ID of the class, which is an integer used to validate the .class files
• A boolean stating whether writeObject( )/readObject( ) are implemented
This should, of course, immediately seem familiar. The class descriptions consist
entirely of metadata that allows the instance to be read back in. In fact, this is one of
the most beautiful aspects of serialization; the serialization mechanism automatically,
at runtime, converts class objects into metadata so instances can be serialized with
the least amount of programmer work.
Writing
Because the class descriptions actually contain the metadata, the basic idea behind
the serialization algorithm is pretty easy to describe. The only tricky part is handling
circular references.
The problem is this: suppose instance A refers to instance B. And instance B refers
back to instance A. Completely writing out A requires you to write out B. But writing
out B requires you to write out A. Because you don't want to get into an infinite loop,
or even write out an instance or a class description more than once, you need to keep
track of what's already been written to the stream. (Serialization is a slow process
that uses the reflection API quite heavily, in addition to consuming bandwidth.)
If, however, writeObject( ) is passed an instance that has not yet been written to the
stream, two things happen. First, the instance is assigned a reference handle, and
the mapping from instance to reference handle is stored by ObjectOutputStream.
The handle that is assigned is the next integer in a sequence.
Second, the instance data is written out as per the data format described earlier. This
can involve some complications if the instance has a field whose value is also a
serializable instance. In this case, the serialization of the first instance is suspended,
and the second instance is serialized in its place (or, if the second instance has
already been serialized, the reference handle for the second instance is written out).
After the second instance is fully serialized, serialization of the first instance
resumes. The contents of the stream look a little bit like Figure 10-5.
Reading
From the description of writing, it's pretty easy to guess most of what happens when
readObject() is called. Unfortunately, because of versioning issues, the
implementation of readObject( ) is actually a little bit more complex than you might
guess.
The problem is that the class descriptions that the instance of ObjectInputStream
reads from the stream may not be equivalent to the class descriptions of the same
classes in the local JVM. For example, if an instance is serialized to a file and then
read back in three years later, there's a pretty good chance that the class definitions
used to serialize the instance have changed.
This means that ObjectInputStream uses the class descriptions in two ways:
• It uses them to actually pull data from the stream, since the class
descriptions completely describe the contents of the stream.
• It compares the class descriptions to the classes it has locally and tries to
determine if the classes have changed, in which case it throws an exception.
If the class descriptions match the local classes, it creates the instance and
sets the instance's state appropriately.
deserializing instances, but they follow from, and can easily be deduced from, the
description of the serialization changes.
The three most important methods from the point of view of RMI are:
annotateClass( )
ObjectOutputStream calls annotateClass() when it writes out class descriptions.
Annotations are used to provide extra information about a class that comes from the
serialization mechanism and not from the class itself. The basic serialization
mechanism has no real need for annotations; most of the information about a given
class is already stored in the stream.
RMI, on the other hand, uses annotations to record codebase information. That is,
RMI, in addition to recording the class descriptions, also records information about
the location from which it loaded the class's bytecode. Codebases are often simply
locations in a file system. Incidentally, locations in a file system are often useless
information, since the JVM that deserializes the instances may have a very different
file system than the one from where the instances were serialized. However,
codebase isn't restricted to being a location in a file system. The only restriction on
codebases is that they have to be valid URLs. That is, codebase is a URL that
specifies a location on the network from which the bytecode for a class can be
obtained. This enables RMI to dynamically load new classes based on the serialized
information in the stream.
replaceObject( )
The idea of replacement is simple; sometimes the instance that is passed to the
serialization mechanism isn't the instance that ought to be written out to the data
stream. To make this more concrete, recall what happened when we called rebind( )
to register a server with the RMI registry. The following code was used in the bank
example:
This creates an instance of Account_Impl and then calls rebind( ) with that instance.
Account_Impl is a server that implements the Remote interface, but not the
Serializable interface. And yet, somehow, the registry, which is running in a different
JVM, is sent something.
What the registry actually gets is a stub. The stub for Account_Impl, which was
automatically generated by rmic, begins with:
Calling Naming.rebind( ) actually winds up passing a stub to the RMI registry. When
clients make calls to Naming.lookup( ), as in the following code snippet, they also
receive copies of the stub. Since the stub is serializable, there's no problem in
making a copy of it:
This is very good from a bandwidth and network latency point of view. But it can also
be somewhat problematic. Suppose, for example, B implements load balancing.
Since B isn't involved in the A to C communication, it has no direct way of knowing
whether A is still using C, or how heavily. We'll revisit this in later chapters, when
we discuss the distributed garbage collector and the Unreferenced interface.
Versioning Classes
A few pages back, I described the serialization mechanism:
This is great as long as the classes don't change. When classes change, the
metadata, which was created from obsolete class objects, accurately describes the
serialized information. But it might not correspond to the current class
implementations.
The second type of version problem arises from local changes to a serializable class.
Suppose, for example, that in our bank example, we want to add the possibility of
handling different currencies. To do so, we define a new class, Currency, and change
the definition of Money:
The important distinction between the two types of versioning problems is that the
first type can't really be repaired. If you have old data lying around that was
serialized using an older class hierarchy, and you need to use that data, your best
option is probably something along the lines of the following:
1. Using the old class definitions, write an application that deserializes the data
into instances and writes the instance data out in a neutral format, say as
tab-delimited columns of text.
2. Using the new class definitions, write a program that reads in the neutral-
format data, creates instances of the new classes, and serializes these new
instances.
The second type of versioning problem, on the other hand, can be handled locally,
within the class definition.
This single long, called the class's stream unique identifier (often abbreviated suid),
is used to detect when a class changes. It is an extraordinarily sensitive index. For
example, suppose we add the following method to Money:
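The method itself is elided from the text; it would be something like the following sketch (the 5000-cent threshold is an assumption; the point is only that the method changes no fields and has no side effects):

```java
import java.io.Serializable;

public class SuidDemo {
    static class Money implements Serializable {
        private int _cents;
        Money(int cents) { _cents = cents; }

        // The new convenience method: no fields changed, added, or
        // removed -- yet adding it still changes the default suid.
        public boolean isBigBucks() { return _cents > 5000; }
    }

    public static boolean isBigBucks(int cents) {
        return new Money(cents).isBigBucks();
    }
}
```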
We haven't changed, added, or removed any fields; we've simply added a method
with no side effects at all. But adding this method changes the suid. Prior to adding
it, the suid was 6625436957363978372L; afterwards, it was
-3144267589449789474L. Moreover, if we had made isBigBucks( ) a protected
method, the suid would have been 4747443272709729176L.
The default behavior for the serialization mechanism is a classic "better safe than
sorry" strategy. The serialization mechanism uses the suid, which defaults to an
extremely sensitive index, to tell when a class has changed. If so, the serialization
mechanism refuses to create instances of the new class using data that was
serialized with the old classes.
If, on the other hand, we explicitly declared
private static final long serialVersionUID = 1;
in our source code, then the suid would be 1, no matter how many changes we made
to the rest of the class. Explicitly declaring serialVersionUID allows us to change the
class, and add convenience methods such as isBigBucks( ), without losing backwards
compatibility.
The serialization mechanism won't detect that these are completely incompatible
classes. Instead, when it tries to create the new instance, it will throw away all the
data it reads in. Recall that, as part of the metadata, the serialization algorithm
records the name and type of each field. Since it can't find the fields during
deserialization, it simply discards the information.
In addition, your readObject( ) code should start with a switch statement based on
the version number:
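The snippet is elided; one way such explicitly versioned code might look (the class, fields, and version constants are all assumptions of mine) is:

```java
import java.io.*;

public class VersionDemo {
    // The writer records a version number first, and readObject( )
    // switches on it to handle old and new stream layouts.
    static class Record implements Serializable {
        private static final int CURRENT_VERSION = 2;
        private transient String name;
        private transient int count;   // field added in version 2

        Record(String name, int count) { this.name = name; this.count = count; }
        String getName() { return name; }
        int getCount() { return count; }

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.writeInt(CURRENT_VERSION);
            out.writeObject(name);
            out.writeInt(count);
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            int version = in.readInt();
            switch (version) {
                case 1:                        // old streams: name only
                    name = (String) in.readObject();
                    count = 0;                 // default for the new field
                    break;
                case 2:
                    name = (String) in.readObject();
                    count = in.readInt();
                    break;
                default:
                    throw new InvalidObjectException("unknown version " + version);
            }
        }
    }

    public static Record copy(Record r) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(r);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Record) in.readObject();
        }
    }
}
```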
Doing this will enable you to explicitly control the versioning of your class. In addition
to the added control you gain over the serialization process, there is an important
consequence you ought to consider before doing this. As soon as you start to
explicitly version your classes, defaultWriteObject( ) and defaultReadObject( ) lose a
lot of their usefulness.
Trying to control versioning puts you in the position of explicitly writing all the
marshalling and demarshalling code. This is a trade-off you might not want to make.
Performance Issues
Serialization is a generic marshalling and demarshalling algorithm, with many hooks
for customization. As an experienced programmer, you should be skeptical--generic
algorithms with many hooks for customization tend to be slow. Serialization is no
exception to this rule. It is, at times, both slow and bandwidth-intensive. There
are three main performance problems with serialization: it depends on reflection, it
has an incredibly verbose data format, and it is very easy to send more data than is
required.
This isn't a lot of information, but it's information that RMI computes and sends with
every method invocation. (Recall that RMI resets the serialization mechanism with
every method call.) Even if the first two bullets comprise only 100 extra bytes of
information, the cumulative impact is probably significant.
The second problem is that each serialized instance is treated as an individual unit. If
we are sending large numbers of instances within a single method invocation, then
there is a fairly good chance that we could compress the data by noticing
commonalities across the instances being sent.
What happens as a result of this? On the bright side, the application still works. After
everything is recompiled, the entire application, including the remote method
invocations, will still work. That's the nice aspect of serialization--we added new
fields, and the data format used to send arguments over the wire automatically
adapted to handle our changes. We didn't have to do any work at all.
On the other hand, adding a new field redefined the data format associated with
Employee. Because serialVersionUID wasn't defined in the first version of the class,
none of the old data can be read back in anymore. And there's an even more serious
problem as well.
Suppose Bob works in the mailroom. And we serialize the object associated with Bob.
In the old version of our application, the data for serialization consisted of:
The new version of the application isn't backwards-compatible because our old data
can't be read by the new version of the application. In addition, it's slower and is
much more likely to cause network congestion.
These have roughly the same role that readObject() and writeObject( ) have for
serialization. There are, however, some very important differences. The first, and
most obvious, is that readExternal( ) and writeExternal( ) are part of the
Externalizable interface. An object cannot be declared to be Externalizable without
implementing these methods.
However, the major difference lies in how these methods are used. The serialization
mechanism always writes out class descriptions of all the serializable superclasses.
And it always writes out the information associated with the instance when viewed as
an instance of each individual superclass.
Externalization gets rid of some of this. It writes out the identity of the class (which
boils down to the name of the class and the appropriate serialVersionUID). It also
stores the superclass structure and all the information about the class hierarchy. But
instead of visiting each superclass and using that superclass to store some of the
state information, it simply calls writeExternal( ) on the local class definition. In a
nutshell: it stores all the metadata, but writes out only the local instance
information.
On the other hand, Externalizable isn't particularly easy to do, isn't very flexible, and
requires you to rewrite your marshalling and demarshalling code whenever you
change your class definitions. However, because it eliminates almost all the reflective
calls used by the serialization mechanism and gives you complete control over the
marshalling and demarshalling algorithms, it can result in dramatic performance
improvements.
To demonstrate this, I have defined the EfficientMoney class. It has the same fields
and functionality as Money but implements Externalizable instead of Serializable:
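The listing is elided; a sketch of the pattern (the single _cents field is an assumption carried over from Money, and the book's version extends a ValueObject base class, omitted here to keep the sketch self-contained):

```java
import java.io.*;

public class ExternalizableDemo {
    // Externalizable classes marshall and demarshall themselves: the
    // stream carries only the class identity plus whatever writeExternal
    // chooses to write.
    public static class EfficientMoney implements Externalizable {
        private int _cents;

        public EfficientMoney() { }            // required public no-arg
                                               // constructor
        public EfficientMoney(int cents) { _cents = cents; }
        public int getCents() { return _cents; }

        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeInt(_cents);              // we write every field ourselves
        }
        public void readExternal(ObjectInput in) throws IOException {
            _cents = in.readInt();             // and read them back in order
        }
    }

    public static EfficientMoney copy(EfficientMoney m) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(m);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (EfficientMoney) in.readObject();
        }
    }
}
```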
We now want to compare Money with EfficientMoney. We'll do so using the following
application:
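The application itself did not survive reproduction; a minimal sketch of such a timing harness (the class and method names are my own) might be:

```java
import java.io.*;

public class TimingDemo {
    // Serialize many copies of an object to a file and report elapsed
    // milliseconds and file size. The instance count and the class
    // under test are placeholders supplied by the caller.
    public static long[] measure(Serializable prototype, int count)
            throws IOException {
        File file = File.createTempFile("timing", ".ser");
        file.deleteOnExit();
        long start = System.currentTimeMillis();
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(file))) {
            for (int i = 0; i < count; i++) {
                out.writeObject(prototype);
                out.reset();   // mimic RMI, which resets per method call
            }
        }
        long elapsed = System.currentTimeMillis() - start;
        return new long[] { elapsed, file.length() };
    }
}
```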
On my home machine, averaging over 10 trial runs for both Money and
EfficientMoney, I get the results shown in Table 10-1. (We need to average because
the elapsed time varies, depending on what else the computer is doing; the size of
the file is, of course, constant.)
These results are fairly impressive. By simply converting a leaf class in our hierarchy
to use externalization, I save 67 bytes and 10 milliseconds when serializing a single
instance. In addition, as I pass larger data sets over the wire, I save more and more
bandwidth--on average, 18 bytes per instance.
If I need more efficiency, I can go further and remove ValueObject from the
hierarchy entirely. The ReallyEfficientMoney class directly extends Object and
implements Externalizable:
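This listing was also omitted; a sketch of what it might look like follows. The field name is an assumption, and the point to note is that equals() and hashCode() must be re-implemented locally once ValueObject is gone.

```java
import java.io.*;

// Sketch of ReallyEfficientMoney: it extends Object directly, so equals()
// and hashCode() must be re-implemented here rather than inherited from
// ValueObject. The field name is an assumption.
public class ReallyEfficientMoney implements Externalizable {
    private int _cents;

    public ReallyEfficientMoney() {}
    public ReallyEfficientMoney(int cents) { _cents = cents; }

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(_cents);
    }

    public void readExternal(ObjectInput in) throws IOException {
        _cents = in.readInt();
    }

    // Re-implemented because ValueObject is no longer in the hierarchy.
    public boolean equals(Object other) {
        return (other instanceof ReallyEfficientMoney)
            && ((ReallyEfficientMoney) other)._cents == _cents;
    }

    public int hashCode() { return _cents; }
}
```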
Compared to Money, this is quite impressive; I've shaved almost 200 bytes of
bandwidth and saved 40 milliseconds for the typical remote method call. The
downside is that I've had to abandon my object hierarchy completely to do so; a
significant percentage of the savings resulted from not including ValueObject in the
inheritance chain. Removing superclasses makes code harder to maintain and forces
programmers to implement the same method many times (ReallyEfficientMoney can't
use ValueObject's implementation of equals( ) and hashCode( ) anymore). But it
does lead to significant performance improvements.
JAVA MAIL
The design of the Java Mail API is a good example of Sun's continuing efforts to
provide common API frameworks for the Java development community. Emphasizing
these common frameworks, as opposed to vendor-specific solutions, bodes well for
the creation of an increasingly open development environment.
On the e-mail messaging front, higher level (consumer) developers can shop around
for the implementation of the common API framework that best fits their needs -- or
even support multiple implementations simultaneously. Lower level implementation
providers can develop solutions that ensure efficient access to their mail server
products. As an example of what this means, a small startup company can
concentrate on developing that killer mail client and be assured of easily supporting
it for any mail system environment. And the bluechip IT giant can focus on providing
widespread access to its newly developed industrial-strength mail services, assured
of a wealth of application support. The big winners are the IS customers, who
can mix and match the best vendor products or solutions to develop their systems
yet still swap components as requirements dictate (whether these be performance,
financial, or political).
One key to developing highly reusable and open API frameworks is to emphasize
abstract interfaces in a way that supports existing standards but does not limit future
enhancements or alternative implementations. The Java Mail API does just that!
Furthermore, Sun is also rapidly developing -- or providing through third parties --
default implementations and utilities for the most commonly available protocols and
standards. For example, default implementations of the POP3, SMTP, and IMAP
protocols are currently available, so you can start developing that award-
winning killer app now without having to reinvent the protocol wheel unless you want
to (or really need to).
At first glance, the number of Java Mail API classes and the detailed layout of these
classes may cause you to believe you're in for a heavy learning curve. But in reality,
once you get working, you'll find that this API is a simple and handy tool for
implementing robust mail/messaging functionality in your applications.
Analysis of the primary Java Mail API package classes provides insight into the
common mechanics of e-mail messaging systems. A high-level overview of the
classes in the relative order in which they are normally encountered in a typical
application reveals the simplicity of the Java Mail API.
Although the Java Mail API contains many more classes than those discussed here,
concentrating on some of the core classes to start with makes it easy to understand
the essence of the API. The following is a detailed description of these core classes,
which include javax.mail.Session, javax.mail.Store, javax.mail.Transport,
javax.mail.Folder, and javax.mail.Message.
javax.mail.Session
The javax.mail.Session class is the top-level entry class for the Java Mail API, and its
most commonly used methods provide the ability to control and load the classes that
represent the service provider implementations (SPI) for various mail protocols. For
example, instances of the javax.mail.Store and javax.mail.Transport classes --
described below -- are obtained via the Session class. (Note: A service provider is a
developer and/or vendor that provides an implementation for an API; examples of
Java Mail API implementations include POP3, SMTP, and IMAP4 -- some are available
from Sun, others via third parties.)
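As a sketch of this entry point (the host names are placeholders, and the snippet assumes the JavaMail and provider JARs are on the classpath):

```java
import java.util.Properties;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.Transport;

public class MailSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");  // placeholder host
        props.put("mail.pop3.host", "pop3.example.com");  // placeholder host

        // The Session is the top-level entry point to the API...
        Session session = Session.getDefaultInstance(props, null);

        // ...and it loads the provider implementations for each protocol.
        Store store = session.getStore("pop3");
        Transport transport = session.getTransport("smtp");
    }
}
```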
javax.mail.Store
The javax.mail.Store class is implemented by a service provider, such as a POP Mail
implementation developer, and allows for read, write, monitor, and search access for
a particular mail protocol. The javax.mail.Folder class is accessed through this class
and is detailed below.
javax.mail.Transport
The javax.mail.Transport class is another provider-implemented class and is used for
sending a message over a specific protocol.
javax.mail.Folder
The javax.mail.Folder class is implemented by a provider; it gives hierarchical
organization to mail messages and provides access to e-mail messages in the form of
javax.mail.Message class objects.
javax.mail.Message
The javax.mail.Message class is implemented by a provider and models all the details
of an actual e-mail message, such as the subject line, sender/recipient e-mail
address, sent date, and so on. The guidelines for providers who implement the
javax.mail.Message dictate that the actual fetching of e-mail message components
should be delayed as long as possible in order to make this class as lightweight as
possible.
The current application easily supports POP3, SMTP, and IMAP servers, and adding
support for Lotus e-mail, say, would be as simple as plugging in an implementation
from IBM and abstracting any hard-coded references to protocols.
Running the ListServer application is very simple. Just remember to include the JAR
files for Java Mail, JAF, and the default POP3 implementation in the CLASSPATH, as
shown in the following MS-DOS batch file example. (You can obtain these JAR files
from the Java Mail home page link provided in the Resources section at the end of
this article.)
@echo off
PATH .;d:\jdk1.1\bin
set CLASSPATH=.;d:\jdk1.1\lib\classes.zip;activation.jar;mail.jar;pop3.jar
java ListServer %1 %2 %3 %4 %5 %6 %7 %8 %9
Upon starting, the ListServer main() routine will read in the settings, including the
appropriate mail servers, mail accounts, and update frequency. Next, an instance of
ListServer is created, and the program enters an infinite loop of
processing new messages and sleeping, until it is time to check for messages again.
The heart of this ListServer program is the process() routine, which directs the
reading and broadcasting of all new messages. The significant Java Mail API-specific
code in process() performs the actions described below.
The code for setting up the message fields such as to, from, subject, and date is very
simple:
// create a message
//
Address replyToList[] = { new InternetAddress(replyTo) };
Message newMessage = new MimeMessage(session);
if (_fromName != null)
    newMessage.setFrom(new InternetAddress(from,
        _fromName + " on behalf of " + replyTo));
else
    newMessage.setFrom(new InternetAddress(from));
newMessage.setReplyTo(replyToList);
newMessage.setRecipients(Message.RecipientType.BCC, _toList);
newMessage.setSubject(subject);
newMessage.setSentDate(sentDate);
Setting the contents of the message requires reading in the desired contents and
then calling the appropriate setContent() or setText() routine as follows:
}   // end of the branch that handles non-text content (not shown)
else
{
    debugMsg("Sending Text message (" + debugText + ")");
    newMessage.setText((String)content);
}
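With the fields and content in place, broadcasting the message comes down to a single call (a sketch; the surrounding exception handling is omitted):

```java
// Send the finished message; the static Transport.send() locates the
// appropriate transport (SMTP here) and delivers to all recipients,
// including every member on the BCC list.
Transport.send(newMessage);
```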
The source code for ListServer is very basic but provides a fully functional list server.
Furthermore, this basic list server can easily be enhanced by adding features such as
automatic subscribe and unsubscribe.
EJB OVERVIEW
The Enterprise JavaBeans™ (EJB) specification defines an architecture for the
development and deployment of transactional, distributed object-based, server-side
software components. Organizations can build their own components or
purchase components from third-party vendors. These server-side components,
called enterprise beans, are distributed objects that are hosted in Enterprise
JavaBean containers and provide remote services for clients distributed throughout
the network.
The container isolates the enterprise bean from direct access by client applications.
When a client application invokes a remote method on an enterprise bean, the
container first intercepts the invocation to ensure persistence, transactions, and
security are applied properly to every operation a client performs on the bean. The
container manages security, transactions, and persistence automatically for the bean,
so the bean developer doesn't have to write this type of logic into the bean code
itself. The enterprise bean developer can focus on encapsulating business rules, while
the container takes care of everything else.
Containers will manage many beans simultaneously in the same fashion that the
Java WebServer manages many Servlets. To reduce memory consumption and
processing, containers pool resources and manage the lifecycles of all the beans very
carefully. When a bean is not being used, a container will place it in a pool to be
reused by another client, or possibly evict it from memory and only bring it back
when it's needed. Because client applications don't have direct access to the beans --
the container lies between the client and bean -- the client application is completely
unaware of the container's resource management activities. A bean that is not in use,
for example, might be evicted from memory on the server, while its remote reference
on the client remains intact. When the client invokes a method on the remote
reference, the container simply re-incarnates the bean to service the request. The
client application is unaware of the entire process.
• Callback Methods
Every bean implements a subtype of the EnterpriseBean interface, which defines
several methods, called callback methods. Each callback method alerts the
bean to a different event in its lifecycle; the container invokes these
methods to notify the bean when it is about to pool the bean, persist its state
to the database, end a transaction, remove the bean from memory, and so on. The
callback methods give the bean a chance to do some housework immediately
before or after some event. Callback methods are discussed in more detail in
later sections.
• EJBContext
Every bean obtains an EJBContext object, which is a reference to the container
itself. The bean can use the EJBContext to request information about its
environment, such as the identity of its caller or the status of the current
transaction.
Portability is central to the value that EJB brings to the table. Portability ensures that
a bean developed for one container can be migrated to another if another brand
offers more performance, features, or savings. Portability also means that the bean
developer's skills can be leveraged across several EJB container brands, providing
organizations and developers with better opportunities.
In addition to portability, the simplicity of the EJB programming model makes EJB
valuable. Because the container takes care of managing complex tasks like security,
transactions, persistence, concurrency, and resource management, the bean
developer is free to focus attention on business rules and a very simple programming
model. A simple programming model means that beans can be developed faster
without requiring a Ph.D. in distributed objects, transactions and other enterprise
systems. EJB brings transaction processing and distributed objects development into
the mainstream.
Enterprise Beans
To create an EJB server-side component, an enterprise bean developer provides two
interfaces that define a bean's business methods, plus the actual bean
implementation class. The client then uses a bean's public interfaces to create,
manipulate, and remove beans from the EJB server. The implementation class, called
the bean class, is instantiated at run time and becomes a distributed object.
Enterprise beans live in an EJB container and are accessed by client applications over
the network through their remote and home interfaces. The remote and home
interfaces expose the capabilities of the bean and provide all the methods needed to
create, update, interact with, and delete the bean. A bean is a server-side
component that represents a business concept like a Customer or a Hotel Clerk.
The Home interface represents the life-cycle methods of the component (create,
destroy, find) while the remote interface represents the business methods of the
bean. The Remote and Home interfaces extend the javax.ejb.EJBObject and
javax.ejb.EJBHome interfaces respectively. These EJB interface types define a standard
set of utility methods and provide common base types for all remote and home
interfaces.
Clients use the bean's home interface to obtain references to the bean's remote
interface. The remote interface defines the business methods like accessor and
mutator for changing a Customer's name, or business methods that perform tasks
like using the HotelClerk bean to reserve a room at a hotel. Below is an example of
how a Customer bean might be accessed from a client application. In this case the
home interface is the CustomerHome type and the remote interface is the Customer
type.
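The example itself was omitted from the courseware; client access might look roughly like this (the JNDI name, create() argument, and Name constructor are assumptions for illustration):

```java
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical client-side access to the Customer bean.
InitialContext jndiContext = new InitialContext();
Object ref = jndiContext.lookup("CustomerHome");      // assumed JNDI name
CustomerHome home = (CustomerHome)
    PortableRemoteObject.narrow(ref, CustomerHome.class);

Customer customer = home.create(new Integer(1001));   // returns a remote reference
customer.setName(new Name("Smith", "John"));          // invoke a business method
```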
The remote interface defines the business methods of a bean; the methods that are
specific to the business concept it represents. Remote interfaces are subclassed from
the javax.ejb.EJBObject interface, which is a subclass of the java.rmi.Remote interface.
The importance of the remote interfaces inheritance hierarchy is discussed later. Now
focus on the business methods and their meaning. Below is the definition of a remote
interface for a Customer bean.
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
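The interface body itself did not survive in this courseware; the following is a plausible reconstruction based on the accessors and mutators discussed in the text (method names are assumptions):

```java
import javax.ejb.EJBObject;
import java.rmi.RemoteException;

public interface Customer extends EJBObject {
    public Name getName() throws RemoteException;
    public void setName(Name name) throws RemoteException;
    public Address getAddress() throws RemoteException;
    public void setAddress(Address address) throws RemoteException;
    public CreditCard getCreditCard() throws RemoteException;
    public void setCreditCard(CreditCard card) throws RemoteException;
}
```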
The remote interface defines accessor and mutator methods to read and update
information about a business concept. This is typical of a type of bean called an
entity bean, which represents a persistent business object; business objects whose
data is stored in a database. Entity beans represent business data in the database
and add behavior specific to that data.
Business Methods
Business methods can also represent tasks that a bean performs. Although entity
beans often have task-oriented methods, tasks are more typical of a type of bean
called a session bean. Session beans do not represent data like entity beans. They
represent business processes or agents that perform a service, like making a
reservation at a hotel. Below is the definition of the remote interface for a HotelClerk
bean, which is a type of session bean.
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
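The listing was omitted from the courseware; a reconstruction consistent with the reserveRoom() and availableRooms() methods discussed in the text follows (the RoomInfo type and exact signatures are assumptions):

```java
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
import java.util.Date;

public interface HotelClerk extends EJBObject {
    public void reserveRoom(Customer cust, RoomInfo ri, Date from, Date to)
        throws RemoteException;
    public RoomInfo[] availableRooms(Date from, Date to)
        throws RemoteException;
}
```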
The business methods defined in the HotelClerk remote interface represent processes
rather than simple accessors. The HotelClerk bean acts as an agent in the sense that
it performs tasks on behalf of the user, but is not itself persistent in the database.
You don't need information about the HotelClerk, you need the hotel clerk to perform
tasks for you. This is typical behavior for a session bean.
There are two basic types of enterprise beans: entity beans, which represent data in
a database, and session beans, which represent processes or act as agents
performing tasks. As you build an EJB application you will create many enterprise
beans, each representing a different business concept. Each business concept will be
manifested as either an entity bean or a session bean. You will choose which type of
bean a business concept becomes based on how it is intended to be used.
Entity Beans
For every remote interface there is an implementation class; a business object that
actually implements the business methods defined in the remote interface. This is
the bean class; the key element of the bean. Below is a partial definition of the
Customer bean class.
import javax.ejb.EntityBean;

public class CustomerBean implements EntityBean {
    Address myAddress;
    Name myName;
    CreditCard myCreditCard;
    ...
}
CustomerBean is the implementation class. It holds the data and provides accessor
methods and other business methods. As an entity bean, the CustomerBean provides
an object view of customer data. Instead of writing database access logic in an
application, the application can simply use the remote interface to the Customer
bean to access customer data. Entity beans implement the javax.ejb.EntityBean type,
which defines a set of notification methods that the bean uses to interact with its
container. These notification methods are examined in detail later in this course.
Session Beans
The HotelClerk bean is a session bean, which is similar in many respects to an entity
bean. Session beans represent a set of processes or tasks, which are performed on
behalf of the client application. Session beans may use other beans to perform a task
or access the database directly. A little bit of code shows a session bean doing both.
The reserveRoom() method shown below uses several other beans to accomplish a
task, while the availableRooms() method uses JDBC to access the database directly.
import javax.ejb.SessionBean;

public class HotelClerkBean implements SessionBean {

    public void reserveRoom(Customer cust, RoomInfo ri, Date from, Date to) {
        CreditCard card = cust.getCreditCard();
        RoomHome roomHome = // ... get home reference
        Room room = roomHome.findByPrimaryKey(ri.getID());
        double amount = room.getPrice(from, to);
        CreditServiceHome creditHome = // ... get home reference
        CreditService creditAgent = creditHome.create();
        creditAgent.verify(card, amount);
        ReservationHome resHome = // ... get home reference
        Reservation reservation = resHome.create(cust, room, from, to);
    }
    // SessionBean callback methods omitted
}
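The availableRooms() method mentioned above is not shown in the courseware; a sketch of what direct JDBC access from a session bean might look like follows (the table, column, and helper names are assumptions):

```java
// Inside HotelClerkBean -- direct database access, bypassing entity beans.
public RoomInfo[] availableRooms(Date from, Date to) throws Exception {
    Connection con = getConnection();  // assumed helper, e.g. from a DataSource
    try {
        PreparedStatement stmt = con.prepareStatement(
            "SELECT ID, PRICE FROM ROOM WHERE ID NOT IN " +
            "(SELECT ROOM_ID FROM RESERVATION " +
            " WHERE START_DATE < ? AND END_DATE > ?)");
        stmt.setDate(1, new java.sql.Date(to.getTime()));
        stmt.setDate(2, new java.sql.Date(from.getTime()));
        ResultSet result = stmt.executeQuery();
        Vector rooms = new Vector();
        while (result.next())
            rooms.addElement(new RoomInfo(result.getInt("ID"),
                                          result.getDouble("PRICE")));
        RoomInfo[] list = new RoomInfo[rooms.size()];
        rooms.copyInto(list);
        return list;
    } finally {
        con.close();
    }
}
```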
You may have noticed that the bean classes defined above do not implement the
remote or home interfaces. EJB doesn't require that the bean class implement these
interfaces; in fact it's discouraged because the base types of the remote and home
interfaces (EJBObject and EJBHome) define a lot of other methods that are
implemented by the container automatically. The bean class does, however, provide
implementations for all the business methods defined in the remote interface, as
well as the callback methods required by the container. Callback methods are
discussed in more detail below.
import javax.ejb.EJBHome;
import javax.ejb.CreateException;
import javax.ejb.FinderException;
import java.rmi.RemoteException;
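The interface body was omitted from the courseware; a reconstruction matching the create and find methods described below:

```java
import javax.ejb.EJBHome;
import javax.ejb.CreateException;
import javax.ejb.FinderException;
import java.rmi.RemoteException;
import java.util.Enumeration;

public interface CustomerHome extends EJBHome {
    public Customer create(Integer customerID)
        throws RemoteException, CreateException;
    public Customer findByPrimaryKey(Integer customerID)
        throws RemoteException, FinderException;
    public Enumeration findByZipCode(int zipCode)
        throws RemoteException, FinderException;
}
```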
The create() method is used to create a new entity. This will result in a new record in
the database. A home may have many create() methods. The number and datatype
of the arguments of each create() are left up to the bean developer, but the return
type must be the remote interface datatype. In this case, invoking create() on the
CustomerHome interface will return an instance of Customer. The findByPrimaryKey()
and findByZipCode() methods are used to locate specific instances of the customer
bean. Again, you may define as many find methods as you need.
The javax.ejb.EJBHome interface also defines other methods that the CustomerBean
automatically inherits, including a set of remove() methods that allow the application
to destroy bean instances.
To make an object instance in one address space available in another requires a little
trick involving network sockets. To make the trick work, wrap the instance in a
special object called a skeleton that has a network connection to another special
object called a stub. The stub implements the remote interface so it looks like a
business object. But the stub doesn't contain business logic; it holds a network
socket connection to the skeleton. Every time a business method is invoked on the
stub's remote interface, the stub sends a network message to the skeleton telling it
which method was invoked. When the skeleton receives a network message from the
stub, it identifies the method invoked and the arguments, and then invokes the
corresponding method on the actual instance. The instance executes the business
method and returns the result to the skeleton, which sends it to the stub. The
diagram below illustrates:
The stub returns the result to the application that invoked its remote interface
method. From the perspective of the application using the stub, it looks like the stub
does the work locally. Actually, the stub is just a dumb network object that sends the
requests across the network to the skeleton, which in turn invokes the method on
the actual instance. The instance does all the work, the stub and skeleton just pass
the method identity and arguments back and forth across the network.
In EJB, the skeleton for the remote and home interfaces is implemented by the
container, not the bean class. This is to ensure that every method invoked on these
reference types by a client application is first handled by the container and then
delegated to the bean instance. The container must intercept these requests
intended for the bean so that it can apply persistence (entity beans), transactions,
and access control automatically.
Distributed object protocols define the format of network messages sent between
address spaces. Distributed object protocols get pretty complicated, but luckily you
don't see any of it because it's handled automatically. Most EJB servers support
either the Java Remote Method Protocol (JRMP) or CORBA's Internet Inter-ORB
Protocol (IIOP). The bean and application programmer only see the bean class and
its remote interface, the details of the network communication are hidden.
With respect to the EJB API, the programmer doesn't care whether the EJB server
uses JRMP or IIOP--the API is the same. The EJB specification requires that you use
a specialized version of the Java RMI API when working with a bean remotely. Java RMI
is an API for accessing distributed objects and is somewhat protocol agnostic -- in
the same way that JDBC is database agnostic. So, an EJB server can support JRMP or
IIOP, but the bean and application developer always uses the same Java RMI API. In
order for the EJB server to have the option of supporting IIOP, a specialized version
of Java RMI, called Java RMI-IIOP, was developed. Java RMI-IIOP combines the Java
RMI API with IIOP as the underlying protocol. EJB servers don't have to use IIOP, but they do have
to respect Java RMI-IIOP restrictions, so EJB 1.1 uses the specialized Java RMI-IIOP
conventions and types, but the underlying protocol can be anything.
Entity beans provide an object interface to data that would normally be accessed by
JDBC or some other back-end API. More than that, entity beans provide a component
model that allows bean
developers to focus their attention on the business logic of the bean, while the
container takes care of managing persistence, transactions, and access control.
There are two basic kinds of entity bean: Container-Managed Persistence (CMP), and
Bean-Managed Persistence (BMP). With CMP, the container manages the persistence
of the entity bean. Vendor tools are used to map the entity fields to the database and
absolutely no database access code is written in the bean class. With BMP, the entity
bean contains database access code (usually JDBC) and is responsible for reading
and writing its own state to the database. BMP entities have a lot of help with this
since the container will alert the bean as to when it's necessary to make an update
or read its state from the database. The container can also handle any locking or
transactions, so that the database maintains integrity.
Container-Managed Persistence
Container-managed persistence beans are the simplest for the bean developer to
create and the most difficult for the EJB server to support. This is because all the logic
for synchronizing the bean's state with the database is handled automatically by the
container. This means that the bean developer doesn't need to write any data access
logic, while the EJB server is supposed to take care of all the persistence needs
automatically -- a tall order for any vendor. Most EJB vendors support automatic
persistence to a relational database, but the level of support varies. Some provide
very sophisticated Object-to-Relational mapping, while others are very limited.
In this section, you will expand the CustomerBean developed earlier to a complete
definition of a Container-managed persistence bean. In the next section on Bean-
managed persistence you will modify the CustomerBean to manage its own
persistence.
Bean Class
An enterprise bean is a complete component, which is made up of at least two
interfaces and a bean implementation class. All these types will be presented and
their meaning and application explained, starting with the bean class, which is
defined below:
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;

public class CustomerBean implements EntityBean {

    int customerID;
    Address myAddress;
    Name myName;
    CreditCard myCreditCard;

    // CREATION METHODS
    public Integer ejbCreate(Integer id) {
        customerID = id.intValue();
        return null;
    }

    // BUSINESS METHODS
    public Name getName() {
        return myName;
    }

    // CALLBACK METHODS
    public void setEntityContext(EntityContext cntx) {
    }
}
This is a good example of a fairly simple CMP entity bean. Notice that there is no
database access logic in the bean. This is because the EJB vendor provides tools for
mapping the fields in the CustomerBean to the database. The CustomerBean class, for
example, could be mapped to any database providing it contains data that is similar
to the fields in the bean. In this case the bean's instance fields are comprised of a
primitive int and simple dependent objects (Name, Address,and CreditCard) with their
own attributes. Below are the definitions for these dependent objects.
public class Name implements java.io.Serializable {
    public String lastName, firstName;       // fields assumed; original listing truncated
    public Name() {}
}

public class Address implements java.io.Serializable {
    public String street, city, state, zip;  // fields assumed
    public Address() {}
}

public class CreditCard implements java.io.Serializable {
    public String name;                      // inferred from the constructor below
    public java.util.Date expDate;
    public CreditCard(String name, java.util.Date expDate) {
        this.name = name;
        this.expDate = expDate;
    }
    public CreditCard() {}
}
These fields are called container-managed fields because the container is responsible
for synchronizing their state with the database; the container manages the fields.
Container-managed fields can be any primitive data type or serializable type. This
case uses both a primitive int (customerID) and serializable objects (Address, Name,
CreditCard). In order to map the dependent objects to the database a fairly
sophisticated mapping tool would be needed. Not all fields in a bean are
automatically container-managed fields; some may be just plain instance fields for
the bean's transient use. A bean developer distinguishes container-managed fields
from plain instance fields by indicating which fields are container-managed in the
deployment descriptor.
With container-managed persistence, the vendor must have some kind of proprietary
tool that can map the bean's container-managed fields to their corresponding
columns in a specific table, CUSTOMER in this case.
Once the bean's fields are mapped to the database, and the Customer bean is
deployed, the container will manage creating records, loading records, updating
records, and deleting records in the CUSTOMER table in response to methods
invoked on the Customer bean's remote and home interfaces.
A subset (one or more) of the container-managed fields will also be identified as the
bean's primary key. The primary key is the index or pointer to a unique record(s) in
the database that makes up the state of the bean. In the case of the CustomerBean,
the customerID field is the primary key field and will be used to locate the bean's data in the
database. Primitive single field primary keys are represented as their corresponding
object wrappers. The primary key of the Customer bean for example is a primitive int
in the bean class but to a bean's clients it will manifest itself as the java.lang.Integer
type. Primary keys that are made up of several fields, called compound primary keys,
will be represented by a special class defined by the bean developer. Primary keys
are similar in concept to primary keys in a relational database -- actually when a
relational database is used for persistence they are often the same thing.
Home Interface
To create a new instance of a CMP entity bean, and therefore insert data into the
database, the create() method on the bean's home interface must be invoked. The
Customer bean's home interface is defined by the CustomerHome interface; the
definition is shown below.
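The definition did not survive in this courseware; a plausible reconstruction, consistent with the create and find methods described in this section:

```java
import javax.ejb.EJBHome;
import javax.ejb.CreateException;
import javax.ejb.FinderException;
import java.rmi.RemoteException;
import java.util.Enumeration;

public interface CustomerHome extends EJBHome {
    public Customer create(Integer customerID)
        throws RemoteException, CreateException;
    public Customer findByPrimaryKey(Integer customerID)
        throws RemoteException, FinderException;
    public Enumeration findByZipCode(int zipCode)
        throws RemoteException, FinderException;
}
```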
A bean's home interface may declare zero or more create() methods, each of which
must have corresponding ejbCreate() and ejbPostCreate() methods in the bean class.
These creation methods are linked at run time, so that when a create() method is
invoked on the home interface, the container delegates the invocation to the
corresponding ejbCreate() and ejbPostCreate() methods on the bean class.
When the create() method on a home interface is invoked, the container delegates
the create() method call to the bean instance's matching ejbCreate() method. The
ejbCreate() methods are used to initialize the instance state before a record is
inserted into the database. In this case, they initialize the customerID and Name
fields. When the ejbCreate() method finishes (ejbCreate() returns null in CMP), the
container reads the container-managed fields and inserts a new record into the
CUSTOMER table indexed by the primary key, in this case customerID as it maps to
the CUSTOMER.ID column.
In EJB, an entity bean doesn't technically exist until after its data has been inserted
into the database, which occurs during the ejbCreate() method. Once the data has
been inserted, the entity bean exists and can access its own primary key and remote
references, which isn't possible until after the ejbCreate() method completes and the
data is in the database. If a bean needs to access its own primary key or remote
reference after its created, but before it services any business methods, it can do so
in the ejbPostCreate() method. The ejbPostCreate() allows the bean to do any post-
create processing before it begins serving client requests. For every ejbCreate() there
must be a matching (matching arguments) ejbPostCreate() method.
The methods in the home interface that begin with "find" are called the find
methods. These are used to query the EJB server for specific entity beans, based on
the name of the method and arguments passed. Unfortunately, there is no standard
query language defined for find methods, so each vendor will implement the find
method differently. In CMP entity beans, the find methods are not implemented with
matching methods in the bean class; containers implement them when the bean is
deployed, in a vendor-specific manner. The deployer will use vendor-specific tools to
tell the container how a particular find method should behave. Some vendors will use
Object-Relational mapping tools to define the behavior of a find method while others
will simply require the deployer to enter the appropriate SQL command.
There are two basic kinds of find methods: single-entity and multi-entity find
methods. Single-entity find methods return a remote reference to the one specific
entity bean that matches the find request. If no entity beans are found, the method
throws an ObjectNotFoundException. Every entity bean must define the single-entity
find method with the method name findByPrimaryKey(), which takes the bean's
primary key type as an argument. (In the above example, the Integer type wraps the
primitive int customerID field in the bean class.) The multi-entity find
methods return a collection (Enumeration or Collection type) of entities that match the
find request. If no entities are found, the multi-entity find returns an empty
collection. (Note that an empty collection is not the same thing as a null reference.)
Remote Interface
Every entity bean must define a remote interface in addition to the home interface.
The remote interface defines the business methods of the entity bean. The following
is the remote interface definition for the Customer bean:
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
Below is an example of how a client application would use the remote interface to
interact with a bean:
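The client example was omitted from the courseware; an interaction sketch follows (the lookup is elided and the argument values are assumptions):

```java
// Locate an existing Customer entity and invoke business methods on it.
CustomerHome home = // ... obtain home reference via JNDI
Customer customer = home.findByPrimaryKey(new Integer(1001));

Name name = customer.getName();     // read state through an accessor
customer.setAddress(newAddress);    // update state through a mutator
```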
The business methods in the remote interface are delegated to the matching
business methods in the bean instance. In the Customer bean, the business methods
are all simple accessors and mutators, but they could be more complicated. In other
words, an entity's business methods are not limited to reading and writing data.
They can also perform tasks and computations.
If customers had, for example, a loyalty program that rewarded frequent use, there
might be methods to upgrade membership in the program based on an accumulation
of overnight stays. See below:
import javax.ejb.EJBObject;
import java.rmi.RemoteException;
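A hypothetical sketch of such an interface, together with a toy implementation of the upgrade rule (the method names come from the discussion below; the membership levels and night thresholds are invented for illustration):

```java
// Stand-in for javax.ejb.EJBObject.
interface EJBObject {}

interface Customer extends EJBObject {
    int addNights(int nights);       // returns the accumulated total
    String upgradeMembership();      // applies the loyalty business rule
}

// Toy in-memory implementation of the business rule.
class CustomerImpl implements Customer {
    int totalNights;
    String level = "Basic";

    public int addNights(int nights) {
        totalNights += nights;
        return totalNights;
    }
    public String upgradeMembership() {
        // Invented thresholds: 10+ nights -> Silver, 25+ -> Gold.
        if (totalNights >= 25) level = "Gold";
        else if (totalNights >= 10) level = "Silver";
        return level;
    }
}
```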
The addNights() and upgradeMembership() methods are more sophisticated than simple
accessor methods. They apply business rules to change the membership level and go
beyond reading and writing data.
Callback Methods
The bean class defines create methods that match methods in the home interface
and business methods that match methods in the remote interface. The bean class
also implements a set of callback methods that allow the container to notify the bean
of events in its lifecycle. The callback methods are defined in the javax.ejb.EntityBean
interface that is implemented by all entity beans, including the CustomerBean class.
The EntityBean interface has the following definition. Notice that the bean class
implements these methods.
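Paraphrasing the javax.ejb.EntityBean interface (the exception clauses are omitted here, and EntityContext is reduced to a stand-in so the sketch compiles alone):

```java
interface EntityContext {}   // stand-in for javax.ejb.EntityContext

interface EntityBean {
    void setEntityContext(EntityContext ctx); // called once, at the start of life
    void unsetEntityContext();                // called at the end of life
    void ejbRemove();     // the entity's data is being deleted
    void ejbActivate();   // instance was just associated with a remote reference
    void ejbPassivate();  // instance is about to be disassociated
    void ejbLoad();       // state was just refreshed from the database
    void ejbStore();      // state is about to be written to the database
}
```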
The setEntityContext() method provides the bean with an interface to the container
called the EntityContext. The EntityContext interface contains methods for obtaining
information about the context under which the bean is operating at any particular
moment. The EntityContext interface is used to access security information about the
caller; to determine the status of the current transaction or to force a transaction
rollback; or to get a reference to the bean itself, its home, or its primary key. The
EntityContext is only set once in the life of an entity bean instance, so its reference
should be stored in one of the bean instance's fields if it will be needed later.
The Customer bean above doesn't use the EntityContext, but it could. For example, it
could use the EntityContext to validate the caller's membership in a particular security
role. Below is an example, where the EntityContext is used to validate that the caller
is a Manager, the only role (security identity) that can set the credit card type on a
customer to be a WorldWide card, the no-limit card of the super wealthy. (Customers
with this card are automatically tagged for extra service.)
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;

public class CustomerBean implements EntityBean {
    int customerID;
    Address myAddress;
    Name myName;
    CreditCard myCreditCard;
    EntityContext ejbContext;

    // CALLBACK METHODS
    public void setEntityContext(EntityContext cntx) {
        ejbContext = cntx;
    }

    // BUSINESS METHODS
    public void setCreditCard(CreditCard card) {
        if (card.type.equals("WorldWide"))
            if (ejbContext.isCallerInRole("Manager"))
                myCreditCard = card;
            else
                throw new SecurityException();
        else
            myCreditCard = card;
    }
    ...
}
The unsetEntityContext() method is used at the end of the bean's lifecycle -- before
the instance is evicted from memory -- to dereference the EntityContext and perform
any last-minute clean-up.
The ejbLoad() and ejbStore() methods in CMP entities are invoked when the entity
bean's state is being synchronized with the database. The ejbLoad() method is invoked
just after the container has refreshed the bean's container-managed fields with its
state from the database. The ejbStore() method is invoked just before the container is
about to write the bean's container-managed fields to the database. These methods
can be used to modify data as it is being synchronized, which is common when the data
stored in the database differs from the data used in the bean's fields. The
methods might be used, for example, to compress data before it is stored and
decompress it when it is retrieved from the database.
In the Customer bean the ejbLoad() and ejbStore() methods might be used to convert
the dependent objects (Name, Address, CreditCard) to simple String objects and
primitive types, if the EJB container is not sophisticated enough to map the
dependent objects to the CUSTOMER table. Below is an example of how the bean
might be modified.
import javax.ejb.EntityBean;

public class CustomerBean implements EntityBean {
    // container-managed fields
    int customerID;
    String lastName;
    String firstName;
    String middleName;
    ...
    // not-container-managed fields
    Name myName;
    Address myAddress;
    CreditCard myCreditCard;

    // BUSINESS METHODS
    public Name getName() {
        return myName;
    }
    ...
    // CALLBACK METHODS
    public void ejbLoad() {
        // rebuild the dependent Name object from the container-managed Strings
        if (myName == null)
            myName = new Name();
        myName.lastName = lastName;
        myName.firstName = firstName;
        myName.middleName = middleName;
        ...
    }
    public void ejbStore() {
        // flatten the dependent Name object back into the container-managed fields
        lastName = myName.lastName;
        firstName = myName.firstName;
        middleName = myName.middleName;
        ...
    }
}
The ejbPassivate() and ejbActivate() methods are invoked on the bean by the container
just before the bean is passivated and just after the bean is activated, respectively.
Passivation in entity beans means that the bean instance is disassociated from its
remote reference so that the container can evict it from memory or reuse it. It's a
resource conservation measure the container employs to reduce the number of
instances in memory. A bean might be passivated if it hasn't been used for a while, or
as a normal operation performed by the container to maximize reuse of resources.
Some containers will evict beans from memory, while others will reuse instances for
other, more active remote references. The ejbPassivate() and ejbActivate() methods
provide the bean with a notification when it is about to be passivated
(disassociated from the remote reference) or has just been activated (associated with
a remote reference).
Bean-Managed Persistence
The bean-managed persistence (BMP) enterprise bean manages synchronizing its
state with the database as directed by the container. The bean uses a database API
(usually JDBC) to read and write its fields to the database, but the container tells it
when to do each synchronization operation and manages the transactions for the
bean automatically. Bean-managed persistence gives the bean developer the
flexibility to perform persistence operations that are too complicated for the
container or to use a data source that is not supported by the container -- custom
and legacy databases for example.
In this section, you'll modify the CustomerBean class to be a BMP bean instead of a
CMP bean. This modification will not impact the remote or home interface at all. In
fact, you won't modify the original CustomerBean directly. Instead, you'll change it to
bean-managed persistence by extending the bean and overriding the appropriate
methods. Below is the definition of the class that will extend the Customer bean class
to make it a BMP entity. In most cases you would not extend a bean to make it BMP;
you would simply implement the bean as BMP directly. This strategy (extending the CMP
bean) is used here for two reasons: it allows the bean to be either a CMP or a BMP bean,
and it conveniently cuts down on the amount of code that needs to be shown. Below is the
definition of the BMP class, which will be added to as this section proceeds:
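A skeleton of that class might look like the following (CustomerBean is reduced to a stub here so the sketch compiles on its own; the real method bodies are filled in as the section proceeds):

```java
// Stub standing in for the CMP bean class defined earlier.
class CustomerBean {
    public void ejbLoad() {}
    public void ejbStore() {}
    public void ejbRemove() {}
}

// BMP variant: overrides the callbacks with database access logic.
class CustomerBean_BMP extends CustomerBean {
    public void ejbLoad()   { /* read this bean's row via JDBC */ }
    public void ejbStore()  { /* write this bean's row via JDBC */ }
    public void ejbRemove() { /* delete this bean's row via JDBC */ }
}
```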
With BMP beans, the ejbLoad() and ejbStore() methods are used differently by the
container and bean than was the case in CMP. In BMP, the ejbLoad() and ejbStore()
methods contain code to read the bean's data from the database and to write
changes to the database, respectively. These methods are called on the bean
automatically, when the EJB server decides it's a good time to read or write data.
The CustomerBean_BMP bean manages its own persistence. In other words, the
ejbLoad() and ejbStore() methods must include database access logic, so that the
bean can load and store its data when the EJB container tells it to. The container will
execute the ejbLoad() and ejbStore() methods automatically when appropriate.
import java.sql.Connection;
In the ejbLoad() method, use the ejbContext reference to the bean's EntityContext to
obtain the instance's primary key. This ensures that you use the correct index into the
database. Obviously, the CustomerBean_BMP bean will need to use the inherited
setEntityContext() and unsetEntityContext() methods as illustrated earlier.
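A sketch of such an ejbLoad(), using plain JDBC (the CUSTOMER table and column names are assumptions; in the real bean the primary key would come from ejbContext.getPrimaryKey(), and getConnection() would return a container-managed connection as discussed below):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class CustomerLoadSketch {
    int customerID;
    String lastName;
    String firstName;

    public void ejbLoad() throws SQLException {
        // In the bean: Integer pk = (Integer) ejbContext.getPrimaryKey();
        Integer pk = Integer.valueOf(customerID);
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(
                 "SELECT LAST_NAME, FIRST_NAME FROM CUSTOMER WHERE ID = ?")) {
            ps.setInt(1, pk.intValue());
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    lastName = rs.getString("LAST_NAME");
                    firstName = rs.getString("FIRST_NAME");
                }
            }
        }
    }

    // Placeholder: the real bean obtains a pooled Connection from the
    // container's DataSource, as shown later in this section.
    Connection getConnection() throws SQLException {
        throw new SQLException("no database in this sketch");
    }
}
```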
The ejbStore() method is invoked by the container on the bean, at the end of a
transaction, just before the container attempts to commit all changes to the
database.
import java.sql.Connection;
In both the ejbLoad() and ejbStore() methods the bean synchronizes its own state with
the database using JDBC. If you studied the code carefully, you may have noticed
that the bean obtains its database connection from the mysterious this.getConnection()
method.
import java.sql.Connection;
Database connections are obtained from the container using a default JNDI context
called the JNDI Environment Naming Context (ENC). The ENC provides access to
transactional, pooled, JDBC connections through the standard connection factory, the
javax.sql.DataSource type.
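A getConnection() method along those lines might look like this sketch (the JNDI name "java:comp/env/jdbc/CustomerDB" is an assumption, and a real EJB container must supply the ENC for the lookup to succeed):

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

class ConnectionSketch {
    // Obtains a pooled, transactional Connection from the container's
    // DataSource via the JNDI ENC; the name used here is hypothetical.
    Connection getConnection() throws SQLException {
        try {
            InitialContext jndiCntx = new InitialContext();
            DataSource ds = (DataSource)
                jndiCntx.lookup("java:comp/env/jdbc/CustomerDB");
            return ds.getConnection();
        } catch (NamingException ne) {
            // Outside a container the lookup fails; surface it as a SQLException.
            throw new SQLException(ne.getMessage());
        }
    }
}
```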
In BMP, the ejbLoad() and ejbStore() methods are invoked by the container to
synchronize the bean instance with data in the database. To insert and remove
entities in the database, the ejbCreate() and ejbRemove() methods are implemented
with similar database access logic. The ejbCreate() methods are implemented so that
a new record is inserted into the database and the ejbRemove() methods are
implemented so that the entity's data is deleted from the database. The ejbCreate()
methods and the ejbRemove() method of a BMP entity are invoked by the container in
response to invocations on their corresponding methods in the home and remote
interfaces. The implementations of these methods are shown below.
In BMP, the bean class is responsible for implementing the find methods defined in
the home interface. For each find method defined in the home interface there must
be a corresponding ejbFind() method in the bean class. The ejbFind() methods locate the
appropriate bean records in the database and return their primary keys to the
container. The container converts the primary keys into bean references and returns
them to the client. Below is an example implementation of the ejbFindByPrimaryKey()
method in the CustomerBean_BMP class, which corresponds to the findByPrimaryKey()
defined in the CustomerHome interface.
Single-entity find methods like the one above return a single primary key or throw
the ObjectNotFoundException if no matching record is located. Multi-entity find
methods return a collection (java.util.Enumeration or java.util.Collection) of primary
keys. The container converts the collection of primary keys into a collection of
remote references, which are returned to the client. Below is an example of how the
multi-entity ejbFindByZipCode() method, which corresponds to the findByZipCode()
method defined in the CustomerHome interface, would be implemented in the
CustomerBean_BMP class.
With the implementation of all these methods and a few minor changes to the bean's
deployment descriptor, the CustomerBean_BMP is ready to be deployed as a BMP
entity.
There are two basic kinds of session bean: Stateless and Stateful. Stateless session
beans are made up of business methods that behave like procedures; they operate
only on the arguments passed to them when they are invoked. Stateless beans are
called "stateless" because they are transient; they do not maintain business state
between method invocations. Stateful session beans encapsulate business logic and
state specific to a client. Stateful beans are called "stateful" because they do
maintain business state between method invocations, held in memory and not
persistent.
// remote interface
public interface CreditService extends javax.ejb.EJBObject {
public void verify(CreditCard card, double amount)
throws RemoteException, CreditServiceException;
public void charge(CreditCard card, double amount)
throws RemoteException, CreditServiceException;
}
// home interface
public interface CreditServiceHome extends javax.ejb.EJBHome {
public CreditService create()
throws RemoteException, CreateException;
}
The remote interface, CreditService, defines two methods, verify() and charge() which
are used by the Hotel to verify and charge credit cards. The Hotel might use the
verify() method to make a reservation, but not charge the customer. The charge()
method would be used to charge a customer for a room. The home interface,
CreditServiceHome provides one create() method with no arguments. All home
interfaces for stateless session beans will define just one method, a no-argument
create() method, because session beans do not have find methods and they cannot
be initialized with any arguments when they are created. Stateless session beans do
not have find methods, because stateless beans are all equivalent and are not
persistent. In other words, there is no unique stateless session bean that can be
located in the database. Because stateless session beans are not persisted, they are
transient services. Every client that uses the same type of session bean gets the
same service.
Below is the bean class definition for the CreditService bean. This bean encapsulates
access to the Acme Credit Card processing services. Specifically, this bean accesses
the Acme secure web server and posts requests to validate or charge the customer's
credit card.
import javax.ejb.SessionBean;
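A simplified sketch of the bean class (the CreditCard argument is reduced to a card-number String, the Acme reply format and the approval threshold in post() are invented, and the network call is replaced by a canned reply so the sketch runs on its own):

```java
// Stand-in for javax.ejb.SessionBean.
interface SessionBean {}

class CreditServiceBean implements SessionBean {

    public void verify(String cardNumber, double amount) throws Exception {
        if (!isApproved(post(cardNumber, amount, "verify")))
            throw new Exception("CreditServiceException: card denied");
    }

    public void charge(String cardNumber, double amount) throws Exception {
        if (!isApproved(post(cardNumber, amount, "charge")))
            throw new Exception("CreditServiceException: card denied");
    }

    // In the real bean this posts to the Acme secure web server over an
    // HttpURLConnection obtained in ejbCreate(); here a canned rule stands in.
    String post(String cardNumber, double amount, String action) {
        return amount <= 5000.00 ? "status=approved" : "status=denied";
    }

    // The reply is scanned for the substring "approved"; anything else
    // is treated as a denial.
    boolean isApproved(String reply) {
        return reply != null && reply.indexOf("approved") != -1;
    }
}
```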
The CreditService stateless bean demonstrates that a stateless bean can represent a
collection of independent but related services. In this case, credit card validation and
charging are related but not necessarily interdependent. Stateless beans might be
used to access unusual resources (as is the case with the CreditService bean), to
access databases, or to perform complex computations. The CreditService bean is used as an
example in this tutorial to demonstrate the service nature of a stateless bean and to
provide a context for discussing the behavior of the callback methods. The use of
stateless session beans is not limited to the behavior illustrated in this example;
stateless beans can be used to perform any kind of service.
The CreditServiceBean class uses a URL resource factory (acmeURL()) to obtain and
maintain a reference to the Acme web server, which exists on another computer far
away. The CreditServiceBean uses the acmeURL() to obtain a connection to the
web server and post requests for the validation and charging of credit cards. The
CreditService bean is used by clients instead of a direct connection so that the
service can be better managed by the EJB container, which will pool connections and
manage transactions and security automatically for the EJB client.
The ejbCreate() method is invoked at the beginning of the bean's lifetime and is invoked only
once. The ejbCreate() method is a convenient place for initializing resource
connections and variables that will be of use to the stateless bean for its lifetime. In
the example above, the CreditServiceBean uses the ejbCreate() to obtain a reference to
the HttpURLConnection factory, which it will use throughout its lifetime to obtain
connections to the Acme web server.
The CreditServiceBean uses the JNDI ENC to obtain a URL connection factory in the
same way that the CustomerBean used the JNDI ENC to obtain a DataSource resource
factory for JDBC connections. The JNDI ENC is a default JNDI context that all beans
have access to automatically. The JNDI ENC is used to access static properties, other
beans, and resource factories like the java.net.URL and JDBC javax.sql.DataSource. In
addition, the JNDI ENC also provides access to JavaMail and Java Messaging Service
resource factories.
The ejbCreate() and ejbRemove() methods are each invoked only once in the bean's lifetime by
the container: when the bean is first created and when it is finally destroyed.
Invocations of the create() and remove() methods on its home and remote interfaces
by the client do not result in invocations on the ejbCreate() and ejbRemove() methods
on the bean instance. Instead, an invocation of the create() method provides the
client with a reference to the stateless bean type, and the remove() method
invalidates the reference. The container will decide when bean instances are actually
created and destroyed and will invoke the ejbCreate() and ejbRemove() methods at
these times. This allows stateless instances to be shared between many clients
without impacting the clients' references.
In the CreditServiceBean, the ejbCreate() and ejbRemove() methods are used to obtain a
URL connection at the beginning of the bean instance's life and to disconnect from it at
the end of the bean instance's life. Between the times that the URL connection is
obtained and disconnected it is used by the business methods. This approach is used
to reduce the number of times that a connection needs to be obtained and
disconnected, conserving resources. It may seem wasteful to maintain the URL
connection between method invocations, but stateless session beans are designed to
be shared between many clients, so that they are in constant use. As soon as a
stateless instance completes a method invocation for a client, it can immediately
service another client. Ideally, there is little down time for a stateless session bean
instance, so it makes sense to maintain the URL connection.
The verify() and charge() methods delegate their requests to the post() method, a
private helper method. The post() method uses an HttpURLConnection to submit the
credit card information to the Acme web server and return the reply to the verify() or
charge() method. The HttpURLConnection may have been disconnected automatically
by the container -- this might occur if, for example, a lot of time elapsed since its last
use -- so the post() method always invokes the connect() method, which does nothing
if the connection is already established. The verify() and charge() methods parse the
return value looking for the substring "approved", which indicates that the credit card
was not denied. If "approved" is not found, it's assumed that the card was denied
and a business exception is thrown.
The setSessionContext() method provides the bean instance with a reference to the
SessionContext which serves the same purpose as the EntityContext did for the
CustomerBean in the section on Entity beans. The SessionContext is not used in this
example.
The ejbActivate() and ejbPassivate() methods are not implemented in the CreditService
bean because passivation is not used in stateless session beans. These methods are
defined in the javax.ejb.SessionBean interface for the benefit of stateful session beans,
so an empty implementation must be provided in stateless session beans. Stateless session
beans will never provide anything but empty implementations of these methods.
Stateless session beans can also be used to access the database, as well as to
coordinate the interaction of other beans to accomplish a task. Below is the definition
of the HotelClerkBean shown earlier in this tutorial:
import javax.ejb.SessionBean;
import javax.naming.InitialContext;

public class HotelClerkBean implements SessionBean {
    InitialContext jndiContext;
    ...
    // Signature reconstructed from the method body; getHome() is a
    // private helper that performs the JNDI lookup and narrow.
    public void reserveRoom(Customer cust, RoomInfo room, CreditCard card,
                            double amount, Date from, Date to) throws Exception {
        CreditServiceHome creditHome =
            (CreditServiceHome) getHome(
                "java:comp/env/ejb/CreditServiceEJB",
                CreditServiceHome.class);
        CreditService creditAgent = creditHome.create();
        creditAgent.verify(card, amount);
        ReservationHome resHome =
            (ReservationHome) getHome(
                "java:comp/env/ejb/ReservationEJB",
                ReservationHome.class);
        Reservation reservation = resHome.create(cust.getName(),
            room, from, to);
    }
    ...
}
The HotelClerkBean is also a stateless bean. All the information needed to process a
reservation or to query a list of available rooms is obtained from the method
arguments. In the reserveRoom() method, operations on several other beans (Room,
CreditService, and Reservation) are coordinated to accomplish one larger task,
reserving a room for a customer. This is an example of a session bean managing the
interactions of other beans on behalf of the client. The availableRooms() method is
used to query the database and obtain a list of rooms -- the information is returned
to the client as a collection of data wrappers defined by the RoomInfo class. The use
of this class, shown below, is a design pattern that provides the client with a
lightweight wrapper of just the information needed.
In the EJB 1.1 specification, RMI over IIOP is the specified programming model, so
CORBA references types must be supported. CORBA references cannot be cast using
Java native casting. Instead the PortableRemoteObject.narrow() method must be used
to explicitly narrow a reference from one type to its subtype. Since JNDI always
returns an Object type, all bean references should be explicitly narrowed to support
portability between containers.
As an example, the HotelClerk bean can be modified to be a stateful bean which can
maintain conversational-state between method invocations. This would be useful, for
example, if you want the HotelClerk bean to be able to take many reservations, but
then process them together under one credit card. This occurs frequently, when
families need to reserve two or more rooms or when corporations reserve a block of
rooms for some event. Below the HotelClerkBean is modified to be a stateful bean.
import javax.ejb.SessionBean;
import javax.naming.InitialContext;

// fields of the stateful HotelClerkBean:
InitialContext jndiContext;
//conversational-state
Customer cust;
Vector resVector = new Vector();
...
// fragment of the method that processes the batched reservations:
while (resEnum.hasMoreElements()) {
    ReservationInfo resInfo =
        (ReservationInfo) resEnum.nextElement();
    ...
In the stateful version of the HotelClerkBean class, the conversational state is the
Customer reference, which is obtained when the bean is created, and the Vector of
ReservationInfo objects. By maintaining the conversational-state in the bean, the
client is absolved of the responsibility of keeping track of this session state. The bean
keeps track of the reservations and processes them in a batch when the
serverRooms() method is invoked.
To conserve resources, stateful session beans may be passivated when they are not
in use by a client. Passivation in stateful session beans is different than in entity
beans. In stateful beans, passivation means the bean's conversational state is written
to secondary storage (often disk) and the instance is evicted from memory. The
client's reference to the bean is not affected by passivation; it remains alive and
usable while the bean is passivated. When the client invokes a method on a bean
that is passivated, the container will activate the bean by instantiating a new
instance and populating its conversational-state with the state written to secondary
storage. This passivation/activation process is often accomplished using simple Java
serialization but it can be implemented in other proprietary ways as long as the
mechanism behaves the same as normal serialization. (One exception to this is that
transient fields do not need to be set to their default initial values when a bean is
activated.)
Stateful session beans, unlike stateless beans, do use the ejbActivate() and
ejbPassivate() methods. The container will invoke these methods to notify the bean
when it is about to be passivated (ejbPassivate()) and immediately following activation
(ejbActivate()). Bean developers should use these methods to close open resources
and to do other clean-up before the instance's state is written to secondary storage
and the instance is evicted from memory.
The ejbRemove() method is invoked on the stateful instance when the client invokes
the remove() method on the home or remote interface. The bean should use the
ejbRemove() method to do the same kind of clean-up performed in the ejbPassivate()
method.
A deployment descriptor has a predefined format that all EJB compliant beans must
use and all EJB compliant servers must know how to read. This format is specified in
an XML Document Type Definition, or DTD. The deployment descriptor describes the
type of bean (session or entity) and the classes used for the remote, home, and
bean class. It also specifies the transactional attributes of every method in the bean,
which security roles can access each method (access control), and whether
persistence in the entity beans is handled automatically or is performed by the bean.
Below is an example of an XML deployment descriptor used to describe the Customer
bean:
<?xml version="1.0"?>
<ejb-jar>
<enterprise-beans>
<entity>
<description>
This bean represents a customer
</description>
<ejb-name>CustomerBean</ejb-name>
<home>CustomerHome</home>
<remote>Customer</remote>
<ejb-class>CustomerBean</ejb-class>
<persistence-type>Container</persistence-type>
<prim-key-class>Integer</prim-key-class>
<reentrant>False</reentrant>
<cmp-field><field-name>myAddress</field-name></cmp-field>
<cmp-field><field-name>myName</field-name></cmp-field>
<cmp-field><field-name>myCreditCard</field-name></cmp-field>
</entity>
</enterprise-beans>
<assembly-descriptor>
<security-role>
<description>
This role represents everyone who is allowed full access
to the Customer bean.
</description>
<role-name>everyone</role-name>
</security-role>
<method-permission>
<role-name>everyone</role-name>
<method>
<ejb-name>CustomerBean</ejb-name>
<method-name>*</method-name>
</method>
</method-permission>
<container-transaction>
<description>
All methods require a transaction
</description>
<method>
<ejb-name>CustomerBean</ejb-name>
<method-name>*</method-name>
</method>
<trans-attribute>Required</trans-attribute>
</container-transaction>
</assembly-descriptor>
</ejb-jar>
EJB-capable application servers usually provide tools, which can be used to build the
deployment descriptors; this greatly simplifies the process.
When a bean is to be deployed, its remote, home, and bean class files and the XML
deployment descriptor must be packaged into a JAR file. The deployment descriptor
must be stored in the JAR under the special name META-INF/ejb-jar.xml. This JAR file,
called an ejb-jar, is vendor neutral; it can be deployed in any EJB container that
supports the complete EJB specification. When a bean is deployed in an EJB
container its XML deployment descriptor is read from the JAR to determine how to
manage the bean at runtime. The person deploying the bean will map attributes of
the deployment descriptor to the container's environment. This will include mapping
access security to the environment's security system, adding the bean to the EJB
container's naming system, etc. Once the bean developer has finished deploying the
bean it will become available for client applications and other beans to use.
The remote interface defines the business methods, such as accessor and mutator
methods for changing a customer's name, or business methods that perform tasks
like using the HotelClerk bean to reserve a room at a hotel. Below is an example of
how a Customer bean might be accessed from a client application. In this case the
home interface is CustomerHome and the remote interface is Customer.
CustomerHome home;
Object ref;
A client first obtains a reference to the home interface by using JNDI to look up
the server bean's home. In EJB 1.1, Java RMI-IIOP is the specified programming model. As
a consequence, all CORBA references types must be supported. CORBA references
cannot be cast using Java native casting. Instead the PortableRemoteObject.narrow()
method must be used to explicitly narrow a reference from one type to its subtype.
Since JNDI always returns an Object type, all bean references should be explicitly
narrowed to support portability between containers.
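Put together, the sequence looks like the following sketch (the stub classes stand in for javax.naming.InitialContext and javax.rmi.PortableRemoteObject, which require a real container and naming service; the JNDI name is an assumption):

```java
interface Customer {}
interface CustomerHome { Customer create(); }

// Stub: a real InitialContext would consult the naming service here.
class InitialContext {
    Object lookup(String name) {
        return (CustomerHome) () -> new Customer() {};
    }
}

// Stub: the real narrow() converts a CORBA reference; this one just casts.
class PortableRemoteObject {
    static Object narrow(Object ref, Class<?> type) { return type.cast(ref); }
}

class LookupSketch {
    public static void main(String[] args) {
        // JNDI always returns Object, so the reference must be narrowed,
        // not cast, before it can be used as a CustomerHome.
        Object ref = new InitialContext().lookup("java:comp/env/ejb/CustomerEJB");
        CustomerHome home = (CustomerHome)
            PortableRemoteObject.narrow(ref, CustomerHome.class);
        Customer customer = home.create();
        System.out.println(customer != null); // prints true
    }
}
```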
After the home interface is obtained, the client uses the methods defined on the
home interface to create, find, or remove the server bean. Invoking one of the
create() methods on the home interface returns the client a remote reference to the
server bean, which the client uses to perform its job.
EJB QL
EJB QL is an SQL-like language that provides a standard syntax for the finder
methods used to locate beans. It also provides for select methods: private query
methods usable only internally by the bean class.
In EJB 1.1, the behavior of CMP finder methods was defined with vendor-specific
tools rather than a standard query language. Because of this, developers have to
redefine queries for the finder methods whenever an application is moved from one
vendor's application server to another's. Obviously, this makes applications built
using CMP less portable.
What's more, EJB 1.1 offered no standard way to define a query for a CMP entity
bean that navigates to other entity beans through relationships or associations.
There was no proper mechanism to define queries that navigate from one entity bean
to its dependent classes and the member variables of those dependent classes.
EJB 2.0 deals with these shortcomings by defining the EJB Query Language. EJB QL
is based on the SQL-92 specification for defining various finder and select methods of
entity beans with CMP. The EJB QL query string consists of three clauses: SELECT,
FROM, and WHERE. Among other things, EJB QL offers a standard way to define
relationships between entity beans and dependent classes by introducing abstract
schema types and relationships in the deployment descriptor. EJB QL also defines
queries for navigation using abstract schema names and relationships.
An EJB QL query must always contain SELECT and FROM clauses; the WHERE clause is
optional. The FROM clause provides declarations for the identification variables,
based on abstract schema names, used for navigating through the schema. The SELECT clause
uses these identification variables to define the return type of the query, and the
WHERE clause defines the conditional query.
The query for EJB QL is defined in the deployment descriptor using the <query> tag
as shown below:
<query>
<query-method>
<method-name></method-name>
<method-params>
<method-param></method-param>
</method-params>
</query-method>
<result-type-mapping></result-type-mapping>
<ejb-ql></ejb-ql>
</query>
You specify the name of the finder or select method in <method-name> and the
parameters in <method-param>. The <result-type-mapping> indicates the return
type, and can contain either Local (the default) or Remote values. The query string is
in the <ejb-ql> tag.
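For illustration, a filled-in declaration for a hypothetical finder (the method name, parameter, and the totalAmount schema field are invented) might read:

```xml
<query>
    <query-method>
        <method-name>findLargeOrders</method-name>
        <method-params>
            <method-param>double</method-param>
        </method-params>
    </query-method>
    <result-type-mapping>Remote</result-type-mapping>
    <ejb-ql>SELECT OBJECT(o) FROM Order AS o WHERE o.totalAmount &gt; ?1</ejb-ql>
</query>
```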
I will give a brief description of the finder and select methods since these methods
use EJB QL to define the queries. EJB 2.0 defines the finder and select methods for
entity beans. The select method is a new addition to the specification.
Finder Methods: Finder methods retrieve either a single entity bean instance or a
collection of instances from the persistent store, typically a relational database.
These methods are defined in the home interface(s) of an entity bean and are therefore
exposed to the client. A home interface can either be a Remote Home interface, EJBHome, or a Local
Home interface, EJBLocalHome. The return type of the finder method defined in the
remote home interface is either the entity bean's remote interface or a collection of
objects implementing the entity bean's remote interface. The return type of the
finder method defined in the local home interface is either the entity bean's local
interface or a collection of objects implementing the entity bean's local interface. For
example:
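A hypothetical remote home interface for the Order bean used below, declaring one single-entity and one multi-entity finder (stub types stand in for the javax.ejb and java.rmi classes; the finder names are illustrative):

```java
import java.util.Collection;

// Stand-ins so the sketch compiles on its own.
interface EJBHome {}
class RemoteException extends Exception {}
class FinderException extends Exception {}
interface Order {}

interface OrderHome extends EJBHome {
    // Single-entity finder: returns the one matching Order's remote interface.
    Order findByPrimaryKey(Integer orderId)
        throws RemoteException, FinderException;
    // Multi-entity finder: returns a Collection of remote references.
    Collection findByProductType(String productType)
        throws RemoteException, FinderException;
}
```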
Select Methods: Select methods are private query methods that can be used only
internally by the entity bean class; they are not exposed to the client through the
home interface. A select method is declared as an abstract method on the bean class,
in one of two forms:
• ejbSelect<METHOD>
• ejbSelect<METHOD>InEntity
Example:
public abstract class OrderBean implements javax.ejb.EntityBean {
...
public abstract java.util.Collection ejbSelectAllOrderedProducts(Date date)
throws FinderException;
...
public abstract java.util.Collection ejbSelectAllOrderedProductsInEntity(Date date)
throws FinderException;
}
The Order example below explains each clause in detail. The relationships between
OrderEJB, LineItemEJB, ProductEJB, and AddressEJB are declared in the deployment
descriptor as shown in the following fragment:
<relationships>
<ejb-relation>
<ejb-relation-name>Order-LineItem</ejb-relation-name>
<ejb-relationship-role>
<ejb-relationship-role-name>
order-has-lineitems
</ejb-relationship-role-name>
<multiplicity>One</multiplicity>
<relationship-role-source>
<ejb-name>OrderEJB</ejb-name>
</relationship-role-source>
<cmr-field>
<cmr-field-name>lineItems</cmr-field-name>
<cmr-field-type>java.util.Collection
</cmr-field-type>
</cmr-field>
</ejb-relationship-role>
<ejb-relationship-role>
<ejb-relationship-role-name>lineitem-belongsto-order
</ejb-relationship-role-name>
<multiplicity>Many</multiplicity>
<cascade-delete/>
<relationship-role-source>
<ejb-name>LineItemEJB</ejb-name>
</relationship-role-source>
<cmr-field>
<cmr-field-name>order</cmr-field-name>
</cmr-field>
</ejb-relationship-role>
</ejb-relation>
<!--
ONE-TO-MANY unidirectional relationship:
Product is not aware of its relationship with LineItem
-->
<ejb-relation>
<ejb-relation-name>Product-LineItem</ejb-relation-name>
<ejb-relationship-role>
<ejb-relationship-role-name>product-has-lineitems</ejb-relationship-role-name>
<multiplicity>One</multiplicity>
<relationship-role-source>
<ejb-name>ProductEJB</ejb-name>
</relationship-role-source>
<!-- since Product does not know about LineItem there is no cmr field in Product
for accessing Lineitem -->
</ejb-relationship-role>
<ejb-relationship-role>
<ejb-relationship-role-name>lineitem-for-product</ejb-relationship-role-name>
<multiplicity>Many</multiplicity>
<relationship-role-source>
<ejb-name>LineItemEJB</ejb-name>
</relationship-role-source>
<cmr-field>
<cmr-field-name>product</cmr-field-name>
</cmr-field>
</ejb-relationship-role>
</ejb-relation>
</relationships>
FROM Clause
The FROM clause defines the domain of the query by declaring the identification
variables. Identification variables cannot be declared in the SELECT and WHERE
clauses. Instead, the SELECT and WHERE clauses can use only those identification
variables defined in the FROM clause. You can define multiple identification variables
in the FROM clause.
Any valid identifier may be used as an identification variable, though there are a few
restrictions, noted below. Identification variables are case insensitive.
For example, to select all orders containing Floppy Drive products, the query is:
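A sketch of that query, based on the Order and LineItem schema used throughout this example (the 'Floppy Drive' product_type value is assumed):

```sql
SELECT OBJECT(o) FROM Order o, IN(o.lineItems) li
WHERE li.product.product_type = 'Floppy Drive'
```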
Here the FROM clause declares the identifier "o" as a range variable and "li" as a
collection member variable. A range variable is declared with the abstract schema
type over which it ranges, optionally using the reserved identifier AS; a collection
member variable is declared using the reserved identifier IN. An identification
variable cannot be any of the following:
• a reserved identifier, such as IN or AS
• the abstract-schema-type of a range variable, or
• the abstract-schema-type of an associated entity bean.
The range variable "o" designates the abstract schema type Order. Similarly, "li"
designates the abstract schema type LineItem, and the path expression li.product
designates the abstract schema type Product. The expression li.product.product_type
in the WHERE clause is of type java.lang.String. Since all clauses are evaluated from
left to right in EJB QL, the identification variable "li" utilizes the results of the
navigation on "o".
The identifier OBJECT in the SELECT clause is required, because the OBJECT operator
must qualify all stand-alone identification variables in the SELECT clause.
You may also declare a range variable using the optional identifier, AS. Therefore, the
FROM clause in the above query becomes:
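A sketch of the same FROM clause using the optional AS identifier:

```sql
FROM Order AS o, IN(o.lineItems) li
```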
Also, you may define more than one range variable in the FROM clause. For example:
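A sketch with two range variables over the same abstract schema type (a totalPrice cmp-field is assumed for illustration):

```sql
SELECT OBJECT(o1) FROM Order o1, Order o2
WHERE o1.totalPrice > o2.totalPrice
```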
WHERE Clause
The WHERE clause defines conditional expressions to select objects or values that
satisfy the expression.
All of the identification variables used in a WHERE clause in EJB QL must be declared
in the FROM clause. You may also pass input parameters to the finder and select
methods; input parameters may appear only in the WHERE clause of a query.
Input parameters are designated by a question mark (?) prefix, followed by a one-
based index of the parameter in the method declaration (i.e., ?1, ?2).
For example, the deployment descriptor entry defining the query for the
ejbSelectLineItems select method is shown below; a similar entry could define a
select method that chooses all products based on name and price.
<query>
<description>
Method to find orders with a specified number of lineItems</description>
<query-method>
<method-name>ejbSelectLineItems</method-name>
<method-params>
<method-param>int</method-param>
</method-params>
</query-method>
<result-type-mapping>Local</result-type-mapping>
<ejb-ql>
SELECT OBJECT (o) FROM Order AS o, IN (o.lineItems) li
WHERE li.quantity = ?1
</ejb-ql>
</query>
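The second query mentioned above, choosing all products based on name and price, might be sketched as follows (name and price cmp-fields on Product are assumed):

```sql
SELECT OBJECT(p) FROM Product p
WHERE p.name = ?1 AND p.price = ?2
```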
In the same way, you may define the WHERE clause with input parameters.
The number of distinct input parameters in an EJB QL query must not exceed the
number of parameters of the corresponding finder or select method. It is not required
to use all of the method's parameters in the query, though.
You can also pass an input parameter that corresponds to a particular EJBObject or
EJBLocalObject. Containers map these input parameters to the abstract-schema-type
values.
Next, I will show the various comparison operators available for use with the WHERE
clause. In addition to the navigation operator (.) used in the queries above, EJB QL
supports the fundamental arithmetic operators (unary +/-, multiplication and
division, addition and subtraction), comparison operators (=, >, >=, <, <=, <>),
and logical operators (NOT, AND, OR).
You can also use the comparison operators BETWEEN and NOT BETWEEN in a query.
For example, the query to select all Line Items with a quantity between 100 and 200
is shown here:
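A sketch of that query:

```sql
SELECT OBJECT(li) FROM LineItem li
WHERE li.quantity BETWEEN 100 AND 200
```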
Using comparison operators >= and <=, the WHERE clause is now in the following
expression:
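A sketch of the equivalent WHERE clause:

```sql
WHERE li.quantity >= 100 AND li.quantity <= 200
```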
The following query selects all Line Items with a quantity less than 100 or more than
200:
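A sketch using NOT BETWEEN:

```sql
SELECT OBJECT(li) FROM LineItem li
WHERE li.quantity NOT BETWEEN 100 AND 200
```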
EJB QL also supports the IN and LIKE expressions. For example, to select the
address(es) of an office in various cities, the query expression of the WHERE clause
is:
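A sketch of such a WHERE clause (an Address identification variable "a" with a city cmp-field, and the Florida city names, are assumed):

```sql
a.city IN ('Orlando', 'Tampa')
```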
The expression results are true for Florida and false for Texas.
Usage of NOT IN is just the opposite of the above:
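For example, with the same assumed schema:

```sql
a.city NOT IN ('Orlando', 'Tampa')
```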
Here, the expression results are true for Texas and false for Florida.
EJB QL's LIKE expression takes the form single_valued_path_expression [NOT] LIKE
pattern-value [ESCAPE escape-character]. The single_valued_path_expression must
result in a String value. You may use any string literal as the pattern-value. An
underscore (_) represents any single character and a percent sign (%) any sequence
of characters, including an empty sequence. The escape-character is a single-
character string literal, used for escaping the special meaning of the underscore and
percent characters in the pattern-value.
For example, the following query selects all employees whose names start with
CHRIS:
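A sketch of that WHERE clause (an Employee identification variable "emp" with a name cmp-field is assumed):

```sql
emp.name LIKE 'CHRIS%'
```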
And to select all employees whose names don't start with CHRIS, use the following:
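For example, with the same assumed schema:

```sql
emp.name NOT LIKE 'CHRIS%'
```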
If the value of emp.name in the above expression is NULL, then the value of the
expression is unknown. To avoid this, check whether a
single_valued_path_expression is NULL by using the IS NULL operator. The following
expression returns true when an employee's name is a NULL value.
emp.name IS NULL
The only expressions that may use a collection-valued path expression are the
comparison expressions IS [NOT] EMPTY and [NOT] MEMBER [OF]. Note that in EJB QL queries,
any comparison or arithmetic operations with a NULL value or an unknown value
always yields an unknown value. Path expressions with NULL values during
evaluation return NULL values.
DISTINCT works the same way as in SQL, selecting unique values from the query
result.
You may also restrict the return type to contain only unique values by declaring
java.util.Set as the return type for the finder or select methods. Since java.util.Set
doesn't allow duplicate values, whenever the return type is java.util.Set, the
container internally applies DISTINCT to the query. Therefore, it is not required to
explicitly use the DISTINCT identifier in the query string. But when the return type is
a java.util.Collection, then it requires an explicit DISTINCT identifier in the query
expression to get unique values.
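A sketch (the home interface and finder name are hypothetical) of a finder declared with java.util.Set, for which the container applies DISTINCT internally:

```java
public interface ProductHomeLocal extends javax.ejb.EJBLocalHome {
    // java.util.Set return type: the container applies DISTINCT
    // to the underlying query, so no duplicates are returned
    public java.util.Set findByProductType(String type)
        throws javax.ejb.FinderException;
}
```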
The SELECT clause determines the type of values returned by a query. For example,
to get all orders, the query is:
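A sketch of that query:

```sql
SELECT OBJECT(o) FROM Order o
```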
Similarly, a query can get all products that are associated with a line item by
selecting the single-valued path li.product. But suppose a query instead tries to get
all line items of an order by selecting o.lineItems:
Looking carefully, the above query doesn't work, because the return type of the
query is not a single-valued expression. The earlier query to get all products
associated with line items works fine. This is because the relationship between line
item and product is one-to-one, whereas the relationship between order and line
items is one-to-many. The single valued path expression is the
single_valued_cmr_field, which is a cmr-field name in one-to-one or many-to-one
relationship. Since the relationship between order and line is one-to-many, the result
yields a collection_valued_path_expression type.
Therefore, the SELECT clause must be specified to return a single valued expression.
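A sketch of a valid form: instead of selecting the collection-valued o.lineItems, declare a collection member variable in the FROM clause and select it:

```sql
SELECT OBJECT(li) FROM Order o, IN(o.lineItems) li
```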
The SELECT clause of a query defined for a finder method must always correspond to
the abstract schema type of the entity bean for which the finder method is defined.
The SELECT clause of a query defined for a select method may return abstract
schema types of other entity beans, as well as the values of cmp-fields.
Now, you will see some queries for select methods whose return type is that of a
cmp-field. To select the names of all products that have been ordered:
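A sketch of that query:

```sql
SELECT li.product.name FROM Order o, IN(o.lineItems) li
```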
Here, Order is the abstract-schema-name of OrderEJB, lineItems and product are
cmr-field names, and name is the cmp-field name defined for the entity bean
ProductEJB.
EJB QL also provides built-in functions for performing simple operations on objects of
String class and primitive types. Specifically, EJB QL provides the following built-in
functions:
String Functions:
• CONCAT (String, String) - This function returns the concatenated string value.
• SUBSTRING (String, start, length) - This function returns the substring of the
original string.
• LOCATE (String, String [, start]) - This function returns, as an int, the position
of one string within the other, optionally starting the search at start.
• LENGTH (String) - This function returns the length of the string passed as a
parameter.
Arithmetic Functions:
• ABS (number)
• SQRT (double)
Advantages
• EJB QL queries are defined in terms of the beans' abstract schema rather than
any particular database, so the container can translate them into the query
language of the underlying data store. This keeps applications portable across
application servers and databases.
Limitations
• Some of the useful features of SQL are not yet provided by EJB QL. For
example, the ORDER BY identifier, which becomes very handy as the
application becomes large and complex, is not yet supported in EJB QL. Some
of the application servers may provide these features, but usage of the same
may limit portability across application servers.
• Date and time values should be passed as millisecond values using the Java
primitive type long.
• EJB QL does not support fixed decimal comparison in arithmetic expressions.
• String and Boolean comparison is restricted to = and <>. However, the built-
in functions (CONCAT, SUBSTRING, etc.) can be used to perform other
operations on Strings.
• EJB QL does not support comments.
Summary
The addition of EJB QL to the new EJB 2.0 specification gives the distributed
component architecture a standard way of defining queries. EJB QL allows
applications to be more portable. I believe future versions of EJB QL may provide
support for more built-in functions, as well as other SQL features like ORDER BY. In
the EJB 2.0 specification, the data model for CMP does not currently support
inheritance; therefore, you cannot compare objects or value classes of different
types. This may be addressed in future versions of EJB QL.
Message-driven beans can receive JMS messages and process them. While a
message-driven bean is responsible for processing messages, its container takes
care of automatically managing the component's entire environment, including
transactions, security, resources, concurrency, and message acknowledgment.
One of the most important aspects of message-driven beans is that they can
consume and process messages concurrently. This capability provides a significant
advantage over traditional JMS clients, which must be custom-built to manage
resources, transactions, and security in a multithreaded environment. The message-
driven bean containers provided by EJB manage concurrency automatically, so the
bean developer can focus on the business logic of processing the messages. The MDB
can receive hundreds of JMS messages from various applications and process them
all at the same time, because numerous instances of the MDB can execute
concurrently in the container.
The JMS messages that notify the ReservationProcessor EJB of new reservations
might come from another application in the enterprise or an application in some
other organization. When the ReservationProcessor EJB receives a message, it
creates a new Reservation EJB (adding it to the database), processes the payment
using the ProcessPayment EJB, and sends out a ticket. This process is illustrated in
Figure 13-3.
package com.titan.reservationprocessor;
import javax.jms.Message;
import javax.jms.MapMessage;
import com.titan.customer.*;
import com.titan.cruise.*;
import com.titan.cabin.*;
import com.titan.reservation.*;
import com.titan.processpayment.*;
import com.titan.travelagent.*;
import java.util.Date;
import javax.naming.Context;
import javax.ejb.*;

public class ReservationProcessorBean
    implements javax.ejb.MessageDrivenBean, javax.jms.MessageListener {

    MessageDrivenContext ejbContext;
    Context jndiContext;

    // setMessageDrivenContext(), ejbCreate(), and ejbRemove() omitted here

    public void onMessage(Message message) {
        try {
            MapMessage reservationMsg = (MapMessage) message;
            // ... extract the reservation data and look up the Customer,
            // Cruise, and Cabin EJBs (helper methods shown later) ...
            ReservationLocal reservation =
                resHome.create(customer, cruise, cabin, price, new Date());
            // ... process the payment and build the ticket ...
            deliverTicket(reservationMsg, ticket);
        } catch(Exception e) {
            throw new EJBException(e);
        }
    }
}
MessageDrivenBean Interface
The Message-Driven Bean class is required to implement the
javax.ejb.MessageDrivenBean interface, which defines callback methods similar to
those in entity and session beans. Here is the definition of the MessageDrivenBean
interface:
package javax.ejb;
public interface MessageDrivenBean extends EnterpriseBean {
    public void setMessageDrivenContext(MessageDrivenContext ctx)
        throws EJBException;
    public void ejbRemove() throws EJBException;
}
MessageDrivenContext
The MessageDrivenContext simply extends the EJBContext; it does not add any new
methods. The EJBContext is defined as:
package javax.ejb;
public interface EJBContext {
// transaction methods
public javax.transaction.UserTransaction getUserTransaction()
throws java.lang.IllegalStateException;
public boolean getRollbackOnly() throws java.lang.IllegalStateException;
public void setRollbackOnly() throws java.lang.IllegalStateException;
// security methods
public java.security.Principal getCallerPrincipal();
public boolean isCallerInRole(java.lang.String roleName);
// deprecated methods
public java.security.Identity getCallerIdentity();
public boolean isCallerInRole(java.security.Identity role);
public java.util.Properties getEnvironment();
}
Only the transactional methods that the MessageDrivenContext inherits from
EJBContext are available to message-driven beans. The home methods, getEJBHome()
and getEJBLocalHome(), throw a RuntimeException if invoked, because MDBs do not
have home interfaces.
Message-driven beans also have access to their own JNDI environment naming
contexts (ENCs), which provide the MDB instances access to environment entries,
other enterprise beans, and resources. For example, the ReservationProcessor EJB
takes advantage of the JNDI ENC to obtain references to the Customer, Cruise,
Cabin, Reservation, and ProcessPayment EJBs as well as a JMS
QueueConnectionFactory and Queue for sending out tickets.
MessageListener Interface
In addition to the MessageDrivenBean interface, MDBs implement the
javax.jms.MessageListener interface, which defines the onMessage() method; bean
developers implement this method to process JMS messages received by a bean. It's
in the onMessage() method that the bean processes the JMS message:
package javax.jms;
public interface MessageListener {
public void onMessage(Message message);
}
It's interesting to consider why the MDB implements the MessageListener interface
separately from the MessageDrivenBean interface. Why not just put the onMessage()
method, MessageListener's only method, in the MessageDrivenBean interface so that
there is only one interface for the MDB class to implement? This was the solution
taken by an early proposed version of EJB 2.0. However, it was quickly realized that
message-driven beans could, in the future, process messages from other types of
systems, not just JMS. To make the MDB open to other messaging systems, it was
decided that it should implement the javax.jms.MessageListener interface separately,
thus separating the concept of the message-driven bean from the types of messages
it can process. In a future version of the specification, other types of MDB might be
available for technologies such as SMTP (email) or JAXM (Java API for XML
Messaging) for ebXML. These technologies will use methods other than onMessage(),
which is specific to JMS.
Date expirationDate =
new Date(reservationMsg.getLong("CreditCardExpDate"));
String cardNumber = reservationMsg.getString("CreditCardNum");
String cardType = reservationMsg.getString("CreditCardType");
CreditCardDO card = new CreditCardDO(cardNumber,
expirationDate, cardType);
The ReservationProcessor EJB needs to access the Customer, Cruise, and Cabin EJBs
in order to process the reservation. The MapMessage contains the primary keys for
these entities; the ReservationProcessor EJB uses helper methods (getCustomer(),
getCruise(), and getCabin()) to look up the entity beans and obtain EJB object
references to them:
    return cruise;
}
public CabinLocal getCabin(Integer key)
    throws NamingException, FinderException {
    // ... look up the CabinHomeLocal in the JNDI ENC and find the
    // cabin by primary key, following the same pattern as getCruise() ...
}
Back in onMessage(), the bean creates the reservation, obtains the ProcessPayment
EJB's home from the JNDI ENC, and finally delivers the ticket:
ReservationLocal reservation =
    resHome.create(customer, cruise, cabin, price, new Date());
Object ref =
    jndiContext.lookup("java:comp/env/ejb/ProcessPaymentHomeRemote");
deliverTicket(reservationMsg, ticket);
This illustrates that, like a session bean, the MDB can access any other entity or
session bean and use that bean to complete a task. In this way, the MDB fulfills its
role as an integration point in B2B application scenarios. An MDB can manage a
process and interact with other beans as well as resources. For example, it is commonplace
for an MDB to use JDBC to access a database based on the contents of the message
it is processing.
QueueConnectionFactory factory = (QueueConnectionFactory)
    jndiContext.lookup("java:comp/env/jms/QueueFactory");
// ... create a connection, session, and QueueSender for the
// JMSReplyTo destination, then send the ticket message ...
sender.send(message);
connect.close();
As stated earlier, every message type has two parts: a message header and a
message body (a.k.a. payload). The message header contains routing information
and may also have properties for message filtering and other attributes, including a
JMSReplyTo attribute. When a JMS client sends a message, it may set the
JMSReplyTo attribute to be any destination accessible to its JMS provider. In the case
of the reservation message, the sender set the JMSReplyTo attribute to the queue to
which the resulting ticket should be sent. Another application can access this queue
to read tickets and distribute them to customers or store the information in the
sender's database.
You can also use the JMSReplyTo address to report business errors that occur while
processing the message. For example, if the Cabin is already reserved, the
ReservationProcessor EJB might send an error message to the JMSReplyTo queue
explaining that the reservation could not be processed. Including this type of error
handling is left as an exercise for the reader.
Here's the XML deployment descriptor that defines the ReservationProcessor EJB.
This deployment descriptor also defines the Customer, Cruise, Cabin, and other
beans, but these are left out here for brevity:
<enterprise-beans>
...
<message-driven>
<ejb-name>ReservationProcessorEJB</ejb-name>
<ejb-class>
com.titan.reservationprocessor.ReservationProcessorBean
</ejb-class>
<transaction-type>Container</transaction-type>
<message-selector>MessageFormat = 'Version 3.4'</message-selector>
<acknowledge-mode>Auto-acknowledge</acknowledge-mode>
<message-driven-destination>
<destination-type>javax.jms.Queue</destination-type>
</message-driven-destination>
<ejb-ref>
<ejb-ref-name>ejb/ProcessPaymentHomeRemote</ejb-ref-name>
<ejb-ref-type>Session</ejb-ref-type>
<home>com.titan.processpayment.ProcessPaymentHomeRemote</home>
<remote>com.titan.processpayment.ProcessPaymentRemote</remote>
</ejb-ref>
<ejb-ref>
<ejb-ref-name>ejb/CustomerHomeRemote</ejb-ref-name>
<ejb-ref-type>Entity</ejb-ref-type>
<home>com.titan.customer.CustomerHomeRemote</home>
<remote>com.titan.customer.CustomerRemote</remote>
</ejb-ref>
<ejb-local-ref>
<ejb-ref-name>ejb/CruiseHomeLocal</ejb-ref-name>
<ejb-ref-type>Entity</ejb-ref-type>
<local-home>com.titan.cruise.CruiseHomeLocal</local-home>
<local>com.titan.cruise.CruiseLocal</local>
</ejb-local-ref>
<ejb-local-ref>
<ejb-ref-name>ejb/CabinHomeLocal</ejb-ref-name>
<ejb-ref-type>Entity</ejb-ref-type>
<local-home>com.titan.cabin.CabinHomeLocal</local-home>
<local>com.titan.cabin.CabinLocal</local>
</ejb-local-ref>
<ejb-local-ref>
<ejb-ref-name>ejb/ReservationHomeLocal</ejb-ref-name>
<ejb-ref-type>Entity</ejb-ref-type>
<local-home>com.titan.reservation.ReservationHomeLocal</local-home>
<local>com.titan.reservation.ReservationLocal</local>
</ejb-local-ref>
<security-identity>
<run-as>
<role-name>everyone</role-name>
</run-as>
</security-identity>
<resource-ref>
<res-ref-name>jms/QueueFactory</res-ref-name>
<res-type>javax.jms.QueueConnectionFactory</res-type>
<res-auth>Container</res-auth>
</resource-ref>
</message-driven>
...
</enterprise-beans>
<message-selector>
Message selectors allow an MDB to be more selective about the messages it receives
from a particular topic or queue. Message selectors use Message properties as
criteria in conditional expressions. (Message selectors are also based on message
headers, which are outside the scope of this chapter.) These conditional expressions
use Boolean logic to declare which messages should be delivered to a client.
Message properties, upon which message selectors are based, are additional headers
that can be assigned to a message. They give the application developer or JMS
vendor the ability to attach more information to a message. The Message interface
provides several accessor and mutator methods for reading and writing properties.
Properties can have a String value or one of several primitive values (boolean, byte,
short, int, long, float, double). The naming of properties, together with their values
and conversion rules, is strictly defined by JMS.
message.setStringProperty("MessageFormat", "Version 3.4");
sender.send(message);
The message selectors are based on a subset of the SQL-92 conditional expression
syntax that is used in the WHERE clauses of SQL statements. They can become fairly
complex, including the use of literal values, Boolean expressions, unary operators,
and so on.
<message-selector>
<![CDATA[
PhysicianType IN ('Chiropractic','Psychologists','Dermatologist')
AND PatientGroupID LIKE 'ACME%'
]]>
</message-selector>
<message-selector>
<![CDATA[
InventoryID ='S93740283-02' AND Quantity BETWEEN 1000 AND 13000
]]>
</message-selector>
<message-selector>
<![CDATA[
TotalCharge > 500.00 AND ((TotalCharge / ItemCount) >= 75.00)
AND State IN ('MN','WI','MI','OH')
]]>
</message-selector>
<acknowledge-mode>
JMS has the concept of acknowledgment, which means that the JMS client notifies
the JMS provider (message router) when a message is received. In EJB, it's the MDB
container's responsibility to send an acknowledgment to the JMS provider when it
receives a message. Acknowledging a message tells the JMS provider that the MDB
container has received the message and processed it using an MDB instance. Without
an acknowledgment, the JMS provider will not know whether the MDB container has
received the message, so it will try to redeliver it. This can cause problems. For
example, once we have processed a reservation message using the
ReservationProcessor EJB, we don't want to receive the same message again.
When transactions are involved, the acknowledgment mode set by the bean provider
is ignored. In this case, the acknowledgment is performed within the context of the
transaction. If the transaction succeeds, the message is acknowledged. If the
transaction fails, the message is not acknowledged. If the MDB is using container-
managed transactions, as it will in most cases, the acknowledgment mode is ignored
by the MDB container. When using container-managed transactions with a Required
transaction attribute, the <acknowledge-mode> is usually not specified; however, we
included it in the deployment descriptor for the sake of discussion:
<acknowledge-mode>Auto-acknowledge</acknowledge-mode>
When the MDB executes with bean-managed transactions, or with the container-
managed transaction attribute NotSupported (see Chapter 14), the value of
<acknowledge-mode> becomes important.
<message-driven-destination>
The <message-driven-destination> element designates the type of destination from
which the MDB receives messages. The allowed values for this element are
javax.jms.Queue and javax.jms.Topic. In the ReservationProcessor EJB this value is
set to javax.jms.Queue, indicating that the MDB is getting its messages via the p2p
messaging model from a queue:
<message-driven-destination>
<destination-type>javax.jms.Queue</destination-type>
</message-driven-destination>
When the MDB is deployed, the deployer will map the MDB so that it listens to a real
queue on the network.
<message-driven-destination>
<destination-type>javax.jms.Topic</destination-type>
<subscription-durability>Durable</subscription-durability>
</message-driven-destination>
The <subscription-durability> element determines whether or not the MDB's
subscription to the topic is Durable. A Durable subscription outlasts an MDB
container's connection to the JMS provider, so if the EJB server suffers a partial
failure, is shut down, or is otherwise disconnected from the JMS provider, the
messages that it would have received will not be lost. While a Durable MDB container
is disconnected from the JMS provider, it is the responsibility of the JMS provider to
store any messages the subscriber misses. When the Durable MDB container
reconnects to the JMS provider, the JMS provider sends it all the unexpired messages
that accumulated while it was down. This behavior is commonly referred to as store-
and-forward messaging. Durable MDBs are tolerant of disconnections, whether they are
intentional or the result of a partial failure.
The rest of the elements in the deployment descriptor should already be familiar. The
<ejb-ref> element provides JNDI ENC bindings for a remote EJB home object while
the <ejb-local-ref> elements provide JNDI ENC bindings for local EJB home objects.
Note that the <resource-ref> element that defined the JMS QueueConnectionFactory
used by the ReservationProcessor EJB to send ticket messages is not accompanied
by a <resource-env-ref> element. The queue to which the tickets are sent is
obtained from the JMSReplyTo header of the MapMessage itself, and not from the
JNDI ENC.
import javax.jms.Message;
import javax.jms.MapMessage;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueConnection;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.Queue;
import javax.jms.QueueSender;
import javax.jms.JMSException;
import javax.naming.InitialContext;
import java.util.Date;
import com.titan.processpayment.CreditCardDO;
QueueSession session =
    connect.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
// ... look up the reservation queue, create a QueueSender, and loop,
// sending one MapMessage per test reservation (setup code elided) ...
MapMessage message = session.createMapMessage();
message.setInt("CruiseID", 1);
message.setInt("CustomerID", i % 10);
message.setInt("CabinID", i);
message.setDouble("Price", (double) 1000 + i);
sender.send(message);
connect.close();
}
You may have noticed that the JmsClient_ReservationProducer sets the CustomerID,
CruiseID, and CabinID as primitive int values, but the ReservationProcessorBean
reads these values as java.lang.Integer types. This is not a mistake. The
MapMessage automatically converts any primitive to its proper wrapper if that
primitive is read using MapMessage.getObject(). So, for example, a named value
that is loaded into a MapMessage using setInt() can be read as an Integer using
getObject(). For example, the following code sets a value as a primitive int and then
accesses it as a java.lang.Integer object:
mapMsg.setInt("TheValue", 3);
Integer myInteger = (Integer) mapMsg.getObject("TheValue");
if (myInteger.intValue() == 3)
    // this will always be true
import javax.jms.Message;
import javax.jms.ObjectMessage;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueConnection;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.Queue;
import javax.jms.QueueReceiver;
import javax.jms.JMSException;
import javax.naming.InitialContext;
import com.titan.travelagent.TicketDO;
public static void main(String[] args) throws Exception {
    new JmsClient_TicketConsumer();
    while (true) { Thread.sleep(10000); }
}
QueueSession session =
connect.createQueueSession(false,Session.AUTO_ACKNOWLEDGE);
receiver.setMessageListener(this);
connect.start();
}
} catch(JMSException jmsE) {
jmsE.printStackTrace();
}
}
public static InitialContext getInitialContext() throws JMSException {
// create vendor-specific JNDI context here
}
}
To make the ReservationProcessor EJB work with the two client applications,
JmsClient_ReservationProducer and JmsClient_TicketConsumer, you must configure
your EJB container's JMS provider so that it has two queues: one for reservation
messages and another for ticket messages.
Finally, the no-argument ejbCreate() method is invoked by the container on the bean
instance. The MDB has only one ejbCreate() method, which takes no arguments. The
ejbCreate() method is invoked only once in the life cycle of the MDB.
MDBs are not subject to activation, so they can maintain open connections to
resources for their entire life cycles. The ejbRemove() method should close any open
resources before the MDB is evicted from memory at the end of its life cycle.
Once the instance has finished, it is immediately available to handle a new
message.
Looking ahead, the draft EJB 2.1 specification introduces several enhancements:
• Message-driven beans (MDBs): can now accept messages from sources other
than JMS.
• EJB query language (EJB-QL): many new functions are added to this
language: ORDER BY, AVG, MIN, MAX, SUM, COUNT, and MOD.
• Support for Web services: stateless session beans can be invoked over
SOAP/HTTP. Also, an EJB can easily access a Web service using the new
service reference.
• EJB timer service: a new event-based mechanism for invoking EJBs at specific
times.
• Many small changes: support for the latest versions of Java specifications,
XML schema, and message destinations.
Two things differ between a JMS-based MDB and a non-JMS one. First, as you may
know, an MDB class must implement certain interfaces: javax.ejb.MessageDrivenBean
and javax.jms.MessageListener. The latter is the interface that enables the EJB
container to subscribe the bean to the JMS server. To make your EJB listen to
another type of messaging server, the MDB must implement another interface. For
example, in order to listen to JAXM messages, the MDB must implement
javax.xml.messaging.OneWayListener or javax.xml.messaging.ReqRespListener.
As for the second difference, obviously, the configuration side of a JMS-MDB will
differ from a non-JMS one. The EJB container must know which destination or
endpoint it must connect to. The configuration of the MDB is done with a new
<messaging-type> tag, and by specifying "configuration properties" with the
<activation-config-property> tag. This tag contains arbitrary name/value pairs that
are specific to the messaging service being used. It is more versatile than being
forced to use JMS-specific tags, like <message-selector>.
The way non-JMS servers will plug into the EJB container is more or less overlooked
in the current draft version of the specification. There is a mention of using the J2EE
Connector Architecture (J2EE-CA), but no details are provided. This will probably be
left to the EJB container to
implement, just like persistence services.
EJB-QL Enhancements
When EJB-QL came out as a standard way to write queries, it was criticized for being
a reinvention of the wheel. Don't we already have query languages, like SQL and
XQuery? Plus, EJB-QL lacked several features, which made it less than useful in
some cases. EJB 2.1 is trying to remedy these problems by extending the language
to make it more SQL-like. Here are a few clauses and functions that are now added
to this language.
ORDER BY
Ordering is usually better optimized when it's done by the database than when it's
done on the client side. This new clause works by specifying which fields to sort on,
and whether data should be sorted in ascending or descending order. For example:
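A sketch (an Employee abstract schema with salary and name cmp-fields is assumed):

```sql
SELECT OBJECT(e) FROM Employee e
ORDER BY e.salary DESC, e.name ASC
```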
MOD
This numeric function returns the remainder of dividing one integer by another
(modulo). For example:
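A sketch selecting, say, every even-numbered employee (an id cmp-field is assumed):

```sql
SELECT OBJECT(e) FROM Employee e
WHERE MOD(e.id, 2) = 0
```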
AVG
AVG is an aggregate function that can be used for ejbSelect methods in the SELECT
clause. It returns the average value of a specified field. For example:
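A sketch (a salary cmp-field is assumed):

```sql
SELECT AVG(e.salary) FROM Employee e
```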
MIN
This aggregate function can be used for ejbSelect methods in the SELECT clause. It
returns the minimum value of a specified field. For example:
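A sketch (same assumed schema):

```sql
SELECT MIN(e.salary) FROM Employee e
```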
MAX
This aggregate function can be used for ejbSelect methods in the SELECT clause. It
returns the maximum value of a specified field. For example:
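A sketch (same assumed schema):

```sql
SELECT MAX(e.salary) FROM Employee e
```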
SUM
This aggregate function can be used for ejbSelect methods in the SELECT clause. It
adds up all values for a field. For example:
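A sketch (same assumed schema):

```sql
SELECT SUM(e.salary) FROM Employee e
```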
COUNT
This aggregate function counts the number of elements. It can only be used in
ejbSelect methods and in the SELECT clause. For example:
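A sketch counting all employees (the Employee schema is assumed):

```sql
SELECT COUNT(e) FROM Employee e
```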
Personally, I don't see why all of these aggregate functions cannot be used in the
WHERE clause, as well. The specification says these can be used in the SELECT
clause only. It could be useful to test each record against a condition that uses an
aggregate function. For example, getting all employees whose salary is above
average:
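Such a query is not legal EJB-QL 2.1 (aggregates are restricted to the SELECT clause, and the language has no subqueries), but if it were allowed, it might look something like:

```sql
SELECT OBJECT(e) FROM Employee e
WHERE e.salary > (SELECT AVG(emp.salary) FROM Employee emp)
```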
Web Services
One of the best features of EJB 2.1 is the support for Web services. This applies to
two different areas: accessing an EJB as if it were a Web service, and an EJB directly
accessing a Web service.
Forcing the use of HTTP is somewhat restrictive, since Web services are supposed to
accept any protocol and technology, but this follows the general trend. By spelling
out these restrictions, though, I think we will get into the same trap as JMS-only
message-driven beans. The specification will later have to change to accommodate
more standards.
Many people have pointed out the disconnection between what exists in the Web
service realm and what is available in EJBs. Here are a few examples:
• How can the client's identity be propagated to the EJB? How can one set
permissions?
• How can the client demarcate transactions?
• How can the client perform connection-based services (stateful session) or
obtain data (entity)?
While these things are not addressed in the specification, they are not addressed in
SOAP, either. There are programmatic solutions around those limitations, and it will
be up to the container providers and/or bean developers to invent these solutions.
Timer Service
EJB 2.1 also introduces a container-managed timer service that enterprise beans can
use to schedule time-based notifications. Note that this timer service is not meant
to be used in real-time systems; notifications will be sent at approximate times.
Only "stateless" objects (stateless session beans, pooled entity beans) can process
timer events. Their bean implementations must implement the
javax.ejb.TimedObject interface, which contains a single method: void
ejbTimeout(Timer). Also, the security identity during the timer call can be specified
using <run-as>; otherwise, permissions cannot be verified.
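As a minimal sketch (not taken from the specification), a bean consuming timer events might look like the following. The javax.ejb timer types are stubbed locally so the fragment compiles outside a container; a real bean would import javax.ejb.Timer and javax.ejb.TimedObject instead. CleanupBean and its timer payload are hypothetical names.

```java
// Local stand-ins for the EJB 2.1 timer types, so this sketch compiles
// without a container. A real bean would import javax.ejb.Timer and
// javax.ejb.TimedObject instead of declaring these.
interface Timer {
    java.io.Serializable getInfo(); // application data passed at timer creation
}

interface TimedObject {
    void ejbTimeout(Timer timer);   // the interface's single method
}

// A hypothetical stateless-style bean that consumes timer expirations.
class CleanupBean implements TimedObject {
    private int timeoutsHandled = 0;

    public void ejbTimeout(Timer timer) {
        // Invoked by the container when a timer created for this bean expires.
        // Remember: expiration times are approximate, not real-time.
        System.out.println("Timer fired, info = " + timer.getInfo());
        timeoutsHandled++;
    }

    public int getTimeoutsHandled() {
        return timeoutsHandled;
    }
}
```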
Lastly, it is not clear how the EJB container provider will know which bean is linked to
which timer. Timer creation methods do not have parameters to specify which bean
would consume the ejbTimeout events. Plus it would be nice to have the choice of
setting up timers declaratively in ejb-jar.xml (this would take effect at deployment
time).
Model-View-Controller
Despite its relatively recent introduction, JavaServer Pages (JSP) technology is well
on its way to becoming the preeminent Java technology for building applications that
serve dynamic Web content. Java developers love JSP for myriad reasons. Some like
the fact that it brings the "write once, run anywhere" paradigm to interactive Web
pages; others appreciate the fact that it is fairly simple to learn and lets them wield
Java as a server-side scripting language. But most concur on one thing -- the biggest
advantage of using JSP is that it helps effectively separate presentation from
content. In this article, I provide an in-depth look at how you can gain optimal
separation of presentation from content by using the JSP Model 2 architecture. This
model can also be seen as a server-side implementation of the popular Model-View-
Controller (MVC) design pattern. Please note that you should be familiar with the
basics of JSP and servlet programming before continuing on, as I do not address any
syntax issues in this article.
Differing philosophies
The early JSP specifications advocated two philosophical approaches for building
applications using JSP technology. These approaches, termed the JSP Model 1 and
Model 2 architectures, differ essentially in the location at which the bulk of the
request processing was performed. In the Model 1 architecture, shown in Figure 1,
the JSP page alone is responsible for processing the incoming request and replying
back to the client. There is still separation of presentation from content, because all
data access is performed using beans. Although the Model 1 architecture should be
perfectly suitable for simple applications, it may not be desirable for complex
implementations. Indiscriminate usage of this architecture usually leads to a
significant amount of scriptlets or Java code embedded within the JSP page,
especially if there is a significant amount of request processing to be performed.
While this may not seem to be much of a problem for Java developers, it is certainly
an issue if your JSP pages are created and maintained by designers -- which is
usually the norm on large projects. Ultimately, it may even lead to an unclear
definition of roles and responsibilities within the development team.
Session Facade
Context
Enterprise beans encapsulate business logic and business data and expose their
interfaces, and thus the complexity of the distributed services, to the client tier.
Problem
In a multitiered Java 2 Platform, Enterprise Edition (J2EE) application environment,
the following problems arise:
Application clients need access to business objects to fulfill their responsibilities and
to meet user requirements. Clients can directly interact with these business objects
because they expose their interfaces. When you expose business objects to the
client, the client must understand and be responsible for the business data object
relationships, and must be able to handle business process flow.
However, direct interaction between the client and the business objects leads to tight
coupling between the two, and such tight coupling makes the client directly
dependent on the implementation of the business objects. Direct dependence means
that the client must represent and implement the complex interactions regarding
business object lookups and creations, and must manage the relationships between
the participating business objects as well as understand the responsibility of
transaction demarcation.
Tight coupling between objects also results when objects manage their relationship
within themselves. Often, it is not clear where the relationship is managed. This
leads to complex relationships between business objects and rigidity in the
application. Such lack of flexibility makes the application less manageable when
changes are required.
When accessing the enterprise beans, clients interact with remote objects. Network
performance problems may result if the client directly interacts with all the
participating business objects. When invoking enterprise beans, every client
invocation is potentially a remote method call. Each access to the business object is
relatively fine-grained. As the number of participants increases in a scenario, the
number of such remote method calls increases. As the number of remote method
calls increases, the chattiness between the client and the server-side business
objects increases. This may result in network performance degradation for the
application, because the high volume of remote method calls increases the amount
of interaction across the network layer.
A problem also arises when a client interacts directly with the business objects. Since
the business objects are directly exposed to the clients, there is no unified strategy
for accessing the business objects. Without such a uniform client access strategy, each
client accesses the business objects in its own way, which makes consistent usage
difficult to enforce.
Forces
• Provide a simpler interface to the clients by hiding all the complex
interactions between business components.
• Reduce the number of business objects that are exposed to the client across
the service layer over the network.
• Hide from the client the underlying interactions and interdependencies
between business components. This provides better manageability,
centralization of interactions (responsibility), greater flexibility, and greater
ability to cope with changes.
• Provide a uniform coarse-grained service layer to separate business object
implementation from business service abstraction.
• Avoid exposing the underlying business objects directly to the client, to keep
tight coupling between the two tiers to a minimum.
Solution
Use a session bean as a facade to encapsulate the complexity of interactions
between the business objects participating in a workflow. The Session Facade
manages the business objects, and provides a uniform coarse-grained service access
layer to clients.
The Session Facade abstracts the underlying business object interactions and
provides a service layer that exposes only the required interfaces. Thus, it hides from
the client's view the complex interactions between the participants. The Session
Facade manages the interactions between the business data and business service
objects that participate in the workflow, and it encapsulates the business logic
associated with the requirements. Thus, the session bean (representing the Session
Facade) manages the relationships between business objects. The session bean also
manages the life cycle of these participants by creating, locating (looking up),
modifying, and deleting them as required by the workflow. In a complex application,
the Session Facade may delegate this life-cycle management to a separate object. For
example, to manage the life cycle of participant session and entity beans, the Session
Facade may delegate that work to a Service Locator object.
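As an illustration only, the interaction can be sketched in plain Java. Account stands in for a fine-grained business object (an entity bean reached through a remote interface, in practice) and TransferFacade for the session bean acting as the facade; all names are hypothetical.

```java
// Stand-in for a fine-grained business object; in a real system this
// would be an entity bean accessed through a remote interface.
class Account {
    private double balance;
    Account(double openingBalance) { balance = openingBalance; }
    double getBalance()            { return balance; }
    void debit(double amount)      { balance -= amount; }
    void credit(double amount)     { balance += amount; }
}

// The Session Facade: the client makes one coarse-grained call instead
// of several fine-grained (potentially remote) calls, and the facade
// owns the workflow and the transaction demarcation.
class TransferFacade {
    void transfer(Account from, Account to, double amount) {
        if (from.getBalance() < amount) {
            throw new IllegalArgumentException("insufficient funds");
        }
        // In a real session bean, these two steps would run inside one
        // container-managed transaction.
        from.debit(amount);
        to.credit(amount);
    }
}
```

The client now makes a single transfer call rather than three fine-grained calls to look up balances, debit, and credit.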
Other Patterns:
The term J2EE is tossed around a lot because it is a generic term that covers many
areas of enterprise and distributed development. The J2EE modules and environment
continue to grow at a rapid pace, and as many of you have come to learn, J2EE
development has had its trials and tribulations in recent years.
For exactly this reason, it is important to take advantage of the most efficient and
effective strategies for new development or the refactoring of existing projects. To
keep up with the new developments, it is imperative that you aren't wasting time
maintaining designs that have poor architecture or code that was poorly written.
This article covers how to use and identify design patterns, specifically for the
presentation tier, in J2EE applications. The interest in design patterns has been
around for a number of years in the software industry. However, interest among
mainstream software developers is a fairly recent development -- and one that's long
overdue, in my opinion. There are a number of reasons for this: it takes a highly
experienced engineer to recognize a pattern; it requires collaboration; and it requires
ongoing refinements. It also requires a sense of fluidity. Design patterns are not
absolutes -- they are more expressions of proven solutions. It is up to you, the
engineer or architect, to apply a pattern appropriately to your given scenario. This, of
course, is easier said than done.
Patterns are not a magic pill. Just because a problem is observed and a pattern
applied does not mean that you will have a perfect application -- or solution, for that
matter. Patterns are a way of bringing clarity to a system architecture and they allow
for the possibility of better systems being built. Building a system that meets the
intended business requirements, performs well, is maintainable, and is delivered on
time, is what keeps us engineers in business. Patterns have the distinct advantage of
helping us do it all quicker.
It is a rare instance (although not an impossible one), that a design pattern is used
in an isolated fashion. Typically, patterns have relationships and work together to
form a weave, in that a pattern can be composed of, or rely on, other patterns. That
is why you will see that the more familiar you are with different patterns, the better
equipped you are to determine their interactions. Patterns can also form frameworks
that can then be used for implementations.
Enterprise systems are built crossing many tiers; this should not be news to anyone
reading this article. This discussion is focused on the presentation tier, and primarily
covers patterns that can be used with JSP and Servlet technology.
While these pattern names might not mean anything to you now, by the time
you finish this article, you will understand how using patterns to describe solutions
will give you a clear understanding of the problem being discussed, just by the use of
the pattern name. Think of the time that will be saved in meetings by just saying, "I
think the View Helper can be applied here," instead of droning on about a complete
problem description that we've all faced many times.
Helpful Hints
A couple of words of advice if you are just starting on your pattern odyssey. If you
haven't boned up on UML yet, the time is now. UML is quite commonly used to
describe patterns in pattern catalogs, including class diagrams, sequence or
interaction diagrams, and stereotypes. I'm not going to go into UML in this article,
but I highly recommend getting up to speed on it.
If you plan on using the View Helper and Composite View patterns, you might want to
define the naming conventions to be [action]Helper.java and [action].jsp, respectively;
for instance, CreatePageHelper.java or CreatePage.jsp.
Make a list of requirement statements that you will be addressing and then try to
identify relevant patterns (once you are familiar with them) that might be applicable.
By doing this, you will be amazed at how quickly you will start to recognize
appropriate solutions to problems.
Intercepting Filter
The Intercepting Filter intercepts incoming requests and outgoing responses, and
applies a filter. Filters may be added and removed in a declarative manner, allowing
them to be applied in a variety of combinations. After pre- or post-processing is
finished, the final filter in the group passes control to the original target object,
typically a Front Controller.
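A minimal plain-Java sketch of the idea follows. The servlet API is deliberately left out so the fragment stands alone; in a real container the filters would implement javax.servlet.Filter and be declared through web.xml filter mappings. All class names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal request object; in a servlet container this would be the
// HttpServletRequest passing through the filter chain.
class Request {
    final String user;
    final List<String> trace = new ArrayList<String>();
    Request(String user) { this.user = user; }
}

interface Filter {
    void execute(Request request);
}

// Filters live in a list, so they can be added and removed without
// touching the target -- the in-code analogue of declaring
// <filter-mapping> entries in web.xml.
class FilterManager {
    private final List<Filter> filters = new ArrayList<Filter>();
    private final Filter target;        // typically a Front Controller

    FilterManager(Filter target) { this.target = target; }
    void addFilter(Filter f)     { filters.add(f); }

    void process(Request request) {
        for (Filter f : filters) {
            f.execute(request);         // pre-processing steps, in order
        }
        target.execute(request);        // finally, the original target
    }
}
```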
Front Controller
A Front Controller is a container that holds common processing logic that occurs
within the presentation tier and that may otherwise be misplaced in a View. A
controller handles requests and manages content retrieval, security, view
management, navigation, and delegation to a Dispatcher component, which further
dispatches to a View.
View Helper
View Helper encourages the separation of formatting-related code from other
business logic. It suggests using Helper components to encapsulate logic relating to
initiating content retrieval and validation, as well as adapting and formatting the
model. The View component is then left to encapsulate the presentation formatting.
Helper components typically delegate to Business Services (a business-tier pattern
which we won't be discussing further).
Composite View
The Composite View composes a View from numerous pieces. Multiple smaller views,
which could be either static or dynamic, are pieced together to create a single
template.
Dispatcher View
The Dispatcher View pattern defers business processing until the time of View
processing. This pattern also suggests that the Dispatcher plays a more limited role
in the View management, as the choice of the View is typically already included in
the request.
Sample Scenario
Our scenario is a system that creates presentation content that requires processing
of dynamic business data. This is a fairly common scenario to anyone doing J2EE
development. The problem is that it is not uncommon for changes to occur in the
presentation tier during the course of development. When business data logic and
presentation-formatting logic are mingled together, the system becomes less flexible,
harder to maintain, and provides a poor separation of tiers. Most of us -- even me --
are guilty at one point or another of coding Java into our JSPs. We want to avoid this
situation.
Enter the View Helper pattern. The solution is to enforce this pattern so that the View
contains formatting code, delegating its processing responsibilities to its helper
classes. These classes might be implemented in a number of ways, including
JavaBeans or custom tags. Helpers also store the View's intermediate data model
and serve as business data adapters. It is important not to confuse a solution with its
implementation strategy. For example, it's possible to implement this type of solution
using a JSP View strategy, which uses a JSP as the View component. While this is a
common strategy, it's also possible to take a Servlet View strategy, which uses a
Servlet as a view instead. While we all know that a JSP actually becomes a Servlet,
the strategy chosen becomes a matter of preference among the teams involved, as
well as of the requirements of your project.
We have identified that we will use the View Helper pattern in our project. The class
diagram for this pattern is shown in Figure 1.
Figure 1.
The class diagram tells us what components we will need to create in order to realize
this pattern. Remember the naming conventions we spoke about earlier! (As a side
note, if you are working with Rational Rose, there are a number of patterns included
in v2002 that allow for the actual classes that need to be realized for a pattern to be
generated for you. Depending on the pattern you select, the appropriate classes are
created. It's quite convenient.) By using a sequence diagram that represents the
View Helper pattern, we are quickly able to see the logic flow.
Figure 2.
The View represents and displays information to the client. The dynamic data
required for the display is retrieved from a model. The helpers are used to support the View
by encapsulating and adapting the model for displaying. A Helper is called a
ValueBean if it is storing intermediate data from the model needed by the View. How
the helper is implemented doesn't really matter; it could be a JavaBean or a custom
tag, as we previously discussed. It could also be an XSL transformer, if XSL is being
used for converting the model into the appropriate output for a specific client device.
The Business Service is the service the client is trying to access -- the Business
Service would typically branch off into another pattern specific to handling control
and protection of the business service.
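A hedged sketch of such a helper, using hypothetical Employee and EmployeeHelper classes; the View (a JSP, say) would read only the getter results, leaving all adaptation in the helper.

```java
import java.util.Locale;

// A hypothetical model object handed over from the business tier.
class Employee {
    final String name;
    final double salary;
    Employee(String name, double salary) { this.name = name; this.salary = salary; }
}

// The View Helper: adapts and formats the model so the View contains
// only presentation markup. Because it holds the intermediate data
// needed by the View, it also plays the ValueBean role.
class EmployeeHelper {
    private final Employee model;
    EmployeeHelper(Employee model) { this.model = model; }

    public String getDisplayName() {
        return model.name.toUpperCase(Locale.US);
    }

    public String getFormattedSalary() {
        // Locale pinned so the output is deterministic.
        return String.format(Locale.US, "$%,.2f", model.salary);
    }
}
```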
By using this pattern, we improve the tier partitioning and maintainability of our
application by using helpers, as well as providing JavaBeans, custom tags, or XSL
files that could very well be reused on other projects. The View Helper is also a good
example of when a pattern is commonly used in conjunction with other patterns.
Take note: this is just the beginning. While this simple sequence diagram is a starting
point, you will have to learn how to adapt your system modelling to your own
development. Transforming patterns and strategies into an implementation is non-
trivial. The more familiar you become with patterns, their strategies, and their
implementation, the quicker you will be able to determine what you need for a
specific project.