
Sun Certified Enterprise Architect for J2EE 5 (CX-310-052)

[Step: 1 of 3]

0. Introduction (7 pages)
1. Application Design Concepts and Principles (19 pages)
2. Common Architectures (28 pages)
3. Integration and Messaging (17 pages)
4. Business Tier Technologies (32 pages)
5. Web Tier Technologies (15 pages)
6. Applicability of Java EE Technology (19 pages)
7. Patterns (113 pages)
8. Security (30 pages)
9. Bibliography (1 page)

Note: This document is derived from published material by Mikalai Zaikin, last updated on 16-1-2010
from the cited references and from the web by Adel Almoshaigah (v-adel.al-moshaigah@riyadbank.com).
0. Introduction (7 pages)

Java Certification

- Sun Certified Java Associate (SCJA)


- Sun Certified Java Programmer (SCJP)
- Sun Certified Java Developer (SCJD)
- Sun Certified Web Component Developer (SCWCD)
- Sun Certified Business Component Developer (SCBCD)
- Sun Certified Developer For Java Web Services (SCDJWS)
- Sun Certified Mobile Application Developer (SCMAD)
- Sun Certified Enterprise Architect (SCEA)

Architect Exam
To achieve this certification, candidates must successfully complete three elements:
1) a knowledge-based multiple-choice exam,
2) an assignment, and
3) an essay exam.
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5 (Step 1 of 3) (CX-310-052)
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5: Assignment (Step 2 of 3) (CX-310-301A)
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5: Essay (Step 3 of 3) (CX-310-062)
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5: Assignment Resubmission (CX-310-301R)
- Upgrade Exam: Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5 (CX-310-053)

Section 0. Introduction (7 pages)


Sun Certified Enterprise Architect for the Java Platform,
Enterprise Edition 5 (Step 1 of 3) (CX-310-052)

Delivered at: Authorized Worldwide Prometric Testing Centers (register online)
Prerequisites: None
Other exams/assignments required for this certification: Step 2 (CX-310-301A) & Step 3 (CX-310-062)
Exam type: Multiple choice & drag-and-drop
Number of questions: 64
Pass score: 57% (i.e. 37 out of 64 questions)
Time limit: 120 minutes (i.e. about 1 min 52 sec per question)

Software Life cycle

To define a software product, you will have the following two types of requirements:
1) Business requirements
2) Quality-of-service requirements (a.k.a. capabilities, constraints, or the nonfunctional and observable system qualities)

In the requirements phase, what the product is to do is defined.

In the design phase, how the product is to be built is specified in terms of high-level
system architecture (components and the interfaces among these components) and the low-level
design of each component, spelled out in algorithms and data structures regardless of the
programming language used. The key difference between the terms architecture and
design is in the level of detail. Architecture operates at a high level of
abstraction with less detail. Design operates at a low level of abstraction,
obviously with more of an eye to the details of implementation.
In the implementation phase, the coding is done.
In the verification phase, various kinds of testing, such as requirement, interface, unit, and regression
(sanity) tests, are performed.
In the maintenance phase, the product is deployed into operation and maintained until
its retirement.



Architect & software architecture
As an architect, it is your job to work with the stakeholders of the system during the inception
and elaboration phases to define the quality-of-service measurements (i.e., how to make
them testable). An architect makes choices and trade-offs, focusing on the service-level
requirements and the cost to attain those requirements. The software architecture one
creates must address the following service-level requirements:

1) Performance,
2) Scalability,
3) Reliability,
4) Availability,
5) Extensibility,
6) Maintainability,
7) Manageability, and
8) Security.
An architect has to make trade-offs among these requirements. For example, if the most
important service-level requirement is the performance of the system, you may have to
sacrifice some maintainability and extensibility of the system to ensure you meet the
performance quality of service.

Performance

The performance requirement is usually measured in terms of response time for a given
screen transaction per user. In addition to response time, performance can be measured in
transaction throughput, which is the number of transactions in a given time period,
usually one second. For example, you could have a performance measurement of
no more than 3 seconds for each screen form, or a transaction throughput of one
hundred transactions per second. Regardless of the measurement, you need to create an
architecture that allows the designers and developers to complete the system without
considering the performance measurement.
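The throughput half of this measurement can be sketched in plain Java. This is an illustrative harness only; the `transaction()` body and the window length are hypothetical placeholders, not anything prescribed by the exam:

```java
public class ThroughputCheck {

    // Hypothetical stand-in for a real screen transaction.
    public static void transaction() {
        Math.sqrt(12345.0);
    }

    // Count how many transactions complete within the given window,
    // i.e. the transaction throughput for that period.
    public static long measure(long windowMillis) {
        long deadline = System.nanoTime() + windowMillis * 1_000_000L;
        long completed = 0;
        while (System.nanoTime() < deadline) {
            transaction();
            completed++;
        }
        return completed;
    }

    public static void main(String[] args) {
        System.out.println("Transactions in 100 ms: " + measure(100));
    }
}
```

Comparing the measured count against the stated target (e.g. one hundred transactions per second) is what turns the quality-of-service requirement into something testable.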

Scalability



Scalability is the ability to support the required quality of service as the system load
increases without changing the system. A system can be considered scalable if, as the load
increases, the system still responds within acceptable limits. It might be that you have a
performance measurement of a response time between 2 and 5 seconds. If the system
load increases and the system can maintain the performance quality of service of less than
5 seconds response time, then your system is scalable. To understand scalability, you
must understand the capacity of a system, which is defined as the maximum number of
processes or users a system can handle and still maintain the quality of service. If a
system is running at its capacity and can no longer respond within an acceptable time frame,
then it has reached its maximum scalability. To scale a system that has met its capacity, you
must add additional hardware. This additional hardware can be added vertically or
horizontally. Vertical scaling involves adding additional processors, memory, or disks
to the current environment, thus increasing the overall system capacity. Horizontal
scaling involves adding more machines to the environment, thus increasing the overall
system capacity. The architecture you create must be able to handle the vertical or
horizontal scaling of the hardware. Vertical scaling of a software architecture is easier
than horizontal scaling. Why? Adding more processors or memory typically does not
have an impact on your architecture, but having your architecture run on multiple machines
and still appear to be one system is more difficult.

Reliability
Reliability is the ability to ensure that the product is trustworthy and dependable for all of
its transactions. The system load, for example, should not have any effect on the
correctness of the system's transactions for a system to be reliable.

Availability
Availability is the ability to ensure the system is always accessible. The degree to which a
system is accessible can be described as, for example, 24×7 for total availability. This aspect of a
system is often coupled with performance. The availability of a system is improved by
setting up an environment of redundant components and failover.

Extensibility
Extensibility is the ability to add or modify functionality without impacting
existing system functionality. Extensibility can only be evaluated when there is an actual
need to modify or add new functionality. To ensure extensibility, the product
design needs to have low coupling and encapsulation.

Maintainability

Maintainability is the ability to correct flaws in the existing functionality without
impacting other components of the system. Characteristics of the product that ensure
maintainability are low coupling, modularity, and documentation.

Manageability
Manageability is the ability to ensure that the system has continued health with respect to
the other quality-of-service requirements, such as performance, scalability, reliability,
availability, and security. Manageability deals with system monitoring of the QoS
requirements and the ability to change the system configuration to improve the QoS
dynamically without changing the system.



Security
Security is the ability to ensure the system cannot be compromised. Creating an
architecture that is separated into functional components makes it easier to secure the
system because you can build security zones around the components. If a component is
compromised, it is easier to contain the security violation to that component.
A goal of information security is to protect resources and assets from loss.
Resources may include information, services, and equipment such as servers and
networking components. Each resource has several assets that require protection:
■ Privacy Preventing information disclosure to unauthorized persons
■ Integrity Preventing corruption or modification of resources
■ Authenticity Proof that a person has been correctly identified or that a message is
received as transmitted
■ Availability Assurance that information, services, and equipment are working and
available for use

The classes of threats include accidental threats, intentional threats, passive threats
(those that do not change the state of the system but may include loss of confidentiality
but not of integrity or availability), and active threats (those that change the state of the
system, including changes to data and to software).
A security policy is an enterprise’s statement defining the rules that regulate how it will
provide security, handle intrusions, and recover from damage caused by security
breaches. Based on a risk analysis and cost considerations, such policies are most
effective when users understand them and agree to abide by them.
Security services are provided by a system for implementing the security policy of an
organization. A standard set of such services includes the following:
■ Identification and authentication Unique identification and verification of users via
certification servers and global authentication services (single sign-on services)
■ Access control and authorization Rights and permissions that control how users can
access resources
■ Accountability and auditing Services for logging activities on network systems and
linking them to specific user accounts or sources of attacks
■ Data confidentiality Services to prevent unauthorized data disclosure
■ Data integrity and recovery Methods for protecting resources against corruption and
unauthorized modification—for example, mechanisms using checksums and encryption
technologies
■ Data exchange Services that secure data transmissions over communication channels
■ Object reuse Services that provide multiple users secure access to individual resources
■ Non-repudiation of origin and delivery Services to protect against attempts by the
sender to falsely deny sending the data, or subsequent attempts by the recipient to falsely
deny receiving the data
■ Reliability Methods for ensuring that systems and resources are available and protected
against failure
What is a software architecture?
Architecture is the fundamental organization of a system embodied in its components,
their relationships to each other and to the environment, and the principles guiding its
design and evolution. [IEEE 1471]



The software architecture of a system or a collection of systems consists of all the
important design decisions about the software structures and the interactions between
those structures that comprise the systems. The design decisions support a desired set
of qualities that the system should support to be successful. The design decisions
provide a conceptual basis for system development, support, and maintenance.
[McGovern]

An architecture defines structure

An architecture defines behavior


As well as defining structural elements, an architecture defines the interactions between
these structural elements. It is these interactions that provide the desired system behavior.
An architecture focuses on significant elements
While an architecture defines structure and behavior, it is not concerned with defining all
of the structure and all of the behavior. It is only concerned with those elements that are
deemed to be significant. Significant elements are those that have a long and lasting
effect, such as the major structural elements, those elements associated with essential
behavior, and those elements that address significant qualities such as reliability and
scalability. In general, the architecture is not concerned with the fine-grained details of
these elements. Architectural significance can also be phrased as economical
significance, since the primary driver for considering certain elements over others is the
cost of creation and cost of change.
Since an architecture focuses on significant elements only, it provides us with a particular
perspective of the system under consideration -- the perspective that is most relevant to
the architect. In this sense, an architecture is an abstraction of the system that helps an
architect manage complexity.
It is also worth noting that the set of significant elements is not static and may change
over time. As a consequence of requirements being refined, risks identified, executable
software built, and lessons learned, the set of significant elements may change. However,
the relative stability of the architecture in the face of change is, to some extent, the sign
of a good architecture, the sign of a well-executed architecting process, and the sign of a
good architect. If the architecture needs to be continually revised due to relatively minor
changes, then this is not a good sign. However, if the architecture is relatively stable, then
the converse is true.

An architecture embodies decisions based on rationale
An important aspect of an architecture is not just the end result, the architecture itself, but
the rationale for why it is the way it is. Thus, an important consideration is to ensure that
you document the decisions that have led to this architecture and the rationale for those
decisions.
An architecture balances stakeholder needs
An architecture is created to ultimately address a set of stakeholder needs. However, it is
often not possible to meet all of the needs expressed. For example, a stakeholder may ask
for some functionality within a specified timeframe, but these two needs (functionality
and timeframe) may be mutually exclusive. Either the scope can be reduced in order to meet
the schedule or all of the functionality can be provided within an extended timeframe.



Similarly, different stakeholders may express conflicting needs and, again, an appropriate
balance must be achieved. Making tradeoffs is therefore an essential aspect of the
architecting process, and negotiation, an essential characteristic of the architect.
Just to give you an idea of the task at hand, consider the following needs of a set of
stakeholders:
• The end user is concerned with intuitive and correct behavior, performance,
reliability, usability, availability, and security.
• The system administrator is concerned with intuitive behavior, administration,
and tools to aid monitoring.
• The marketer is concerned with competitive features, time to market, positioning
with other products, and cost.
• The customer is concerned with cost, stability, and schedule.
• The developer is concerned with clear requirements, and a simple and consistent
design approach.
• The project manager is concerned with predictability in the tracking of the
project, schedule, productive use of resources, and budget.
• The maintainer is concerned with a comprehensible, consistent, and documented
design approach, and the ease with which modifications can be made.
As we can see from this list, another challenge for the architect is that the stakeholders
are not only concerned that the system provides the required functionality. Many of the
concerns listed are nonfunctional in nature in that they do not contribute to the
functionality of the system (e.g., the concerns regarding costs and scheduling). Such
concerns nevertheless represent system qualities or constraints.
Nonfunctional requirements are quite often the most significant
requirements as far as an architect is concerned.




1. Application Design Concepts and Principles (19 pages)

1. Explain the main advantages of an object-oriented approach to system design, including the effect of encapsulation, inheritance, and use of interfaces on architectural characteristics.

2. Describe how the principle of "separation of concerns" has been applied to the main system tiers of a Java Platform, Enterprise Edition application. Tiers include client (both GUI and web), web (web container), business (EJB container), integration, and resource tiers.

3. Describe how the principle of "separation of concerns" has been applied to the layers of a Java EE application. Layers include application, virtual platform (component APIs), application infrastructure (containers), enterprise services (operating system and virtualization), compute and storage, and the networking infrastructure layers.

1.1 Explain the main advantages of an object-oriented approach to system design, including the effect of encapsulation, inheritance, and use of interfaces on architectural characteristics.

Ref:
• [JAVA_DESIGN] Chapter 1.
• Three Sources of a Solid Object-Oriented Design by Gene Shadrin
• Object Oriented Basic Concepts and Advantages
• Advantages of an Object-Oriented Approach (for new programmers)
• [SCEA-051]

The most basic OO principles include encapsulation, inheritance, and polymorphism.
Along with abstraction, association, aggregation, and composition, they form the
foundation of the OO approach. These basic principles rest on a concept of objects that
depicts real-world entities such as, say, books, customers, invoices, or birds.

Encapsulation encloses data and behavior in a programming module. Encapsulation is
represented by two closely related principles: information hiding and implementation hiding.
Information hiding restricts access to the object data using a clearly defined "interface"
that hides the internal details of the object's structure. For each restricted class variable,
this interface appears as a pair of "get" and "set" methods that define read-and-write
access to the variable. Implementation hiding defines the access restrictions to the
object methods, also using a clearly defined interface that hides the internal details of
object implementation and exposes only the methods that comprise object behavior.
Both information and implementation hiding serve the main goal: assuring the highest
level of decoupling between classes.
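A minimal Java sketch of both kinds of hiding; the Account class and its invariant are invented purely for illustration:

```java
public class Account {

    // Information hiding: the balance is not accessible directly.
    private long balanceCents;

    // The public interface defines read access to the restricted variable.
    public long getBalanceCents() {
        return balanceCents;
    }

    // Implementation hiding: callers see only the behavior "deposit";
    // the validation and update details stay inside the class.
    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balanceCents += cents;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(500);
        System.out.println(a.getBalanceCents());
    }
}
```

Because callers can only go through `deposit`, the invariant (no non-positive deposits) is enforced in one place, which is exactly the decoupling the paragraph describes.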

Inheritance is a relationship that defines one entity in terms of another. It designates the
ability to create new classes (types) that contain all the methods and properties of
another class plus additional methods and properties. Inheritance combines interface
inheritance and implementation inheritance. In this case, interface inheritance describes
a new interface in terms of one or more existing interfaces, while implementation
inheritance defines a new implementation in terms of one or more existing
implementations. Both interface inheritance and implementation inheritance are used to
extend the behavior of a base entity.
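Both flavors can be shown in a few lines of Java; the shape names below are illustrative only:

```java
// Interface inheritance: ResizableShape is described in terms of Shape.
interface Shape {
    double area();
}

interface ResizableShape extends Shape {
    void scale(double factor);
}

// Implementation inheritance: Circle is defined in terms of an existing
// implementation (BaseShape) and adds its own state and behavior.
class BaseShape {
    public String describe() {
        return "a shape";
    }
}

public class Circle extends BaseShape implements ResizableShape {
    private double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    public double area() {
        return Math.PI * radius * radius;
    }

    public void scale(double factor) {
        radius *= factor;
    }

    public static void main(String[] args) {
        Circle c = new Circle(1.0);
        c.scale(2.0);
        System.out.println(c.describe() + " with area " + c.area());
    }
}
```

Circle extends the behavior of both base entities: it inherits `describe()` from the base implementation and fulfills the contracts of both interfaces.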

The object world has a variety of relationships, such as aggregation, composition,
association, and inheritance.

Polymorphism is the ability of different objects to respond differently to the same
message. Polymorphism lets a client make the same request of different objects and
assume that the operation will be appropriate to each class of object. There are two
kinds of polymorphism: inheritance polymorphism, which works on an inheritance chain,
and operational polymorphism, which specifies similar operations for non-related, out-of-
inheritance classes or interfaces. Because inheritance polymorphism lets a subclass
(subtype) override the operation that it inherits from its superclass (supertype), it creates
a way to diversify the behavior of inherited objects in an inheritance chain while keeping
their parent objects intact. Polymorphism is closely related to inheritance as well as to
encapsulation.
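A short Java sketch of both kinds; the class names are made up for the example:

```java
// Operational polymorphism: Dog and Robot are unrelated classes that
// respond to the same speak() message via a shared interface.
interface Speaker {
    String speak();
}

class Dog implements Speaker {
    public String speak() { return "woof"; }
}

class Robot implements Speaker {
    public String speak() { return "beep"; }
}

// Inheritance polymorphism: Puppy overrides the operation it inherits,
// diversifying behavior on the chain while leaving Dog itself intact.
class Puppy extends Dog {
    @Override
    public String speak() { return "yip"; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Speaker[] speakers = { new Dog(), new Robot(), new Puppy() };
        for (Speaker s : speakers) {
            // Same message, behavior appropriate to each class of object.
            System.out.println(s.speak());
        }
    }
}
```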

The Single Responsibility Principle specifies that a class should have only one reason to
change. It is also known as the cohesion principle and dictates that a class should have
only one responsibility, i.e., it should avoid combining responsibilities that change
for different reasons.
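As a hypothetical Java illustration, formatting a report and storing it change for different reasons, so they live in separate classes:

```java
// Responsibility 1: how a report is presented.
class ReportFormatter {
    public String format(String data) {
        return "<report>" + data + "</report>";
    }
}

// Responsibility 2: how a report is kept. Changing the storage scheme
// no longer forces a change to the formatting class, and vice versa.
class ReportStore {
    private final java.util.List<String> saved = new java.util.ArrayList<>();

    public void save(String formatted) {
        saved.add(formatted);
    }

    public int count() {
        return saved.size();
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        ReportStore store = new ReportStore();
        store.save(new ReportFormatter().format("sales"));
        System.out.println(store.count());
    }
}
```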

The Open/Closed Principle dictates that software entities should be open to extension
but closed to modification. Modules should be written so that they can be extended
without being modified. In other words, developers should be able to change what the
modules do without changing the modules' source code.
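A hypothetical discount calculator in Java: new pricing rules extend the module without touching its source.

```java
// The abstraction the calculator is closed against modification of.
interface DiscountRule {
    double apply(double price);
}

class NoDiscount implements DiscountRule {
    public double apply(double price) { return price; }
}

// Extension point in action: a new rule is a new class;
// PriceCalculator itself is never edited.
class TenPercentOff implements DiscountRule {
    public double apply(double price) { return price * 0.9; }
}

public class PriceCalculator {
    public static double total(double price, DiscountRule rule) {
        return rule.apply(price);
    }

    public static void main(String[] args) {
        System.out.println(total(100.0, new TenPercentOff()));
    }
}
```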

The Liskov Substitution Principle says that subclasses should be able to substitute for
their base classes, meaning that clients that use references to base classes must be
able to use the objects of derived classes without knowing them. This principle is
essentially a generalization of a "design by contract" approach that specifies that a
polymorphic method of a subclass can only replace its pre-condition by a weaker one
and its post-condition by a stronger one.
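The classic counter-example, sketched with invented Rectangle/Square classes: a mutable Square violates the post-conditions that Rectangle's setters promise (each setter changes only its own dimension), so it cannot substitute for its base class.

```java
class Rectangle {
    protected int width, height;

    public void setWidth(int w)  { this.width = w; }
    public void setHeight(int h) { this.height = h; }
    public int area() { return width * height; }
}

// Square preserves its own invariant by changing both dimensions in each
// setter, which strengthens the effects and breaks Rectangle's contract.
class Square extends Rectangle {
    @Override public void setWidth(int w)  { this.width = w; this.height = w; }
    @Override public void setHeight(int h) { this.width = h; this.height = h; }
}

public class LspDemo {
    // A client written against Rectangle's contract: setters are independent.
    public static int clientArea(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();
    }

    public static void main(String[] args) {
        System.out.println(clientArea(new Rectangle())); // 20, as the client expects
        System.out.println(clientArea(new Square()));    // 16: substitution broke the client
    }
}
```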

The Dependency Inversion Principle says that high-level modules shouldn't depend on low-level
modules; both should depend on abstractions. In other words, abstractions shouldn't depend
on details; details should depend on abstractions.
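A small Java sketch with invented names: the high-level OrderService depends only on the Notifier abstraction, while the concrete sender is a detail that also depends on that abstraction.

```java
// The abstraction that both levels depend on.
interface Notifier {
    String send(String message);
}

// Low-level detail: depends on (implements) the abstraction.
class ConsoleNotifier implements Notifier {
    public String send(String message) {
        System.out.println(message);
        return message;
    }
}

// High-level policy: knows nothing about the concrete notifier.
public class OrderService {
    private final Notifier notifier;

    public OrderService(Notifier notifier) {
        this.notifier = notifier; // the detail is injected
    }

    public String placeOrder(String item) {
        return notifier.send("ordered " + item);
    }

    public static void main(String[] args) {
        new OrderService(new ConsoleNotifier()).placeOrder("book");
    }
}
```

Swapping ConsoleNotifier for any other Notifier implementation requires no change to OrderService, which is the point of inverting the dependency.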

The Interface Segregation Principle says that clients shouldn't depend on the methods
they don't use. It means multiple client-specific interfaces are better than one general-
purpose interface.
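An illustrative Java split of a fat device interface: a plain printer is not forced to depend on scanning methods it never uses.

```java
// Client-specific interfaces instead of one general-purpose "Device".
interface Printing {
    String print(String doc);
}

interface Scanning {
    String scan();
}

// Depends only on the methods it actually provides.
class SimplePrinter implements Printing {
    public String print(String doc) { return "printed:" + doc; }
}

// A richer device opts into both roles.
class MultiFunctionDevice implements Printing, Scanning {
    public String print(String doc) { return "printed:" + doc; }
    public String scan() { return "scan-data"; }
}

public class IspDemo {
    public static void main(String[] args) {
        System.out.println(new SimplePrinter().print("invoice"));
        System.out.println(new MultiFunctionDevice().scan());
    }
}
```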

OOD, like other design methods, has advantages and disadvantages.

Some advantages are:
 By focusing on objects, composed of state and behavior, it produces highly
modular designs with all the resulting benefits.
 In some problem domains, it allows a close relationship between the domain
entities and their behavior and the classes/instances in the design and code.
 It facilitates re-use via composition or inheritance.
 When combined with an object-oriented language, such as Java, design and
implementation can be tightly integrated.
 Easier maintenance.
 Objects may be understood as stand-alone entities.
 Objects are potentially reusable components.



 Faster Development: OOD has long been touted as leading to faster development.
Many of the claims of potentially reduced development time are correct in
principle, if a bit overstated.
 Reuse of Previous work: This is the benefit cited most commonly in literature,
particularly in business periodicals. OOD produces software modules that can be
plugged into one another, which allows creation of new programs. However, such
reuse does not come easily. It takes planning and investment.
 Increased Quality: Increases in quality are largely a by-product of this program
reuse. If 90% of a new application consists of proven, existing components, then
only the remaining 10% of the code has to be tested from scratch. That
observation implies an order-of-magnitude reduction in defects.
 Modular Architecture: Object-oriented systems have a natural structure for
modular design: objects, subsystems, framework, and so on. Thus, OOD systems
are easier to modify. OOD systems can be altered in fundamental ways without
ever breaking up since changes are neatly encapsulated. However, nothing in
OOD guarantees or requires that the code produced will be modular. The same
level of care in design and implementation is required to produce a modular
structure in OOD, as it is for any form of software development.
 Better Mapping to the Problem Domain: This is a clear winner for OOD,
particularly when the project maps to the real world. Whether objects represent
customers, machinery, banks, sensors, or pieces of paper, they can provide a clean,
self-contained implementation which fits naturally into human thought processes.
One of the most important characteristics of OOP (Object Oriented Programming) is the
data encapsulation concept, which means that there is a very close attachment between
data items and procedures. The procedures (methods) are responsible for manipulating
the data in order to reflect its behavior. The public interface, formed by the collections of
messages understood by an object, completely defines how to use this object. Programs
that want to manipulate an object, only have to be concerned about which messages this
object understands, and do not have to worry about how these tasks are achieved nor the
internal structure of the object. The hiding of internal details makes an object abstract,
and the technique is normally known as data abstraction. Normally, objects of a given
type are instances of a class, whose definition specifies the private (internal) working of
these objects as well as their public interface. Basically, the creation of an object is
referred to as instantiation: an instance is created from a class definition with appropriate
initial values.

Another powerful feature of OOP is the concept of inheritance (derived classes in C++),
meaning the derivation of a similar or related object (the derived object) from a more
general base object. The derived class inherits the properties of its base class and also
adds its own data and routines. The concept above is known as single inheritance, but it is
also possible to derive a class from several base classes, which is known as multiple
inheritance (not allowed for classes in Java).
Polymorphism means the sending of a message to an object without concern about how
the software is going to accomplish the task; furthermore, it means that the task can
be executed in completely different ways depending on the object that receives the
message. When the decision as to which actions are going to be executed is made at
run time, the polymorphism is referred to as late binding. If the decision is made at
compile time, it is known as early binding.
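The run-time choice can be seen directly in Java, where instance method calls are late-bound by default (the class names below are invented for the example):

```java
class Animal {
    public String speak() { return "..."; }
}

class Cat extends Animal {
    @Override
    public String speak() { return "meow"; }
}

public class BindingDemo {
    public static void main(String[] args) {
        Animal a = new Cat(); // declared type Animal, actual type Cat
        // Late binding: the Cat override is chosen at run time from the
        // actual object, not from the declared type at compile time.
        System.out.println(a.speak());
    }
}
```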



After this brief introduction, we now describe the major advantages of OOP.
• Simplicity: software objects model real world objects, so the complexity is
reduced and the program structure is very clear;
• Modularity: each object forms a separate entity whose internal workings are
decoupled from other parts of the system;
• Modifiability: it is easy to make minor changes in the data representation or the
procedures in an OO program. Changes inside a class do not affect any other part
of a program, since the only public interface that the external world has to a class
is through the use of methods;
• Extensibility: adding new features or responding to changing operating
environments can be solved by introducing a few new objects and modifying
some existing ones;
• Maintainability: objects can be maintained separately, making locating and fixing
problems easier;
• Re-usability: objects can be reused in different programs.
Like structured programming in legacy systems, object-oriented programming (OOP) is
used to manage the complexity of software systems. However, OOP technology
provides several advantages. OOP applications are easier to maintain, have more
reusable components, and are more scalable, to name a few.
Maintainable
OOP methods make code more maintainable. Identifying the source of errors becomes
easier because objects are self-contained (encapsulation). The principles of good OOP
design contribute to an application's maintainability.
Reusable
Because objects contain both data and functions that act on data, objects can be
thought of as self-contained "boxes" (encapsulation). This feature makes it easy to reuse
code in new systems. Messages provide a predefined interface to an object's data and
functionality. If you know this interface, you can make use of an object in any context
you want. OOP languages, such as C# and VB.Net, make it easy to expand on the
functionality of these "boxes" (polymorphism and inheritance), even if you don't know
much about their implementation (again, encapsulation).

Scalable
OO applications are more scalable than their structured programming roots. As
an object's interface provides a roadmap for reusing the object in new software, it also
provides you with all the information you need to replace the object without affecting
other code. This makes it easy to replace old and aging code with faster algorithms and
newer technology.
There are three major features in object-oriented programming: encapsulation, inheritance
and polymorphism.
1. Encapsulation
Encapsulation enforces modularity.
Encapsulation refers to the creation of self-contained modules that bind processing
functions to the data. These user-defined data types are called "classes," and one
instance of a class is an "object." Encapsulation ensures good code modularity, which
keeps routines (i.e. methods) separate and less prone to conflict with each other.
2. Inheritance



Inheritance passes "knowledge" down.
Classes are created in hierarchies, and inheritance allows the structure and methods
in one class to be passed down the hierarchy. That means less programming is
required when adding functions to complex systems. If a step is added at the bottom
of a hierarchy, then only the processing and data associated with that unique step
needs to be added. Everything else about that step is inherited. The ability to reuse
existing objects is considered a major advantage of object technology.
3. Polymorphism
Polymorphism takes any shape.
Object-oriented programming allows procedures about objects to be created whose
exact type is not known until runtime. For example, a screen cursor may change its
shape from an arrow to a line depending on the program mode. The routine to move
the cursor on screen in response to mouse movement would be written for "cursor,"
and polymorphism allows that cursor to take on whatever shape is required at
runtime. It also allows new shapes to be easily integrated.

1.2 Describe how the principle of "separation of concerns" has been applied to the main system tiers of a Java Platform, Enterprise Edition application. Tiers include client (both GUI and web), web (web container), business (EJB container), integration, and resource tiers.

Ref:
• [JEE_5_TUTORIAL] Chapter 1.

A tier is a vertical view of a system based on the separation of functions across multiple
machines.

The Java EE platform uses a distributed multitier application model for enterprise applications.
In the following description, note how the application logic is divided into components
according to function, and how the various application components that make up a Java EE
application are installed on different machines depending on the tier in the multitier Java EE
environment to which the application component belongs.

Figure below shows multitier Java EE applications divided into the tiers described in the
following list:

• Client-tier components run on the client machine.
• Web-tier components run on the Java EE server.
• Business-tier components run on the Java EE server.
• Enterprise information system (EIS)-tier software runs on the EIS server.
Although a Java EE application can consist of three or four tiers, Java EE multitier
applications are generally considered to be three-tiered applications because they are
distributed over three locations: client machines, the Java EE server machine, and the
database or legacy machines at the back end. Three-tiered applications that run in this way
extend the standard two-tiered client and server model by placing a multithreaded application
server between the client application and back-end storage.
Java EE applications are made up of components. A Java EE component is a self-contained
functional software unit that is assembled into a Java EE application with its related classes
and files and that communicates with other components. The Java EE specification defines the
following Java EE components:

• Application clients and applets are components that run on the client.
• Java Servlet, JavaServer Faces, and JavaServer Pages (JSP) technology components
are web components that run on the server.
• Enterprise JavaBeans (EJB) components (enterprise beans) are business components
that run on the server.
Java EE Clients
• Web Clients
A Web Client consists of two parts:
○ Dynamic web pages containing various types of markup language (HTML,
XML, and so on), which are generated by web components running in the
web tier.

○ Web browser, which renders the pages received from the server.
A Web Client is sometimes called a thin client. Thin clients usually do not query
databases, execute complex business rules, or connect to legacy applications. When
you use a thin client, such heavyweight operations are off-loaded to enterprise beans
executing on the Java EE server, where they can leverage the security, speed,
services, and reliability of Java EE server-side technologies.
A web page received from the web tier can include an embedded applet. An applet
is a small client application written in the Java programming language that executes
in the Java virtual machine installed in the web browser. However, client systems will
likely need the Java Plug-in and possibly a security policy file for the applet to
successfully execute in the web browser.
Web components (Servlets, JSF or JSP) are the preferred API for creating a web
client program because no plug-ins or security policy files are needed on the client
systems. Also, web components enable cleaner and more modular application design
because they provide a way to separate applications programming from web page
design. Personnel involved in web page design thus do not need to understand Java
programming language syntax to do their jobs.
• Application Clients
An application client runs on a client machine and provides a way for users to handle
tasks that require a richer user interface than can be provided by a markup
language. It typically has a graphical user interface (GUI) created from the Swing or
the Abstract Window Toolkit (AWT) API, but a command-line interface is certainly
possible.
Application clients directly access enterprise beans running in the business tier.
However, if application requirements warrant it, an application client can open an
HTTP connection to establish communication with a servlet running in the web tier.
Application clients written in languages other than Java can interact with Java EE 5
servers, enabling the Java EE 5 platform to interoperate with legacy systems, clients,
and non-Java languages.
Figure below shows the various elements that can make up the client tier.
The client communicates with the business tier running on the Java EE server either directly
or, as in the case of a client running in a browser, by going through JSP pages or servlets
running in the web tier. Your Java EE application uses a thin browser-based client or thick
application client. In deciding which one to use, you should be aware of the trade-offs between
keeping functionality on the client and close to the user (thick client) and off-loading as much
functionality as possible to the server (thin client). The more functionality you off-load to the
server, the easier it is to distribute, deploy, and manage the application; however, keeping
more functionality on the client can make for a better perceived user experience.


Web Components
Java EE web components are either servlets or pages created using JSP technology (JSP
pages) and/or JavaServer Faces technology. Servlets are Java programming language classes
that dynamically process requests and construct responses. JSP pages are text-based
documents that execute as servlets but allow a more natural approach to creating static
content. JavaServer Faces technology builds on servlets and JSP technology and provides a
user interface component framework for web applications.


Static HTML pages and applets are bundled with web components during application assembly
but are not considered web components by the Java EE specification. Server-side utility
classes can also be bundled with web components and, like HTML pages, are not considered
web components.
As shown in figure below, the web tier, like the client tier, might include a JavaBeans
component to manage the user input and send that input to enterprise beans running in the
business tier for processing.

Business Components
Business code, which is logic that solves or meets the needs of a particular business domain
such as banking, retail, or finance, is handled by enterprise beans running in the business tier.
Figure below shows how an enterprise bean receives data from client programs, processes it
(if necessary), and sends it to the enterprise information system tier for storage. An enterprise
bean also retrieves data from storage, processes it (if necessary), and sends it back to the
client program.
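As a rough sketch of such a business component, using Java EE 5 annotations (the bean and interface names here are hypothetical, and this cannot run outside an EJB container):

```java
import javax.ejb.Stateless;

// Sketch of a stateless session bean: the container instantiates it,
// pools it, and routes client calls through the business interface.
@Stateless
public class CurrencyServiceBean implements CurrencyService {
    public double toLocalCurrency(double amount, double rate) {
        // business logic runs in the EJB container on the Java EE server
        return amount * rate;
    }
}

interface CurrencyService {
    double toLocalCurrency(double amount, double rate);
}
```

Because the bean implements a single interface, EJB 3.0 treats it as the bean's local business interface without further annotation.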

The enterprise information system (EIS) tier handles EIS software and includes enterprise
infrastructure systems such as enterprise resource planning (ERP), mainframe transaction
processing, database systems, and other legacy information systems. For example, Java EE
application components might need access to enterprise information systems for database
connectivity.

1.3 Describe how the principle of "separation of concerns" has been applied to the
layers of a Java EE application. Layers include (1) application, (2) virtual platform
(component APIs), (3) application infrastructure (containers), (4) enterprise services
(operating system and virtualization), (5) compute and storage, and the networking
infrastructure layers.

Ref • [JEE_5_TUTORIAL] Chapter 1.
    • [JEE_5_TUTORIAL] Chapter 2.

A layer is a horizontal, virtual view of a system in which each layer is built on top of
the layer below it.

The following sections describe the layers listed in the objective:

(1) Java EE Application
A Java EE application is packaged into one or more standard units for deployment to any Java
EE platform-compliant system. Each unit contains:
• A functional component or components (enterprise bean, JSP page, servlet, applet,
etc.)
• An optional deployment descriptor that describes its content
Once a Java EE unit has been produced, it is ready to be deployed. Deployment typically
involves using a platform's deployment tool to specify location-specific information, such as a
list of local users that can access it and the name of the local database. Once deployed on a
local platform, the application is ready to run.
A Java EE application is delivered in an Enterprise Archive (EAR) file, a standard Java Archive
(JAR) file with an .ear extension. Using EAR files and modules makes it possible to assemble
a number of different Java EE applications using some of the same components. No extra
coding is needed; it is only a matter of assembling (or packaging) various Java EE modules
into Java EE EAR files.
An EAR file contains Java EE modules and deployment descriptors. A deployment descriptor is
an XML document with an .xml extension that describes the deployment settings of an
application, a module, or a component. Because deployment descriptor information is
declarative, it can be changed without the need to modify the source code. At runtime, the
Java EE server reads the deployment descriptor and acts upon the application, module, or
component accordingly.
A Java EE module consists of one or more Java EE components for the same container type
and one component deployment descriptor of that type. An enterprise bean module
deployment descriptor, for example, declares transaction attributes and security
authorizations for an enterprise bean. A Java EE module without an application deployment
descriptor can be deployed as a stand-alone module. The four types of Java EE modules are as
follows:
• EJB modules, which contain class files for enterprise beans and an EJB deployment
descriptor. EJB modules are packaged as JAR files with a .jar extension.
• Web modules, which contain servlet class files, JSP files, supporting class files, GIF
and HTML files, and a web application deployment descriptor. Web modules are
packaged as JAR files with a .war (Web ARchive) extension.
• Application client modules, which contain class files and an application client
deployment descriptor. Application client modules are packaged as JAR files with a
.jar extension.
• Resource adapter modules, which contain all Java interfaces, classes, native
libraries, and other documentation, along with the resource adapter deployment
descriptor. Together, these implement the Connector architecture (J2EE Connector
Architecture) for a particular EIS. Resource adapter modules are packaged as JAR
files with a .rar (resource adapter archive) extension.
(2) Java EE 5 APIs

• Enterprise JavaBeans Technology
An Enterprise JavaBeans (EJB) component, or enterprise bean, is a body of code
having fields and methods to implement modules of business logic. You can think of
an enterprise bean as a building block that can be used alone or with other
enterprise beans to execute business logic on the Java EE server.
There are two kinds of enterprise beans: session beans and message-driven beans.
A session bean represents a transient conversation with a client. When the client
finishes executing, the session bean and its data are gone. A message-driven bean
combines features of a session bean and a message listener, allowing a business
component to receive messages asynchronously. Commonly, these are Java Message
Service (JMS) messages.
In Java EE 5, entity beans have been replaced by Java persistence API entities. An
entity represents persistent data stored in one row of a database table. If the client
terminates, or if the server shuts down, the persistence manager ensures that the
entity data is saved.
• Java Servlet Technology
Java servlet technology lets you define HTTP-specific servlet classes. A servlet class
extends the capabilities of servers that host applications that are accessed by way of
a request-response programming model. Although servlets can respond to any type
of request, they are commonly used to extend the applications hosted by web
servers.
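A minimal servlet sketch (the class name and response text are illustrative; the class is deployed into a web container, not run directly):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of an HTTP-specific servlet: the web container calls doGet()
// for each GET request, and the servlet constructs the response dynamically.
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>Hello from the web tier</body></html>");
    }
}
```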
• JavaServer Pages Technology
JavaServer Pages (JSP) technology lets you put snippets of servlet code directly into
a text-based document. A JSP page is a text-based document that contains two
types of text: static data (which can be expressed in any text-based format such as
HTML, WML, and XML) and JSP elements, which determine how the page constructs
dynamic content.
• JavaServer Pages Standard Tag Library
The JavaServer Pages Standard Tag Library (JSTL) encapsulates core functionality
common to many JSP applications. Instead of mixing tags from numerous vendors in
your JSP applications, you employ a single, standard set of tags. This standardization

allows you to deploy your applications on any JSP container that supports JSTL and
makes it more likely that the implementation of the tags is optimized.
JSTL has iterator and conditional tags for handling flow control, tags for manipulating
XML documents, internationalization tags, tags for accessing databases using SQL,
and commonly used functions.
• JavaServer Faces
JavaServer Faces technology is a user interface framework for building web
applications. The main components of JavaServer Faces technology are as follows:
○ A GUI component framework.
○ A flexible model for rendering components in different kinds of HTML or
different markup languages and technologies. A Renderer object generates
the markup to render the component and converts the data stored in a
model object to types that can be represented in a view.
○ A standard RenderKit for generating HTML 4.01 markup.
The following features support the GUI components:
○ Input validation
○ Event handling
○ Data conversion between model objects and components
○ Managed model object creation
○ Page navigation configuration
All this functionality is available using standard Java APIs and XML-based
configuration files.
• Java Message Service API
The Java Message Service (JMS) API is a messaging standard that allows Java EE
application components to create, send, receive, and read messages. It enables
distributed communication that is loosely coupled, reliable, and asynchronous.
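A hedged sketch of a point-to-point JMS send; the JNDI names "jms/MyConnectionFactory" and "jms/OrderQueue" are deployment-time assumptions, not part of the specification:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Sketch of sending a text message to a queue; the receiver consumes
// it asynchronously, decoupling sender and receiver.
public class JmsSendSketch {
    public static void send(String text) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf =
                (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage(text);
            producer.send(msg);   // delivered asynchronously to the consumer
        } finally {
            conn.close();
        }
    }
}
```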
• Java Transaction API
The Java Transaction API (JTA) provides a standard interface for demarcating
transactions. The Java EE architecture provides a default auto commit to handle
transaction commits and rollbacks. An auto commit means that any other
applications that are viewing data will see the updated data after each database read
or write operation. However, if your application performs two separate database
access operations that depend on each other, you will want to use the JTA API to
demarcate where the entire transaction, including both operations, begins, rolls
back, and commits.
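Demarcating such a two-operation transaction with JTA might look like the sketch below; the two account methods are placeholders for the dependent database operations:

```java
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// Sketch of explicit transaction demarcation with JTA: both updates
// commit together or roll back together.
public class TransferSketch {
    public void transfer() throws Exception {
        UserTransaction ut = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        ut.begin();
        try {
            debitAccount();   // first database operation
            creditAccount();  // second, dependent operation
            ut.commit();      // both succeed, or...
        } catch (Exception e) {
            ut.rollback();    // ...both are undone
            throw e;
        }
    }

    private void debitAccount()  { /* JDBC or JPA work */ }
    private void creditAccount() { /* JDBC or JPA work */ }
}
```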

• JavaMail API
Java EE applications use the JavaMail API to send email notifications. The JavaMail
API has two parts: an application-level interface used by the application components
to send mail, and a service provider interface. The Java EE platform includes
JavaMail with a service provider that allows application components to send Internet
mail.
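A minimal JavaMail sketch using the application-level interface; the SMTP host and addresses are placeholders:

```java
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// Sketch of sending an email notification; in a Java EE server the
// Session would usually be obtained from JNDI rather than built here.
public class MailSketch {
    public static void notifyUser() throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // assumed host
        Session session = Session.getInstance(props);

        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("app@example.com"));
        msg.setRecipient(Message.RecipientType.TO,
                new InternetAddress("user@example.com"));
        msg.setSubject("Order confirmation");
        msg.setText("Your order has been processed.");
        Transport.send(msg);
    }
}
```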
• JavaBeans Activation Framework
The JavaBeans Activation Framework (JAF) is included because JavaMail uses it. JAF
provides standard services to determine the type of an arbitrary piece of data,
encapsulate access to it, discover the operations available on it, and create the
appropriate JavaBeans component to perform those operations.
• Java API for XML Processing

The Java API for XML Processing (JAXP), part of the Java SE platform, supports the
processing of XML documents using Document Object Model (DOM), Simple API for
XML (SAX), and Extensible Stylesheet Language Transformations (XSLT). JAXP
enables applications to parse and transform XML documents independent of a
particular XML processing implementation.
JAXP also provides namespace support, which lets you work with schemas that might
otherwise have naming conflicts. Designed to be flexible, JAXP lets you use any XML-
compliant parser or XSL processor from within your application and supports the
W3C schema.
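A small JAXP sketch, parsing a document with whatever DOM-capable parser the factory discovers at runtime:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Parse an XML string into a DOM tree and read the root element name;
// the code is independent of the particular parser implementation.
public class JaxpDemo {
    public static String rootName(String xml) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
                new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getDocumentElement().getTagName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(rootName("<order><item>book</item></order>")); // order
    }
}
```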
• Java API for XML Web Services (JAX-WS)
The JAX-WS specification provides support for web services that use the JAXB API for
binding XML data to Java objects. The JAX-WS specification defines client APIs for
accessing web services as well as techniques for implementing web service
endpoints. The Web Services for J2EE specification describes the deployment of JAX-
WS-based services and clients. The EJB and servlet specifications also describe
aspects of such deployment. It must be possible to deploy JAX-WS-based
applications using any of these deployment models.
The JAX-WS specification describes the support for message handlers that can
process message requests and responses. In general, these message handlers
execute in the same container and with the same privileges and execution context as
the JAX-WS client or endpoint component with which they are associated. These
message handlers have access to the same JNDI java:comp/env namespace as
their associated component. Custom serializers and deserializers, if supported, are
treated in the same way as message handlers.
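A minimal JAX-WS endpoint sketch (the service name and business logic are illustrative; the container generates the WSDL and handles the SOAP plumbing):

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Sketch of a JAX-WS service endpoint implementation class: annotated
// methods become web service operations.
@WebService
public class QuoteService {
    @WebMethod
    public double getQuote(String symbol) {
        return 42.0; // placeholder business logic
    }
}
```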
• Java Architecture for XML Binding (JAXB)
The Java Architecture for XML Binding (JAXB) provides a convenient way to bind an
XML schema to a representation in Java language programs. JAXB can be used
independently or in combination with JAX-WS, where it provides a standard data
binding for web service messages. All Java EE application client containers, web
containers, and EJB containers support the JAXB API.
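A JAXB binding sketch; the Customer class is hypothetical:

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

// Sketch of JAXB data binding: an annotated Java class is marshalled
// to an XML document (and could be unmarshalled back).
@XmlRootElement
public class Customer {
    public String name;

    public static String toXml(Customer c) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Customer.class);
        Marshaller m = ctx.createMarshaller();
        StringWriter out = new StringWriter();
        m.marshal(c, out);     // Java object -> XML document
        return out.toString();
    }
}
```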
• SOAP with Attachments API for Java
The SOAP with Attachments API for Java (SAAJ) is a low-level API on which JAX-WS
and JAXR depend. SAAJ enables the production and consumption of messages that
conform to the SOAP 1.1 specification and SOAP with Attachments note. Most
developers do not use the SAAJ API, instead using the higher-level JAX-WS API.
• Java API for XML Registries
The Java API for XML Registries (JAXR) lets you access business and general-purpose
registries over the web. JAXR supports the ebXML Registry and Repository standards
and the emerging UDDI specifications. By using JAXR, developers can learn a single

API and gain access to both of these important registry technologies.
Additionally, businesses can submit material to be shared and search for material
that others have submitted. Standards groups have developed schemas for particular
kinds of XML documents; two businesses might, for example, agree to use the
schema for their industry’s standard purchase order form. Because the schema is
stored in a standard business registry, both parties can use JAXR to access it.
• J2EE Connector Architecture (JCA)
The J2EE Connector architecture is used by tools vendors and system integrators to
create resource adapters that support access to enterprise information systems that
can be plugged in to any Java EE product. A resource adapter is a software
component that allows Java EE application components to access and interact with
the underlying resource manager of the EIS. Because a resource adapter is specific
to its resource manager, typically there is a different resource adapter for each type
of database or enterprise information system.

The J2EE Connector architecture also provides a performance-oriented, secure,
scalable, and message-based transactional integration of Java EE-based web services
with existing EISs that can be either synchronous or asynchronous. Existing
applications and EISs integrated through the J2EE Connector architecture into the
Java EE platform can be exposed as XML-based web services by using JAX-WS and
Java EE component models. Thus JAX-WS and the J2EE Connector architecture are
complementary technologies for enterprise application integration (EAI) and end-to-
end business integration.
• Java Database Connectivity (JDBC) API
The Java Database Connectivity (JDBC) API lets you invoke SQL commands from
Java programming language methods. You use the JDBC API in an enterprise bean
when you have a session bean access the database. You can also use the JDBC API
from a servlet or a JSP page to access the database directly without going through
an enterprise bean.
The JDBC API has two parts: an application-level interface used by the application
components to access a database, and a service provider interface to attach a JDBC
driver to the Java EE platform.
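A JDBC sketch along these lines; the table and column names are placeholders, and the DataSource would normally be obtained through JNDI:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Sketch of invoking SQL through the JDBC application-level interface;
// the driver behind the DataSource is attached via the provider interface.
public class JdbcSketch {
    public static String findCustomerName(DataSource ds, int id) throws Exception {
        Connection conn = ds.getConnection();   // from the container's pool
        try {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT name FROM customer WHERE id = ?");
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString("name") : null;
        } finally {
            conn.close();                        // returns connection to pool
        }
    }
}
```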
• Java Persistence API (JPA)
The Java Persistence API is a Java standards-based solution for persistence.
Persistence uses an object-relational mapping approach to bridge the gap between
an object oriented model and a relational database. Java Persistence consists of
three areas:
○ The Java Persistence API
○ The query language
○ Object/relational mapping metadata
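A minimal JPA entity sketch; the Account class and its columns are illustrative:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Sketch of a JPA entity: each instance maps to one row of a
// (hypothetical) ACCOUNT table via object/relational mapping metadata.
@Entity
public class Account {
    @Id
    @GeneratedValue
    private Long id;         // primary key column

    private String owner;    // mapped to an OWNER column by default

    public Long getId() { return id; }
    public String getOwner() { return owner; }
    public void setOwner(String owner) { this.owner = owner; }
}
```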

• Java Naming and Directory Interface (JNDI)


The Java Naming and Directory Interface (JNDI) provides naming and directory
functionality, enabling applications to access multiple naming and directory services,
including existing naming and directory services such as LDAP, NDS, DNS, and NIS.
It provides applications with methods for performing standard directory operations,
such as associating attributes with objects and searching for objects using their
attributes. Using JNDI, a Java EE application can store and retrieve any type of
named Java object, allowing Java EE applications to coexist with many legacy
applications and systems.
Java EE naming services provide application clients, enterprise beans, and web
components with access to a JNDI naming environment. A naming environment
allows a component to be customized without the need to access or change the
component's source code. A container implements the component's environment and
provides it to the component as a JNDI naming context.
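A JNDI lookup sketch; the resource name "jdbc/OrderDB" is a placeholder that would be declared as a resource reference in the deployment descriptor:

```java
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch of looking up a resource through the component's JNDI naming
// environment; the java:comp/env prefix is the component's own context.
public class JndiSketch {
    public static DataSource lookupDataSource() throws Exception {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/OrderDB");
    }
}
```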
• Java Authentication and Authorization Service
The Java Authentication and Authorization Service (JAAS) provides a way for a Java
EE application to authenticate and authorize a specific user or group of users to run
it.
JAAS is a Java programming language version of the standard Pluggable
Authentication Module (PAM) framework, which extends the Java Platform security
architecture to support user-based authorization.
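A JAAS sketch; the configuration entry name "AppRealm" is an assumption that would be supplied by the deployer's login configuration:

```java
import javax.security.auth.login.LoginContext;

// Sketch of JAAS authentication: the LoginContext runs whatever
// pluggable LoginModules the named configuration entry specifies.
public class JaasSketch {
    public static void authenticate() throws Exception {
        LoginContext lc = new LoginContext("AppRealm"); // assumed config entry
        lc.login();    // runs the configured LoginModules
        // lc.getSubject() now carries the authenticated Principals
        lc.logout();
    }
}
```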

(3) Container Services

Containers are the interface between a component and the low-level platform-specific
functionality that supports the component. Before a web, enterprise bean, or application client
component can be executed, it must be assembled into a Java EE module and deployed into
its container.
The assembly process involves specifying container settings for each component in the Java
EE application and for the Java EE application itself. Container settings customize the
underlying support provided by the Java EE server, including services such as security,
transaction management, Java Naming and Directory Interface (JNDI) lookups, and remote
connectivity. Here are some of the highlights:
• The Java EE security model lets you configure a web component or enterprise bean
so that system resources are accessed only by authorized users.
• The Java EE transaction model lets you specify relationships among methods that
make up a single transaction so that all methods in one transaction are treated as a
single unit.
• JNDI lookup services provide a unified interface to multiple naming and directory
services in the enterprise so that application components can access these services.
• The Java EE remote connectivity model manages low-level communications between
clients and enterprise beans. After an enterprise bean is created, a client invokes
methods on it as if it were in the same virtual machine.
Because the Java EE architecture provides configurable services, application components
within the same Java EE application can behave differently based on where they are deployed.
For example, an enterprise bean can have security settings that allow it a certain level of
access to database data in one production environment and another level of database access
in another production environment.
The container also manages non-configurable services such as enterprise bean and servlet life
cycles, database connection resource pooling, data persistence, and access to the Java EE
platform APIs.

(5) Communication Technologies


Communication technologies provide mechanisms for communication between clients and
servers and between collaborating objects hosted by different servers. The J2EE specification
requires support for the following types of communication technologies:
• Internet protocols
Internet protocols define the standards by which the different pieces of the J2EE
platform communicate with each other and with remote entities. The J2EE platform
supports the following Internet protocols:
○ TCP/IP - Transmission Control Protocol over Internet Protocol. These two
protocols provide for the reliable delivery of streams of data from one host to
another. Internet Protocol (IP), the basic protocol of the Internet, enables
the unreliable delivery of individual packets from one host to another. IP
makes no guarantees as to whether the packet will be delivered, how long it
will take, or if multiple packets will arrive in the order they were sent. The
Transmission Control Protocol (TCP) adds the notions of connection and
reliability.
○ HTTP 1.0 - Hypertext Transfer Protocol. The Internet protocol used to fetch
hypertext objects from remote hosts. HTTP messages consist of requests
from client to server and responses from server to client.
○ SSL 3.0 - Secure Socket Layer. A security protocol that provides privacy
over the Internet. The protocol allows client-server applications to
communicate in a way that is protected from eavesdropping and tampering.
Servers are always authenticated and clients are optionally authenticated.

• Remote Method Invocation Protocols
Remote Method Invocation (RMI) is a set of APIs that allow developers to build
distributed applications in the Java programming language. RMI uses Java language
interfaces to define remote objects and a combination of Java serialization
technology and the Java Remote Method Protocol (JRMP) to turn local method
invocations into remote method invocations. The J2EE platform supports the JRMP
protocol, the transport mechanism for communication between objects in the Java
language in different address spaces.
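A minimal RMI remote interface sketch (the interface and method are illustrative):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Sketch of an RMI remote interface: calls made on a stub implementing
// this interface are serialized and carried over JRMP to the remote object.
public interface StockQuote extends Remote {
    double getPrice(String symbol) throws RemoteException;
}
```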
• Object Management Group Protocols
Object Management Group (OMG) protocols allow objects hosted by the J2EE
platform to access remote objects developed using the OMG's Common Object
Request Broker Architecture (CORBA) technologies and vice versa. CORBA objects
are defined using the Interface Definition Language (IDL). An application component
provider defines the interface of a remote object in IDL and then uses an IDL
compiler to generate client and server stubs that connect object implementations to
an Object Request Broker (ORB), a library that enables CORBA objects to locate and
communicate with one another. ORBs communicate with each other using the
Internet Inter-ORB Protocol (IIOP). The OMG technologies required by the J2EE
platform are Java IDL and RMI-IIOP.
○ Java IDL
Java IDL allows Java clients to invoke operations on CORBA objects that
have been defined using IDL and implemented in any language with a
CORBA mapping. Java IDL is part of the J2SE platform. It consists of a
CORBA API and ORB. An application component provider uses the idlj IDL
compiler to generate a Java client stub for a CORBA object defined in IDL.
The Java client is linked with the stub and uses the CORBA API to access the
CORBA object.
○ RMI-IIOP
RMI-IIOP is an implementation of the RMI API over IIOP. RMI-IIOP allows
application component providers to write remote interfaces in the Java
programming language. The remote interface can be converted to IDL and
implemented in any other language that is supported by an OMG mapping
and an ORB for that language. Clients and servers can be written in any
language using IDL derived from the RMI interfaces. When remote interfaces
are defined as Java RMI interfaces, RMI over IIOP provides interoperability
with CORBA objects implemented in any language.
• Messaging Technologies

Messaging technologies provide a way to asynchronously send and receive
messages. The Java Message Service API provides an interface for handling
asynchronous requests, reports, or events that are consumed by enterprise
applications. JMS messages are used to coordinate these applications. The JavaMail
API provides an interface for sending and receiving messages intended for users.
Although either API can be used for asynchronous notification, JMS is preferred when
speed and reliability are a primary requirement.
○ Java Message Service API
The Java Message Service (JMS) API allows J2EE applications to access
enterprise messaging systems such as IBM MQ Series and TIBCO
Rendezvous. JMS messages contain well-defined information that describes
specific business actions. Through the exchange of these messages,
applications track the progress of enterprise activities. The JMS API supports
both point-to-point and publish-subscribe styles of messaging.

○ JavaMail API
The JavaMail API provides a set of abstract classes and interfaces that
comprise an electronic mail system. The abstract classes and interfaces
support many different implementations of message stores, formats, and
transports. Many simple applications will only need to interact with the
messaging system through these base classes and interfaces.
• Data Formats
Data formats define the types of data that can be exchanged between components.
The J2EE platform requires support for the following data formats:
○ HTML - The markup language used to define hypertext documents
accessible over the Internet. HTML enables the embedding of images,
sounds, video streams, form fields, references to other HTML documents,
and basic text formatting. HTML documents have a globally unique location
and can link to one another.
○ Image files - The J2EE platform supports two formats for image files: GIF
(Graphics Interchange Format), a protocol for the online transmission and
interchange of raster graphic data, and JPEG (Joint Photographic Experts
Group), a standard for compressing gray-scale or color still images.
○ JAR file - A platform-independent file format that permits many files to be
aggregated into one file.
○ Class file - The format of a compiled Java file as specified in the Java Virtual
Machine specification. Each class file contains one Java language type -
either a class or an interface - and consists of a stream of 8-bit bytes.
○ XML - A text-based markup language that allows you to define the markup
needed to identify the data and text in structured documents. As with HTML,
you identify data using tags. But unlike HTML, XML tags describe the data,
rather than the format for displaying it. In the same way that you define the
field names for a data structure, you are free to use any XML tags that make
sense for a given application. When multiple applications share XML data,
they have to agree on the tag names they intend to use.


2. Common Architectures (28 pages)

1 Explain the advantages and disadvantages of two-tier architectures when
  examined under the following topics: scalability, maintainability, reliability,
  availability, extensibility, performance, manageability, and security.

2 Explain the advantages and disadvantages of three-tier architectures when
  examined under the following topics: scalability, maintainability, reliability,
  availability, extensibility, performance, manageability, and security.

3 Explain the advantages and disadvantages of multi-tier architectures when
  examined under the following topics: scalability, maintainability, reliability,
  availability, extensibility, performance, manageability, and security.

4 Explain the benefits and drawbacks of rich clients and browser-based clients as
  deployed in a typical Java EE application.

5 Explain appropriate and inappropriate uses for web services in the Java EE
  platform.

Explain the advantages and disadvantages of two-tier architectures when


examined under the following topics: scalability, maintainability,
2.1 reliability, availability, extensibility, performance, manageability, and
security.

Ref. • [SUN_SL_425]

A two-tier architecture is also known as the client-server model. The most basic type of
client-server architecture employs only two types of hosts: clients and servers. This type
of architecture is sometimes referred to as two-tier. It allows devices to share files and
resources. In a two-tier architecture the client acts as one tier, and the application in
combination with the server acts as the other tier.
Client-server describes the relationship between two computer programs in which one
program, the client program, makes a service request to another, the server program.
Standard networked functions such as email exchange, web access and database
access, are based on the client-server model.
Two Tier Software Architectures
Two tier architectures consist of three components distributed in two tiers: client
(requester of services) and server (provider of services). The three components are:


• User System Interface (such as session, text input, dialog, and display
management services)
• Processing Management (such as process development, process enactment,
process monitoring, and process resource services)
• Database Management (such as data and file services)

The two tier design allocates the user system interface exclusively to the client. It places
database management on the server and splits the processing management between
client and server, creating two layers.
In general, the user system interface client invokes services from the database
management server. In many two tier designs, most of the application portion of
processing is in the client environment. The database management server usually
provides the portion of the processing related to accessing data (often implemented in
stored procedures). Clients commonly communicate with the server through SQL
statements or a call-level interface. It should be noted that connectivity between tiers can
be dynamically changed depending upon the user's request for data and services.
Two tier software architectures are used extensively in non-time critical information
processing where management and operations of the system are not complex. This
design is used frequently in decision support systems where the transaction load is light.
Two tier software architectures require minimal operator intervention. The two tier
architecture works well in relatively homogeneous environments with processing rules
(business rules) that do not change very often and when workgroup size is expected to
be fewer than 100 users, such as in small businesses.

Two-Tier Architecture
In the early 1980s, personal computers (PCs) became very popular. They were less
expensive and had more processing power than their dumb terminal counterparts.
paved the way for true distributed, or client-server, computing. The client or the PCs now
ran the user interface programs. It also supported graphical user interfaces (GUIs),
allowing the users to enter data and interact with the mainframe server. The mainframe
server now hosted only the business rules and data. Once the data entry was complete,
the GUI application could optionally perform validations and then send the data to the
server for execution of the business logic. Oracle Forms–based applications are a good
example of two-tier architecture. The forms provide the GUI loaded on the PCs, and the
business logic (coded as stored procedures) and data remain on the Oracle database
server.


Then there was another form of two-tier architecture in which not only the UI but even
the business logic resided on the client tier. This kind of application typically connected
to a database server to run various queries. These clients are referred to as thick or fat
clients because they had a significant portion of the executable code in the client tier
(see Figure 2).

A two-tier architecture is appropriate when:

• The application is expected to support a limited number of users (e.g. no more
than a few hundred)
• The application is networked and databases are "local" (i.e. not over WAN or
Internet)
• A normal level of security is required (data is not overly sensitive)
• Data access from outside applications is minimal

Disadvantages of the two-tier architecture:

• Scalability
The most important limitation of the two-tier architecture is that it is not scalable,
because each client requires its own database session. The two tier design will scale
up to service 100 users on a network. Beyond this number of users,
the performance capacity is exceeded. This is because the client and server
exchange "keep alive" messages continuously, even when no work is being done,
thereby saturating the network.
Implementing business logic in stored procedures can limit scalability because as
more application logic is moved to the database management server, the need for
processing power grows. Each client uses the server to execute some part of its
application code, and this will ultimately reduce the number of users that can be
accommodated.


• Interoperability
The two tier architecture limits interoperability by using stored procedures to
implement complex processing logic (such as managing distributed database
integrity) because stored procedures are normally implemented using a commercial
database management system's proprietary language. This means that to change or
interoperate with more than one type of database management system, applications
may need to be rewritten. Moreover, database management system's proprietary
languages are generally not as capable as standard programming languages in that
they do not provide a robust programming environment with testing and debugging,
version control, and library management capabilities.
• System administration and configuration
Two tier architectures can be difficult to administer and maintain because when
applications reside on the client, every upgrade must be delivered, installed, and
tested on each client. The typical lack of uniformity in the client configurations and
lack of control over subsequent configuration changes increase administrative
workload.
• Batch jobs
The two-tiered architecture is not effective at running batch programs. The client is
typically tied up until the batch job finishes, even if the job executes on the server;
thus, the batch job and client users are negatively affected.

• Performance
Inadequate performance for medium to high volume environments.

The main disadvantages of the 2-tier model are as follows:

• Scalability: a key concern with the 2-tier model is scalability.


• Application performance can be expected to degrade rapidly when the number of
concurrent users reaches a threshold between a few hundred and one thousand
users. This is true even for large database servers. The chief reason is that each
client requires its own connection and each connection requires CPU and
memory. As the number of connections increases, the database performance
degrades.
• Poor Logic Sharing: Traditional two-tier architectures keep business logic on the
client. When logic is in the client, it is usually more difficult to re-use logic
between applications and amongst tools.

• Application Distribution: Application changes have to be distributed to each client.
When there are a large number of users, this entails considerable administrative
overhead.
• Remote Usage: Remote users (e.g. customers), probably do not want to install
your application on their clients -- they would prefer "thin" clients where minimal
(or no) client software installation is required.
• Database Structure: other applications that access your database will become
dependent on the existing database structure. This means that it is more difficult
to redesign the database since other applications are intimate with the actual
database structure.
Advantages of client server architecture:
• Centralization - access, resources, and data security are controlled through the
server.


• Accessibility - server can be accessed remotely and across multiple platforms.
• Ease of application development; simple to build and use.
• Lower total costs than “mainframe legacy systems”.
• Better understandability and maintainability: future changes to business logic
affect only the business layer.

Modified 2-tier Model


A common approach that is used to improve business logic reusability is to place the
business logic into triggers or stored procedures on the database. Validations are
performed by calling an appropriate database stored procedure. In addition, dependent
logic can be initiated by a trigger in the database. For example, the business logic might
dictate that whenever a requisition is updated to "approved", a purchase order should
automatically be created. This business rule could be effectively implemented with a
database trigger on the requisition table.
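From the client side, such a stored procedure is typically invoked through a call-level interface such as JDBC. A minimal sketch follows; the procedure name approve_requisition and its single parameter are made up for illustration, and the validation plus the trigger that creates the purchase order live inside the database:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class RequisitionClient {

    // JDBC escape syntax for invoking a stored procedure; the procedure
    // name "approve_requisition" is a hypothetical example.
    static final String APPROVE_CALL = "{call approve_requisition(?)}";

    // Marks a requisition approved. Validation and the dependent rule
    // (purchase-order creation via a database trigger) run inside the
    // database server, so every client tool invoking it behaves alike.
    static void approve(Connection con, long requisitionId) throws SQLException {
        try (CallableStatement cs = con.prepareCall(APPROVE_CALL)) {
            cs.setLong(1, requisitionId);
            cs.execute();
        }
    }
}
```

Because every client application and tool goes through the same procedure, the business rule behaves identically no matter which client initiates it, which is the re-use benefit of the modified 2-tier model.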

The approach provides several advantages compared to the traditional 2-tier model:

• Better Re-use: The same logic (in stored procedures & triggers) can be initiated
from many client applications and tools.
• Better Data Integrity: when validation logic is unconditionally initiated in database
triggers (e.g. before inserts and updates), then business integrity of the data can
be ensured.
• Improved Performance for Complex Validations: When the business logic
requires many accesses back-and-forth to the database to perform its
processing, network traffic is significantly reduced when the entire validation is
encapsulated in a stored procedure.
• Improved Security: Stored procedures can improve security since detailed
business logic is encapsulated in a more secure central server.
• Reduced Distribution: Changes to business logic only need to be updated in the
database and do not have to be distributed to all the clients.
• Performance: Adequate performance for low to medium volume environments.

The modified 2-tier approach addresses some of the concerns with the traditional
2-tier model but it still suffers from inherent 2-tier drawbacks. The most notable
continued drawback is scalability, which is addressed by the 3-tier model.

2.2 Explain the advantages and disadvantages of three-tier architectures when
examined under the following topics: scalability, maintainability,
reliability, availability, extensibility, performance, manageability, and
security.

• [SUN_SL_425]
Ref. • Multitier architecture


Three-tier [2] is a client-server architecture in which the user interface, functional process
logic ("business rules"), computer data storage and data access are developed and
maintained as independent modules, most often on separate platforms.
The three-tier model is considered to be a software architecture and a software design
pattern.
Apart from the usual advantages of modular software with well defined interfaces, the
three-tier architecture is intended to allow any of the three tiers to be upgraded or
replaced independently as requirements or technology change. For example, a change
of operating system in the presentation tier would only affect the user interface code.

Three-Tier Architecture
The three tier software architecture emerged to overcome the limitations of the two tier
architecture. The third tier (middle tier server) is between the user interface (client) and
the data management (server) components. This middle tier provides process
management where business logic and rules are executed and can accommodate
hundreds of users (as compared to only 100 users with the two tier architecture) by
providing functions such as queuing, application execution, and database staging.
The three tier architecture is used when an effective distributed client/server design is
needed that provides (when compared to the two tier) increased performance, flexibility,
maintainability, reusability, and scalability, while hiding the complexity of distributed
processing from the user.



A three tier distributed client/server architecture includes a user system interface top tier
where user services (such as session, text input, dialog, and display management)
reside.
The middle tier provides process management services (such as process development,
process enactment, process monitoring, and process resourcing) that are shared by
multiple applications.
The middle tier server (also referred to as the application server) improves performance,
flexibility, maintainability, reusability, and scalability by centralizing process logic.
Centralized process logic makes administration and change management easier by
localizing system functionality so that changes must only be written once and placed on
the middle tier server to be available throughout the systems. With other architectural
designs, a change to a function (service) would need to be written into every application.
The third tier provides database management functionality and is dedicated to data and
file services that can be optimized without using any proprietary database management
system languages. The data management component ensures that the data is
consistent throughout the distributed environment through the use of features such as
data locking, consistency, and replication. It should be noted that connectivity between
tiers can be dynamically changed depending upon the user's request for data and
services.



In addition, the middle process management tier controls transactions and asynchronous
queuing to ensure reliable completion of transactions. The middle tier manages
distributed database integrity by the two phase commit process. It provides access to
resources based on names instead of locations, and thereby improves scalability and
flexibility as system components are added or moved.
The benefits of the 3-tier model are as follows:

• Scalability: The key 3-tier benefit is improved scalability since the application
servers can be deployed on many machines. Also, the database no longer
requires a connection from every client -- it only requires connections from a
smaller number of application servers. In addition, TP monitors or ORBs can be
used to balance loads and dynamically manage the number of application
server(s) available.

• Better Re-use: The same logic can be initiated from many clients or applications.
If an object standard like COM/DCOM or CORBA is employed, then the specific
language implementation of the middle tier can be made transparent.


• Improved Data Integrity: since all updates go through the middle tier, the middle
tier can ensure that only valid data is allowed to be updated in the database and
the risk of a rogue client application corrupting data is removed.

• Improved Security: Security is improved since it can be implemented at multiple


levels (not just the database). Security can be granted on a service-by-service
basis. Since the client does not have direct access to the database, it is more
difficult for a client to obtain unauthorized data. Business logic is generally more
secure since it is placed on a more secure central server.

• Centralized logic: Changes to business logic only need to be updated on the


application servers and do not have to be distributed to all the clients.

• Improved Availability: mission-critical applications can make use of redundant


application servers and redundant database servers. With redundant servers, it is
possible to architect an application so that it can recover from network or server
failures.

• Hidden Database Structure: since the actual structure of the database is hidden
from the caller, it is possible that many database changes can be made
transparently. Therefore, a service in the middle tier that exchanges
information/data with other applications could retain its original interface while the
underlying database structure was enhanced during a new application release.

The drawbacks of the 3-tier model are as follows:

• Increased Complexity/Effort: In general, it is more difficult to build a 3-tier


application compared to a 2-tier application. The points of communication are
doubled (client to middle tier to server, instead of simply client to server) and
many handy productivity enhancements provided by client tools (e.g. Visual
Basic, PowerBuilder, Delphi) will be foregone or their benefit will be reduced with
a 3-tier architecture.
• Fewer Tools: There are many more tools available for a 2-tier model (e.g. most
reporting tools). It is likely that additional programming effort will be required to
manage tasks that an automated tool might handle in a 2-tier environment.

Advantages and Disadvantages

The three-tier Web application architecture offers the following advantages:


• High performance, lightweight persistent objects
• High degree of flexibility in deployment platform and configuration


The disadvantage of this architecture is that it is less standard than EJB.

2.3 Explain the advantages and disadvantages of multi-tier architectures when
examined under the following topics: scalability, maintainability,
reliability, availability, extensibility, performance, manageability, and
security.

• [SUN_SL_425]
Ref. • Multitier architecture

In software engineering, multi-tier architecture (often referred to as n-tier architecture)


is a client-server architecture in which the presentation, the application processing, and
the data management are logically separate processes. For example, an application that
uses middleware (from any of the types listed below) to service data requests between a
user and a database employs multi-tier architecture. The most widespread use of "multi-
tier architecture" refers to three-tier architecture.
Types of middleware
A classification system organizes the many types of middleware that are currently
available [10]. These classifications are based on scalability and recoverability:
• Remote Procedure Call — Client makes calls to procedures running on remote
systems. Can be asynchronous or synchronous.
• Message Oriented Middleware — Messages sent to the client are collected and
stored until they are acted upon, while the client continues with other processing.
• Object Request Broker — This type of middleware makes it possible for
applications to send objects and request services in an object-oriented system.
• SQL-oriented Data Access — middleware between applications and database
servers.
• Embedded Middleware — communication services and integration interface
software/firmware that operates between embedded applications and the real
time operating system.

Other sources include these additional classifications:
• Transaction processing monitors — Provides tools and an environment to
develop and deploy distributed applications.
• Application servers — software installed on a computer to facilitate the serving
(running) of other applications.
• Enterprise Service Bus — An abstraction layer on top of an Enterprise
Messaging System.

N-Tier Architecture
With the widespread growth of Internet bandwidth, enterprises around the world have
web-enabled their services. As a result, the application servers are no longer burdened
with the task of the presentation layer. This task is now off-loaded to the
specialized web servers that generate presentation content. This content is transferred
to the browser on the client tier, which takes care of rendering the user interfaces. The
application servers in n-tier architecture host remotely accessible business components.
These are accessed by the presentation layer web server over the network using native
protocols. Figure 4 shows the n-tier application.

The concepts of layer and tier are often used interchangeably. However, one fairly
common point of view is that there is indeed a difference, and that a layer is a logical
structuring mechanism for the elements that make up the software solution, while a tier
is a physical structuring mechanism for the system infrastructure.
A J2EE platform (and application) is a multitier system; we view the system in terms of
tiers. A tier is a logical partition of the separation of concerns in the system. Each tier is
assigned its unique responsibility in the system. We view each tier as logically separated
from one another. Each tier is loosely coupled with the adjacent tier. We represent the
whole system as a stack of tiers:



5 tiers of the J2EE Architecture
• Client Tier
This tier represents all device or system clients accessing the system or the
application. A client can be a Web browser, a Java or other application, a Java
applet, a WAP phone, a network application, or some device introduced in the
future. It could even be a batch process.
• Presentation Tier
This tier encapsulates all presentation logic required to service the clients that
access the system. The presentation tier intercepts the client requests, provides
single sign-on, conducts session management, controls access to business
services, constructs the responses, and delivers the responses to the client.
Servlets and JSP reside in this tier. Note that servlets and JSP are not
themselves UI elements, but they produce UI elements.
• Business Tier
This tier provides the business services required by the application clients. The
tier contains the business data and business logic. Typically, most business
processing for the application is centralized into this tier. It is possible that, due to
legacy systems, some business processing may occur in the resource tier.
Enterprise bean components are the usual and preferred solution for
implementing the business objects in the business tier.
• Integration Tier
This tier is responsible for communicating with external resources and systems
such as data stores and legacy applications. The business tier is coupled with
the integration tier whenever the business objects require data or services that
reside in the resource tier. The components in this tier can use JDBC, J2EE
connector technology, or some proprietary middleware to work with the resource
tier.


• Resource Tier
This is the tier that contains the business data and external resources such as
mainframes and legacy systems, business-to-business (B2B) integration
systems, and services such as credit card authorization.
Advantages of multi-tier architectures:
● better scalability
● higher fault tolerance
● higher throughput for less cost
Disadvantages
● too much middleware involved
● redundant functionality
● difficulty and cost of development

With a growing number of tiers one gains:


● flexibility
● functionality
● possibilities for distribution
but:
● each tier increases communication costs
● complexity rises
● higher complexity of management and tuning

2.4 Explain the benefits and drawbacks of rich clients and browser-based
clients as deployed in a typical Java EE application.

Ref. • [DESIGNING_ENTERPRISE_APPLICATIONS] Chapter 3.

Client Considerations
• Network Considerations
The client depends on the network, and the network is imperfect. Although the client
appears to be a stand-alone entity, it cannot be programmed as such because it is
part of a distributed application. Three aspects of the network:
○ Latency is non-zero.
○ Bandwidth is finite.
○ The network is not always reliable.
A well-designed enterprise application must address these issues, starting with the
client. The ideal client connects to the server only when it has to, transmits only as
much data as it needs to, and works reasonably well when it cannot reach the
server.
• Security Considerations
Different networks have different security requirements, which constrain how clients
connect to an enterprise. For example, when clients connect over the Internet, they
usually communicate with servers through a firewall. The presence of a firewall that
is not under your control limits the choices of protocols the client can use. Most
firewalls are configured to allow Hypertext Transfer Protocol (HTTP) to pass across,
but not Internet Inter-Orb Protocol (IIOP). This aspect of firewalls makes Web-based
services, which use HTTP, particularly attractive compared to RMI- or CORBA-based
services, which use IIOP.
Security requirements also affect user authentication. When the client and server are
in the same security domain, as might be the case on a company intranet,
authenticating a user may be as simple as having the user log in only once to obtain
access to the entire enterprise, a scheme known as Single Sign On. When the client
and server are in different security domains, as would be the case over the Internet,
a more elaborate scheme is required for single sign on, such as that proposed by the
Liberty Alliance.
• Platform Considerations
Every client platform's capabilities influence an application's design. For example, a
browser client cannot generate graphs depicting financial projections; it would need
a server to render the graphs as images, which it could download from the server. A
programmable client, on the other hand, could download financial data from a server
and render graphs in its own interface.
• Session and state management
• Input validation
Design Issues and Guidelines for Browser Clients
Browsers are the thinnest of clients; they display data to their users and rely on servers for
application functionality. From a deployment perspective, browser clients are attractive for a
couple of reasons. First, they require minimal updating. When an application changes, server-
side code has to change, but browsers are almost always unaffected. Second, they are
ubiquitous. Almost every computer has a Web browser and many mobile devices have a
microbrowser.
• Presenting the User Interface
Browsers have a couple of strengths that make them viable enterprise application
clients. First, they offer a familiar environment. Browsers are widely deployed and
used, and the interactions they offer are fairly standard. This makes browsers
popular, particularly with novice users. Second, browser clients can be easy to
implement. The markup languages that browsers use provide high-level abstractions
for how data is presented, leaving the mechanics of presentation and event-handling
to the browser.
The trade-off of using a simple markup language, however, is that markup
languages allow only limited interactivity. For example, HTML's tags permit
presentations and interactions that make sense only for hyperlinked documents. You
can enhance HTML documents slightly using technologies such as JavaScript in
combination with other standards, such as Cascading Style Sheets (CSS) and the
Document Object Model (DOM). However, support for these documents, also known
as Dynamic HTML (DHTML) documents, is inconsistent across browsers, so creating a
portable DHTML-based client is difficult.
Another, more significant cost of using browser clients is potentially low
responsiveness. The client depends on the server for presentation logic, so it must
connect to the server whenever its interface changes. Consequently, browser clients
make many connections to the server, which is a problem when latency is high.
Furthermore, because the responses to a browser intermingle presentation logic with
data, they can be large, consuming substantial bandwidth.
• Validating User Inputs
Consider an HTML form for completing an order, which includes fields for credit card
information. A browser cannot single-handedly validate this information, but it can
certainly apply some simple heuristics to determine whether the information is
invalid. For example, it can check that the cardholder name is not null, or that the
credit card number has the right number of digits. When the browser solves these
obvious problems, it can pass the information to the server. The server can deal with
more esoteric tasks, such as checking that the credit card number really belongs to
the given cardholder or that the cardholder has enough credit.
When using an HTML browser client, you can use the JavaScript scripting language,
whose syntax is close to that of the Java programming language. Be aware that
JavaScript implementations vary slightly from browser to browser; to accommodate
multiple types of browsers, use a subset of JavaScript that you know will work across
these browsers. (For more information, see the ECMAScript Language Specification.)
It may help to use JSP custom tags that autogenerate simple JavaScript that is
known to be portable.
Validating user inputs with a browser does not necessarily improve the
responsiveness of the interface. Although the validation code allows the client to
instantly report any errors it detects, the client consumes more bandwidth because it
must download the code in addition to an HTML form. For a non-trivial form, the
amount of validation code downloaded can be significant. To reduce download time,
you can place commonly-used validation functions in a separate source file and use
the script element's src attribute to reference this file. When a browser sees the
src attribute, it will cache the source file, so that the next time it encounters
another page using the same source file, it will not have to download it again.
Also note that implementing browser validation logic will duplicate some server-side
validation logic. The EJB and EIS tiers should validate data regardless of what the
client does. Client-side validation is an optimization; it improves user experience and
decreases load, but you should NEVER rely on the client exclusively to enforce data
consistency.
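The heuristics described above (a non-null cardholder name, the right number of digits) are just as easy to express on the server, where they must be repeated anyway. A plain-Java sketch follows; the Luhn checksum is a common extra sanity check and is not something this text mandates, and the real ownership and credit checks remain server-side tasks:

```java
public class CardHeuristics {

    // Rejects obviously malformed input before it reaches business logic.
    // This is only an optimization; the EJB and EIS tiers validate again.
    static boolean isPlausible(String holder, String number) {
        if (holder == null || holder.trim().isEmpty()) return false;
        if (number == null) return false;
        String digits = number.replaceAll("[ -]", "");
        if (!digits.matches("\\d{13,19}")) return false; // card numbers have 13-19 digits
        return luhnOk(digits);
    }

    // Luhn checksum: a standard quick sanity check, not proof of validity.
    static boolean luhnOk(String digits) {
        int sum = 0;
        boolean dbl = false; // double every second digit from the right
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            dbl = !dbl;
        }
        return sum % 10 == 0;
    }
}
```

The browser would apply the same checks in JavaScript; duplicating them here is exactly the overlap between client-side and server-side validation that the text describes.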
• Communicating with the Server
Browser clients connect to a J2EE application over the Web, and hence they use
HTTP as the transport protocol.
When using browser interfaces, users generally interact with an application by
clicking hyperlinked text or images, and completing and submitting forms. Browser
clients translate these gestures into HTTP requests for a Web server, since the server
provides most, if not all, of an application's functionality.
User requests to retrieve data from the server normally map to HTTP GET requests.
The URLs of the requests sometimes include parameters in a query string that
qualify what data should be retrieved.
User requests to update data on the server normally map to HTTP POST requests.
Each of these requests includes a MIME envelope of type application/x-www-
form-urlencoded, containing parameters for the update.
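Both a GET query string and an application/x-www-form-urlencoded POST body carry parameters in the same name=value&name=value encoding, which a servlet container decodes for you via request.getParameter(). For illustration only, the decoding itself can be sketched with java.net.URLDecoder:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FormDecoder {

    // Decodes "item=42&name=duke+mascot" style pairs, as found in a GET
    // query string or an application/x-www-form-urlencoded POST body.
    // In a real servlet this work is done by request.getParameter().
    static Map<String, String> parse(String encoded) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        if (encoded == null || encoded.isEmpty()) return params;
        try {
            for (String pair : encoded.split("&")) {
                int eq = pair.indexOf('=');
                String name = eq < 0 ? pair : pair.substring(0, eq);
                String value = eq < 0 ? "" : pair.substring(eq + 1);
                params.put(URLDecoder.decode(name, "UTF-8"),
                           URLDecoder.decode(value, "UTF-8"));
            }
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported");
        }
        return params;
    }
}
```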
After a server handles a client request, it must send back an HTTP response; the
response usually contains an HTML document. A J2EE application should use JSP
pages to generate HTML documents.
• Managing Conversational State
Because HTTP is a request-response protocol, individual requests are treated
independently. Consequently, Web-based enterprise applications need a mechanism
for identifying a particular client and the state of any conversation it is having with
that client.
The HTTP State Management Mechanism specification introduces the notion of a
session and session state. A session is a short-lived sequence of service requests by
a single user using a single client to access a server. Session state is the information
maintained in the session across requests. For example, a shopping cart uses session
state to track selections as a user chooses items from a catalog. Browsers have two
mechanisms for caching session state: cookies and URL rewriting.

Section 30. Introduction (7 pages)


○ A cookie is a small chunk of data the server sends for storage on the client.
Each time the client sends information to a server, it includes in its request
the headers for all the cookies it has received from that server. Cookie
support is inconsistent: some users disable cookies, some firewalls and
gateways filter them, and some browsers do not support them. Furthermore,
you can store only small amounts of data in a cookie; to be portable across
all browsers, you should use four kilobytes at most.
○ URL rewriting involves encoding session state within a URL, so that when
the user makes a request on the URL, the session state is sent back to the
server. This technique works almost everywhere, and can be a useful
fallback when you cannot use cookies. Unfortunately, pages containing
rewritten URLs consume much bandwidth. For each request the server
receives, it must rewrite every URL in its response (the HTML page), thereby
increasing the size of the response sent back to the client.
Both cookies and pages containing rewritten URLs are vulnerable to unauthorized
access. Browsers usually retain cookies and pages in the local file system, so any
sensitive information (passwords, contact information, credit card numbers, etc.)
they contain is vulnerable to abuse by anyone else who can access this data.
Encrypting the data stored on the client might solve this problem, as long as the
data is not intended for display.
Because of the limitations of caching session state on browser clients, these clients
should not maintain session state. Rather, servers should manage session state for
browsers. Under this arrangement, a server sends a browser client a key that
identifies session data (using cookies or URL rewriting), and the browser sends the
key back to the server whenever it wants to use the session data. If the browser
caches any information beyond a session key, it should be restricted to items like the
user's login and preferences for using the site; such items do not need to be
manipulated, and they can be easily stored on the client.
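As a rough sketch, the two mechanisms for transporting a session key reduce to simple header and string manipulation (illustrative only; in a real J2EE application the servlet container handles this through HttpSession, Set-Cookie headers, and HttpServletResponse.encodeURL):

```java
public class SessionKeyTransport {

    // Cookie mechanism: the server asks the browser to store the session key
    // and send it back on every subsequent request to this path.
    static String setCookieHeader(String sessionId) {
        return "Set-Cookie: JSESSIONID=" + sessionId + "; Path=/";
    }

    // URL rewriting mechanism: the key is encoded into every URL in the page,
    // so the browser returns it when the user follows a link.
    static String rewriteURL(String url, String sessionId) {
        return url + ";jsessionid=" + sessionId;
    }

    public static void main(String[] args) {
        String id = "A1B2C3";
        System.out.println(setCookieHeader(id));
        System.out.println(rewriteURL("/shop/cart", id));
    }
}
```

Either way, only the small key travels to the client; the session data itself stays on the server.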
Design Issues and Guidelines for Java Clients
Java clients can be divided into three categories: applications, applets, and MIDlets.
• Application Clients
Application clients execute in the Java 2 Runtime Environment, Standard Edition
(JRE). They are very similar to the stand-alone applications that run on traditional
desktop computers. As such, they typically depend much less on servers than do
browsers.
Application clients are packaged inside JAR files and may be installed explicitly on a
client's machine or provisioned on demand using Java Web Start technology.
Preparing an application client for Java Web Start deployment involves distributing
its JAR with a Java Network Launching Protocol (JNLP) file. When a user running Java
Web Start requests the JNLP file (normally by clicking a link in a Web browser), Java

Web Start automatically downloads all necessary files. It then caches the files so the
user can relaunch the application without having to download them again (unless
they have changed, in which case Java Web Start technology takes care of
downloading the appropriate files).
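A minimal JNLP file for such a deployment might look like the following sketch (the codebase URL, JAR name, and main class are placeholders, not part of the original text):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://example.com/app" href="client.jnlp">
  <information>
    <title>Order Entry Client</title>
    <vendor>Example Corp</vendor>
  </information>
  <resources>
    <j2se version="1.4+"/>
    <jar href="client.jar"/>
  </resources>
  <application-desc main-class="com.example.client.Main"/>
</jnlp>
```

Java Web Start reads this descriptor, fetches the listed JARs, and launches the named main class.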
• Applet Clients
Applet clients are user interface components that typically execute in a Web browser,
although they can execute in other applications or devices that support the applet
programming model. They are typically more dependent on a server than are
application clients, but are less dependent than browser clients.
Like application clients, applet clients are packaged inside JAR files. However, applets
are typically executed using Java Plug-in technology. This technology allows applets
to be run using Sun's implementation of the Java 2 Runtime Environment, Standard
Edition (instead of, say, a browser's default JRE).



• MIDlet Clients
MIDlet clients are small applications programmed to the Mobile Information Device
Profile (MIDP), a set of Java APIs which, together with the Connected Limited Device
Configuration (CLDC), provides a complete Java 2 Micro Edition (J2ME) runtime
environment for cellular phones, two-way pagers, and palmtops.
A MIDP application is packaged inside a JAR file, which contains the application's
class and resource files. This JAR file may be pre-installed on a mobile device or
downloaded onto the device (usually over the air). Accompanying the JAR file is a
Java Application Descriptor (JAD) file, which describes the application and any
configurable application properties.
• Presenting the User Interface
Java applet and application clients may use the Java Foundation Classes (JFC)/Swing
API, a comprehensive set of GUI components for desktop clients.
Implementing the user interface for a Java client usually requires more effort than
implementing a browser interface, but the benefits are substantial. First, Java
client interfaces offer a richer user experience; with programmable GUI components,
you can create more natural interfaces for the task at hand. Second, and perhaps
more importantly, full programmability makes Java clients much more responsive
than browser interfaces.
When a Java client and a browser client request the same data, the Java client
consumes less bandwidth. For example, when a browser requests a list of orders and
a Java client requests the same list, the response is larger for the browser because it
includes presentation logic. The Java client, on the other hand, gets the data and
nothing more.
Furthermore, Java clients can be programmed to make fewer connections than a
browser to a server.
• Validating User Inputs
Like presentation logic, input validation logic may also be programmed on Java
clients, which have more to gain than browser clients from client-side input
validation. Recall that browser clients have to trade off the benefit of fewer
connections (from detecting bad inputs before they get to the server) for the cost of
using more bandwidth (from downloading validation code from the server). In
contrast, Java clients realize a more responsive interface because they do not have
to download validation logic from the server.
With Java clients, it is straightforward to write input validation logic.
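For illustration, a hypothetical client-side check of a quantity field might look like the sketch below; as noted earlier, the server must still repeat the validation, since clients cannot be trusted to enforce data consistency:

```java
public class InputValidator {

    // Validate locally before contacting the server; the server-side
    // check must still be performed regardless of what the client does.
    static boolean isValidQuantity(String input) {
        if (input == null || input.trim().isEmpty()) {
            return false;
        }
        try {
            int qty = Integer.parseInt(input.trim());
            return qty > 0 && qty <= 100; // assumed business limit
        } catch (NumberFormatException e) {
            return false; // not a number at all
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidQuantity("5"));   // accepted locally
        System.out.println(isValidQuantity("-1"));  // rejected before any request
        System.out.println(isValidQuantity("abc")); // rejected before any request
    }
}
```

Because the logic lives in the client JAR, no validation code is downloaded per page, which is the responsiveness advantage described above.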
• Communicating with the Server
Java clients may connect to a J2EE application as Web clients (connecting to the Web
tier), EJB clients (connecting to the EJB tier), or EIS clients (connecting to the EIS

tier).
○ Web Clients
Like browser clients, Java Web clients connect over HTTP to the Web tier of a
J2EE application. This aspect of Web clients is particularly important on the
Internet, where HTTP communication is typically the only way a client can
reach a server. Many servers are separated from their clients by firewalls,
and HTTP is one of the few protocols most firewalls allow through.
Whereas browsers have built-in mechanisms that translate user gestures into
HTTP requests and interpret HTTP responses to update the view, Java clients
must be programmed to perform these actions. A key consideration when
implementing such actions is the format of the messages between client and
server.



Unlike browser clients, Java clients may send and receive messages in any
format.
Binary messages consume little bandwidth. This aspect of binary messages is
especially attractive in low-bandwidth environments (such as wireless and
dial-up networks), where every byte counts.
Java technologies for XML alleviate some of the burdens experienced with
binary messaging. These technologies, which include the Java API for XML
Processing (JAXP), automate the parsing and aid the construction of XML
messages.
A side benefit of using XML messages is that alternate clients are easier to
support, as XML is a widely-accepted open standard. You can use XML to
encode messages from a variety of clients. A C++ client, for example, could
use a SOAP toolkit to make remote procedure calls (RPC) to a J2EE
application.
Like browser clients, Java Web clients carry out secure communication over
HTTPS.
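As a sketch of JAXP-based message handling, the following parses a small XML response (the message structure is invented for illustration) using the JDK's own javax.xml.parsers API:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XmlMessageParser {

    // Parse an XML response message and extract one field from it.
    static String readStatus(String xml) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));
        // Pull the text content of the first <status> element.
        return doc.getElementsByTagName("status").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String response = "<order><id>42</id><status>shipped</status></order>";
        System.out.println(readStatus(response)); // prints "shipped"
    }
}
```

JAXP thus spares the client hand-written parsing code, which is the burden the text says XML technologies alleviate relative to ad hoc binary formats.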
○ EJB Clients
When using Web clients, you must write code for translating user gestures
into HTTP requests, HTTP requests into application events, event responses
into HTTP responses, and HTTP responses into view updates. On the other
hand, when using EJB clients, you do not need to write such code because
the clients connect directly to the EJB tier using Java Remote Method
Invocation (RMI) calls.
Unfortunately, connecting as an EJB client is not always possible. First, only
applet and application clients may connect as EJB clients. (At this time,
MIDlets cannot connect to the EJB tier because RMI is not a native
component of MIDP.) Second, RMI calls are implemented using IIOP, and
most firewalls usually block communication using that protocol. So, when a
firewall separates a server and its clients, as would be the case over the
Internet, using an EJB client is not an option. However, you could use an EJB
client within a company intranet, where firewalls generally do not intervene
between servers and clients.
When deploying an applet or application EJB client, you should distribute it
with a client-side container and install the container on the client machine.
This container (usually a class library) allows the client to access middle-tier
services (such as the JMS, JDBC, and JTA APIs) and is provided by the
application server vendor. However, the exact behavior for installing EJB
clients is not completely specified for the J2EE platform, so the client-side
container and deployment mechanisms for EJB clients vary slightly from
application server to application server.
Clients should be authenticated to access the EJB tier, and the client
container is responsible for providing the appropriate authentication
mechanisms.
○ EIS Clients
Generally, Java clients should not connect directly to a J2EE application's EIS
tier. EIS clients require a powerful interface, such as the JDBC API, to
manipulate data on a remote resource. When this interface is misused (by a
buggy client you have implemented or by a malicious client someone else
has hacked or built from scratch), your data can be compromised.
Furthermore, non-trivial EIS clients must implement business logic. Because
the logic is attached to the client, it is harder to share among multiple types
of clients.



In some circumstances, it may be acceptable for clients to access the EIS tier
directly, such as for administration or management tasks, where the user
interface is small or nonexistent and the task is simple and well understood.
For example, a simple Java program could perform maintenance on database
tables and be invoked every night through an external mechanism.
• Managing Conversational State
Whereas browser clients require a robust server-side mechanism for maintaining
session state, Java clients can manage session state on their own, because they can
cache and manipulate substantial amounts of state in memory. Consequently, Java
clients have the ability to work while disconnected, which is beneficial when latency
is high or when each connection consumes significant bandwidth.
To support a disconnected operation, a Java client must retrieve enough usable data
for the user before going offline. The initial cost of downloading such data can be
high, but you can reduce this cost by constraining what gets downloaded, by filtering
on user preferences, or by requiring users to enter search queries at the beginning of
each session. Many applications for mobile devices already use such strategies; they
also apply well to Java clients in general.
When Java clients manipulate enterprise data, they need to know about the model
and some or all of the business rules surrounding the data model. For example, in a
seat-booking application, the client must understand the concept of booked and
unbooked seats, and model that concept just like the server does. The client must also prevent
users from trying to select booked seats, enforcing a business rule also implemented
on the server. Generally, clients manipulating enterprise data must duplicate logic on
the server, because the server must enforce all business rules regardless of what its
clients do.
When Java clients manipulate enterprise data, applications need to implement data
synchronization schemes. For example, between the time when the user downloads
the seating plan and the time when the user decides what seats he or she wants to
buy, another user may buy some or all of those seats. The application needs rules
and mechanisms for resolving such a conflict. In this case, the server's data trumps
the client's data because whoever buys the tickets first - and hence updates the
server first - gets the tickets. The application could continue by asking the second
user if he or she wants the seats that the first user did not buy. Or, it could refresh
the second user's display with an updated seating plan and have the user pick seats
all over again.
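The conflict-resolution rule described above (the server's data trumps the client's) can be sketched as a set difference; the class and seat names are illustrative:

```java
import java.util.HashSet;
import java.util.Set;

public class SeatConflictResolver {

    // The server's data trumps the client's: any seat booked on the server
    // since the client downloaded the plan is removed from the selection.
    static Set<String> resolve(Set<String> clientSelection, Set<String> bookedOnServer) {
        Set<String> stillAvailable = new HashSet<String>(clientSelection);
        stillAvailable.removeAll(bookedOnServer);
        return stillAvailable;
    }

    public static void main(String[] args) {
        Set<String> wanted = new HashSet<String>();
        wanted.add("A1"); wanted.add("A2"); wanted.add("A3");
        Set<String> booked = new HashSet<String>();
        booked.add("A2"); // another user bought A2 in the meantime
        // The application can now offer the remaining seats
        // or refresh the plan and ask the user to pick again.
        System.out.println(resolve(wanted, booked));
    }
}
```

The synchronization policy itself (offer the remainder, or force a fresh selection) is an application design decision layered on top of this check.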

2.5 Explain appropriate and inappropriate uses for web services in the Java
EE platform.
Ref. • Book Excerpt: When to Use Web Services

Appropriate uses for web services in the Java EE platform:

1) Heterogeneous Integration
The first and most obvious bell ringer is the need to connect applications from
incompatible environments, such as Windows and UNIX, or .NET and J2EE. Web
services support heterogeneous integration. They support any programming language
on any platform. One thing that's particularly useful about Web services is that you can
use any Web services client environment to talk to any Web services server environment.



2) Multichannel Client Formats
Another bell ringer is the need to support many types of client formats, such as browser
clients, rich desktop clients, spreadsheets, wireless devices, interactive voice response
(IVR) systems, and other business applications. A Web service returns its results in
XML, and XML can be transformed into any number of formats to support different client
formats.
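This transformation step can be sketched with the JDK's javax.xml.transform API; the quote message and stylesheet below are invented for illustration, and swapping the stylesheet would yield HTML, WML, or any other client format from the same XML:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XmlToClientFormat {

    // One stylesheet per client format; this one renders a quote as plain text.
    static final String TEXT_STYLESHEET =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='text'/>"
      + "<xsl:template match='/quote'>"
      + "<xsl:value-of select='symbol'/>: <xsl:value-of select='price'/>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    // Apply the stylesheet to the XML returned by a Web service.
    static String formatQuote(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(TEXT_STYLESHEET)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(formatQuote(
            "<quote><symbol>SUNW</symbol><price>4.50</price></quote>"));
    }
}
```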

As shown in Figure 7-3, Wachovia's Einstein system was developed as a multitier Web service application.
The backend business functions and data sources are legacy applications implemented
in CICS and DB2 on the mainframe. The middle tier, which accesses and aggregates
the customer information, is implemented as a set of J2EE Web services using IBM
WebSphere. The client environments are implemented using Microsoft .NET. The
browser client is implemented using Microsoft .NET WebForms, and the desktop client is
implemented using Microsoft .NET WinForms. Einstein's architecture also allows
Wachovia to implement other types of client interfaces to support IVR systems, wireless
handsets, two-way pagers, and other devices.

3) Point-to-Point Integration
One of the most basic ways to use Web services is for simple point-to-point integration.
For example, Cape Clear uses Web services to connect employees' e-mail clients with
its CRM solution. Cape Clear is a Web services software startup. It uses Salesforce.com
as its CRM solution. Salesforce.com provides a hosted CRM solution using an ASP-style
model. Users typically interface with the CRM solution through a browser, recording
customer contact information and correspondence.

Like most software startups, Cape Clear provides e-mail-based customer support. As a
result, quite a bit of customer correspondence takes place via e-mail. But
Salesforce.com didn't provide a simple, easy way for Cape Clear employees to log this
correspondence in the Salesforce.com database. Users had to copy and paste the e-
mail from Outlook into the Salesforce.com browser interface. Cape Clear found that lots
of correspondence wasn't getting recorded.

Salesforce.com provides a programming API, so Cape Clear decided to eat its own dog
food and address this problem using Web services. First Cape Clear used Cape Clear



Studio to develop a Web service adapter for the Salesforce.com native programming
API. This adapter accepts a SOAP request and translates it into the Salesforce.com
native API. Next Cape Clear developed an Outlook macro using Microsoft SOAP Toolkit
that takes an Outlook e-mail and uses SOAP to pass it to the Salesforce Adapter Web
service. The Adapter service then passes the message to Salesforce.com using the
Salesforce API.

Figure 7-4: Cape Clear developed a VBA macro using Microsoft SOAP Toolkit that takes
an Outlook e-mail and uses SOAP to pass it to the Salesforce Adapter Web service. The
adapter service then passes the message to Salesforce.com using the Salesforce API

This Outlook macro adds a button to the standard Outlook tool bar labeled "Save to
Salesforce." As shown in Figure 7-4, when the user clicks on this button, the Outlook
macro captures the e-mail message, packages it as a SOAP message, and sends it to
the Salesforce.com adapter Web service. The Web service then forwards the e-mail
using the native API to Salesforce.com, which logs it.
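The SOAP message in this scenario is essentially an XML envelope around the e-mail payload. The sketch below builds one by hand; the body element names are invented, and a real integration would follow the target service's WSDL rather than free-form XML:

```java
public class SoapEnvelopeSketch {

    // Wrap a payload in a SOAP 1.1 envelope. The body content here is
    // illustrative only; real services define it in their WSDL.
    static String wrapInEnvelope(String bodyXml) {
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>" + bodyXml + "</soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        String payload = "<logEmail><subject>Support request</subject>"
                       + "<from>customer@example.com</from></logEmail>";
        System.out.println(wrapInEnvelope(payload));
    }
}
```

Toolkits such as the Microsoft SOAP Toolkit mentioned above generate and transmit this envelope automatically; the sketch only shows what travels over the wire.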

4) Consolidated View
One of the most popular internal integration projects is enabling a consolidated view of
information to make your staff more effective. For example, you probably have many
people in your organization who interact with customers. Each time your staffs interact
with the customer, you want to let them have access to all aspects of the customer
relationship. Unfortunately, the customer relationship information is probably maintained
in variety of systems. The good news is that a consolidated customer view provides a
single point of access to all these systems.

You can use Web services to implement this type of consolidated view. For example,
Coloplast is using Web services to improve its sales and customer support functions.
Coloplast is a worldwide provider of specialized healthcare products and services. As
part of an initiative to improve customer relationships, Coloplast wanted to set up a
state-of-the-art call center system that would give customer representatives real-time
access to complete customer histories and product information. The company selected
Siebel Call Center as the base application, but it needed to connect this system to its
backend AS/400-based ERP systems, which manage the sales, manufacturing, and

distribution functions. It did so using Web services. Coloplast used Jacada Integrator to
create Web services adapters for the legacy AS/400 application systems. Siebel Call
Center uses these Web services to deliver a 360-degree view of customer relationships,
including access to backend processes such as open order status, inventory information,
customer credit checking, and special pricing. This solution improves efficiency and
enhances employee and customer satisfaction.

5) Managing Legacy Assets


Web services can make it easier to manage and maintain your legacy application
assets. For example, AT&T estimates that Web services technology has reduced the
time it takes to make modifications to some of its oldest application systems by 78
percent.



TIRKS (Trunks Inventory Record Keeping System) is a critical application for AT&T.
TIRKS was first developed in the 1960s, and it is connected to more than 100 other
application systems. Because of the brittleness of these application connections, every
time AT&T makes a modification to TIRKS, it must also make a corresponding
modification to the other systems. Using Web services, AT&T has developed much more
flexible application connections that don't break every time a change is made to the
application. AT&T is using IONA XMLBus to replace the more than 100 brittle application
connections with a much smaller set of flexible, reusable Web APIs to TIRKS. Now each
modification to TIRKS no longer requires the associated changes in all the other
application systems.

6) Reducing Duplicative Applications


One of the more popular ways to use Web services is to reduce redundant applications.
A service can support many types of application clients. If you need to perform the same
type of function via multiple applications, it makes a lot of sense to develop a single
service shared by all these applications rather than duplicate the functionality in each
application.

Reduction of redundant applications is a key objective of the U.S. government's E-Gov


initiative. The U.S. government encompasses hundreds of federal agencies and
bureaus, and there is significant overlap and redundancy of systems across these
agencies. A 2001 study by the E-Gov Task Force analyzed the agencies to identify the
various business activities performed by the government. The study identified 30 general
lines of business, such as economic development, public safety, environmental
management, and tax collection. On average, each agency is involved in 17 lines of
business, and each line of business is performed by 19 agencies. Some lines of
business, such as payroll, travel, HR, procurement, logistics, administration, and finance,
are performed by every agency.

The U.S. government spent $48 billion on information technology in 2002 and will spend
$52 billion in 2003. The Office of Management and Budget estimates that the
government can save more than $1 billion annually in IT expenditures by aligning
redundant IT investments across federal agencies. In addition, this alignment will save
taxpayers several billion dollars annually by reducing operational inefficiencies,
redundant spending, and excessive paperwork.

In October 2001, the President's Management Council approved 24 high-payoff
government-wide initiatives that integrate agency operations and IT investments. One of
those initiatives is E-Travel, which is being run by the U.S. General Services
Administration (GSA). E-Travel delivers an integrated, government-wide, Web-based
travel management service. Federal government employees make approximately four
million air and rail trips each year, and until recently each agency and bureau managed
its own travel department. Cumulatively, these various departments used four travel
charge card providers, six online self-service reservation systems, 25 authorization and
voucher processing systems, 40 travel agencies, and a unique payment reimbursement
system for almost every bureau.
By consolidating these travel systems into a single, centralized travel management
system, the U.S. government expects to save $300 million annually, achieving a 649
percent return on investment. In addition, the consolidated system will deliver a 70
percent reduction in the time it takes to process vouchers and reimbursements.



GSA delivered the first phase of E-Travel in December 2002: an online self-service
reservation system. The total end-to-end travel management system is scheduled to be
complete by December 2003. The system will use a service-oriented architecture, based
on XML and Web services, to ensure easy integration with existing agency systems and
future adaptability. The E-Travel team refers to this architecture as "Velcro integration,"
indicating that modules can be easily replaced when necessary.

7) Managing Portal Initiatives


Web services can also be very useful as a means to manage and coordinate your portal
initiatives. A portal is an integrated, Web-based view into a host of application systems.
A portal contains a piece of application code (a portlet) for each backend application. A
portlet contains the code that talks to the backend application as well as the code that
displays the application in the portal.

Web services technology enhances portals in two ways. First, Web services deliver
content to the portal as XML. It's then easy for a portal engine to take this XML content
and display the information in a portal frame. It's also easy for the portal engine
to reformat the XML content to support other client devices, such as wireless handsets or
PDAs. Second, Web services technology defines a simple, consistent mechanism that
portlets can use to access backend applications. This consistency allows you to create a
framework to make it quicker and easier to add new content to your portal. Furthermore,
the new OASIS WSRP specification will allow you to add new content to the portal
dynamically. Figure 7-6 shows an overview of WSRP.
Another goal of the U.S. government's E-Gov program is to get a handle on government
portals. As of February 2003, the U.S. government was managing more than 22,000
Web sites with more than 35 million Web pages. These Web sites have been developed,
organized, and managed using the same stovepipe mentality as used in the backend
agency applications. Such decentralization and duplication make it difficult for citizens
and communities to do business with the government. For example, a community that is
attempting to obtain economic development grants must do a tremendous amount of
research to learn about federal grants. There's no single source of information. More
than 250 agencies administer grants, and you would have to file more than 1,000 forms
(most with duplicate information) to apply for all of them. Some of these forms are
available online; others aren't. Currently all forms must be filed by postal mail.




Figure 7-6: WSRP lets you add new content to a portal dynamically. A content provider
makes content available as a WSRP service, and the portal accesses the content using
SOAP and delivers the result in the appropriate markup format.

The government is working to consolidate this myriad of Web sites into a much more
manageable number of portals, each providing a single point of entry to a particular line
of business. Each portal will use Web services to access the backend applications that
implement the business process. In many cases the government will consolidate
backend applications to reduce redundant systems and to ensure a simpler experience
for the portal users. For example, the forthcoming E-Grants portal will provide a single
point of entry for anyone looking to obtain or administer federal grants. This site will help
citizens learn about all available grants and allow them to apply for these grants online.
The government expects to save $1 billion by simplifying grant administration as well as
saving $20 million in postage.

8) Collaboration and Information Sharing


Web services can make it easier for your employees to share information and
collaborate. For example, the University of Texas M.D. Anderson Cancer Center used
Web services to implement a shared information retrieval system called ClinicStation.

The center uses a unique collaborative approach to cancer treatment that makes it one
of the most respected cancer centers in the United States. Rather than rely on a single
physician to manage a patient's case, M.D. Anderson brings together a team of
multidisciplinary specialists to collaborate on the best treatment for each individual.
Such collaboration requires a means to dynamically share patient information, such as
the patient's chart, test results, x-rays, and other diagnostic images. Because the clinic
spans multiple buildings, it's inefficient to try to assemble everyone in the same room to
view physical images and discuss a course of treatment. Instead the clinical data is
digitized so that it can be viewed electronically. One challenge, though, is that this
clinical information is stored in 10 systems on a wide range of platforms. To bring all
these systems together, ClinicStation uses Web services built with Microsoft .NET to
provide access to all patient information from any browser throughout the center.
Physicians can now collaborate over the phone while looking at patient records online.



M.D. Anderson developed this application in nine months using three in-house
developers. The total hardware and software costs were less than $200,000. The center
expects to save $6 million over the next three years, largely through increased clinical
efficiency.

9) B2B Electronic Procurement


One of the most popular B2B applications is electronic procurement. Companies have
been using Electronic Data Interchange (EDI) to automate purchasing applications for
years. But EDI is expensive and often requires extensive customization. Web services
technology can reduce the cost and time required to create these B2B connections.

Premier Farnell uses Web services technology to implement a B2B Web procurement
system for its customers. Based in London, Premier Farnell is a small-order distributor of
electronic components and industrial products to the design, maintenance, and
engineering industries throughout Europe, North America, and Asia Pacific.

The Premier Farnell B2B trading solution, implemented using IONA Orbix E2A Web
Services Integration Platform, supports customers using any electronic procurement
system, including SAP, Oracle, Ariba, Commerce One, and custom systems. Even if
each of these systems sends a slightly different purchase order format, the Web service
can handle the situation. It automatically converts all incoming purchase orders into the
format required by the Premier Farnell systems.

10) Software-as-a-Service
You can also use Web services to provide a programmatic interface to a business
service that you license using the software-as-a-service business model. For the most
part, I'm leery of promoting the association of Web services and software-as-a-service.
Web services are Web APIs. You don't sell APIs. Instead you sell the business function
that customers access through the APIs. As I mentioned in Chapter 6, it's hard to be
successful using an ASP-style business model. Looking at history, we can see the
secrets to a successful ASP model:
• The service must be based on strategic intellectual property, something that your
customers can't easily do themselves.
• The service must provide a disruptive value proposition: a new and unique
advantage that's dependent on the service provider model, such as aggregate
information gained through collaboration.[1]
• The service provider must establish and maintain a reputation for neutrality and
trustworthiness.
• The service provider must devise a reasonable revenue model that is comfortable
for the customer.
My general take on software-as-a-service is that the business model must be viable on
its own without Web services. A Web API is simply a better way to provide programmatic
access to the service. If you think you have a new, viable idea for an ASP-style service,
then you should provide Web APIs for that service. As I mentioned earlier in this chapter,
Salesforce.com has added Web APIs to its already successful ASP model, making it



easier for clients to integrate their in-house systems with the hosted CRM solution. Now
let's look at another example.

Yahoo is an excellent example of a company that has been successful using the ASP
model. Yahoo is the world's leading aggregator of content. The vast majority of Yahoo's
clients access this content for free through the Yahoo public portal. As with most public
portals, Yahoo generates revenue through advertising. But Yahoo also licenses this
content to other businesses as a service. If you are a Yahoo enterprise service
customer, you can display Yahoo content in your corporate portal, and users can
personalize their corporate portal just as public users can personalize their
my.yahoo.com portal.

Yahoo is extending its enterprise software service offering by adding a set of Web APIs.
These Web APIs will let you integrate Yahoo content with your business applications.
For example, you could integrate Yahoo content with your CRM application. When a
salesperson looks up a customer contact in the contact management system, the
application can send a query to Yahoo to retrieve and display the latest headlines about
the customer. Although it's true that this information is available for free through the
Yahoo portal, there's an obvious value to being able to integrate news with a CRM
solution. Yahoo plans to license the Web APIs as part of the Yahoo enterprise service
offering. Yahoo may also license these APIs to CRM application providers to enable a
prepackaged Yahoo-ready solution.

When Not to Use Web Services


As much as I like Web services, I want to caution you that they aren't always the
appropriate solution. XML is tremendously versatile, but it isn't the most compact or
efficient mechanism for transferring data. A SOAP message is much bigger than a
comparable native binary message used with RPC, RMI, CORBA, or DCOM. It also
takes a lot more time to process an XML message than a binary message. Even with the
best-performing implementations, SOAP messaging can take 10 to 20 times longer than
RMI or DCOM.[2] The performance differential gets worse as the message grows in size
and complexity. You want to be cautious when trying to use Web services in situations
with stringent requirements for real-time performance. I don't recommend using Web
services to transfer very large files (> 10MB).
You shouldn't view SOAP as a total replacement for traditional middleware technologies,
such as RPC, RMI, CORBA, and DCOM. These technologies still have an important
place in application development. They were designed to provide a seamless, high-
performance mechanism to communicate among various components within a
homogeneous application system or service.
It's quite appropriate to use these technologies to build individual applications. For
example, if you were developing in Java, you would probably want to build the
application using J2EE component technologies, such as servlets and Enterprise
JavaBeans. These components communicate with each other using RMI. After you have
developed your application, you probably want to expose it to the rest of the world using
SOAP and WSDL. The traditional technologies weren't designed to integrate
heterogeneous application systems, and they certainly weren't designed to communicate
across the Internet. The point is that you want to use the right tool for the job. Web
services were designed to support heterogeneous integration and Internet
communication.

You might consider using SOAP in place of some proprietary messaging infrastructures,
particularly if you are using these technologies to perform simple point-to-point
integration. Keep in mind, though, that as of this writing, basic SOAP doesn't provide the
same level of reliability as message queuing software, nor does it give you inherent
notification facilities to support publish and subscribe functionality.[3]

A number of folks position Web services as the death knell for EAI (Enterprise
application integration, the integration of data between applications in a company)
software. My view is that if Web services can replace your EAI software, then EAI
software is overkill for your project. EAI software does many things that SOAP, WSDL,
and UDDI simply can't do by themselves. The software category known as EAI consists
of a collection of various types of software that work together to deliver a comprehensive
integration solution. EAI software includes messaging infrastructure, application
adapters, data extraction and transformation tools, message brokers, and rules engines.
Web services technology could replace the messaging infrastructure, but it can't replace
the rest of the pieces. These other pieces, particularly the application adapters, are
complementary to Web services technology. Most EAI vendors are adding support for
Web services.



3. Integration and Messaging (17 pages)

1. Explain possible approaches for communicating with an external system from a Java EE technology-based system given an outline description of those systems and outline the benefits and drawbacks of each approach.

2. Explain typical uses of web services and XML over HTTP as mechanisms to integrate distinct software components.

3. Explain how JCA (Java Connector Architecture) and JMS are used to integrate distinct software components as part of an overall Java EE application.

3.1 Explain possible approaches for communicating with an external system from a Java EE technology-based system given an outline description of those systems and outline the benefits and drawbacks of each approach.

Ref. • [SCEA-051]
• [.Net J2EE]
• Designing Enterprise Applications with the J2EE™ Platform, Second Edition

Introduction to Legacy Connectivity


Possible Communication Protocols


Before the JEE Connector Architecture (JCA) was defined, no specification for the Java
platform addressed the problem of providing a standard architecture for integrating an
EIS. We used JNI (Java Native Interface) and RMI (Remote Method Invocation) to
create a Java interface to a process running in its native domain. For example, a Java
program using JNI, RMI, or CORBA (Common Object Request Broker) can call a C++
program running on a Windows NT machine. Most EIS vendors as well as application
server vendors use nonstandard proprietary architectures to provide connectivity
between application servers and enterprise information systems that provide services
such as messaging, legacy database access, and mainframe transaction or batch
processing. Figure 6-1 illustrates the complexity of an EIS environment.


Legacy Connectivity Using Java: the Classic Approach


Thus far, the classic approach to legacy connectivity is based on the two-tier
client/server model, which is typical of applications that are not based on the web. With
this approach, an EIS provides an adapter that defines an application programming
interface (API) for accessing the data and functions of the EIS—basically, you “black-
box” the target system and create a Java API. A typical client application accesses data
and functions exposed by an EIS through this adapter interface. The client uses the
programmatic API exposed by the adapter to connect to and access the EIS. The
adapter implements the support for communication with the EIS and provides access to
EIS data and functions.
Communication between an adapter and the EIS typically uses a protocol specific to the
EIS. This protocol provides support for security and transactions, along with support for
content propagation from an application to the EIS. Most adapters expose an API to the
client that abstracts out the details of the underlying protocol and the distribution
mechanism between the EIS and the adapter. In most cases, a resource adapter is
specific to a particular EIS. However, an EIS may provide more than one adapter that a
client can use to access the EIS. Because the key to EIS adapters is their reusability,
independent software vendors try to develop adapters that employ a widely used
programming language to expose a client programming model that has the greatest
degree of reusability.

Using a Simple EIS Java Adapter


An EIS may provide a simple form of an adapter, where the adapter maps an API that is
specific to the EIS to a reusable, standard API. Often, such an adapter is developed as a
library, whereby the application developer can use the same programming language to
access the adapter as she uses to write the application, and no modifications are
required to the EIS. For example, a Java application developer can use a Java-based
adapter—an adapter written in the Java programming language—to access an EIS that
is based on some non-Java language or platform.
An EIS adapter may be developed as a C library.


For example, the code in Figure 6-2 illustrates a Java application that uses the JNI to
access this C library or C-based resource adapter. The JNI is the native programming
interface for Java, and it is part of the Java Development Kit (JDK). The JNI allows Java
code that runs within a Java Virtual Machine (JVM) to operate with applications and
libraries written in other languages, such as C and C++. Programmers typically use the
JNI to write native methods when they cannot write the entire application in Java. This is
the case when a Java application needs to access an existing library or application
written in another programming language. While the JNI was especially useful before the
advent of the JEE platform, some of its uses may now be replaced by the JEE
Connector Architecture. As you can see in Figure 6-2, the JNI to the resource adapter
enables the Java application to communicate with the adapter’s C library. While this
approach does work, it is complex to use. The Java application has to understand how
to invoke methods through the JNI. This approach also provides none of the JEE
support for transactions, security, and scalability. The developer is exposed to the
complexity of managing these system-level services, and must do so through the
complex JNI.
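To make the shape of this approach concrete, here is a minimal sketch of the Java side of such a JNI binding. The library name `eisadapter` and the `queryCustomer` method are invented for illustration, and the C implementation must be compiled separately against the EIS vendor's library, so this cannot run on its own:

```java
// Java side of a hypothetical JNI-based EIS adapter. The native method body
// lives in a separately compiled C library that wraps the EIS vendor's API.
public class EisGateway {
    static {
        // Loads libeisadapter.so (UNIX) or eisadapter.dll (Windows)
        // from java.library.path
        System.loadLibrary("eisadapter");
    }

    // Declared native: implemented in C and bound through the JNI
    public native String queryCustomer(String customerId);
}
```

Note that everything a JEE container would otherwise manage (transactions, security, connection pooling) must be handled by hand on the native side, which is exactly the complexity the paragraph above describes.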

Distributed EIS Adapters


Another, more complex, form of an EIS adapter might do its “adaptation” work across
diverse component models, distributed computing platforms, and architectures. For
example, an EIS may develop a distributed adapter that includes the ability to perform
remote communication with the EIS using Java RMI. This type of adapter exposes a
client programming model based on component-model architecture. Adapters use
different levels of abstraction and expose different APIs based on those abstractions,
depending on the type of the EIS. For example, with certain types of EISs, an adapter
may expose a remote function call API to the client application. If so, a client application
uses this remote function call API to execute its interactions with the EIS. An adapter
can expose either a synchronous or an asynchronous mode of communication between
the client applications and the EIS.

Legacy Connectivity Using JEE Connector Architecture


The evolving Java Connector Architecture (JCA) standard will obviate most of the need
to build JNI and RMI code by providing a mechanism to store and retrieve enterprise
data in JEE. The latest versions of many application servers, including BEA WebLogic
and IBM WebSphere, support JCA adapters for enterprise connectivity. Using JCA to
access an EIS is analogous to using JDBC to access a database. By using the JCA, EIS
vendors no longer need to customize their products for each application server.
Application server vendors who conform to the JCA need not add custom code
whenever they want to obtain connectivity to a new EIS.

Before JCA, each enterprise application integration (EAI) vendor created a proprietary
resource adapter interface for its own EAI product, requiring a resource adapter to be
developed for each EAI vendor and EIS combination (for instance, you need a SAP
resource adapter to use the messaging tools of Tibco). To solve that problem, as one of

its main thrusts, JCA attempts to standardize the resource adapter interfaces. The JCA
provides a Java solution to the problem of connectivity between the many application
servers and EISs already in existence. The JCA is based on the technologies that are
defined and standardized as part of JEE.
The JCA defines a standard architecture for connecting the JEE platform to
heterogeneous EISs. Examples of EISs include mainframe transaction processing, such
as IBM CICS; database systems, such as IBM DB2; and legacy applications not written
in the Java programming language, such as IBM COBOL. By defining a set of scalable,
secure, and transactional mechanisms, the JCA enables the integration of EISs with
application servers and enterprise applications.

The JCA enables a vendor to provide a standard resource adapter for its EIS. The
resource adapter is integrated with the application server, thereby providing connectivity
between the EIS and the enterprise application. An EIS vendor provides a standard
resource adapter that has the ability to plug into any application server that supports the
JCA. Multiple resource adapters, i.e., one per type of EIS, can be added into an
application server. This ability enables application components deployed on the
application server to access the underlying EISs. Figure 6-4 illustrates the JCA.


Resource Adapter
Deployable JCA components are called resource adapters. Basically, resource adapters
manage connections or other resources for interaction with some facility. The definition
is open ended, as resource adapters can be used for almost anything. A resource
adapter manifests itself as an implementation of interfaces in the javax.resource.cci and
javax.resource.spi packages. It will require a system-level software library when you are
accessing a resource that uses native libraries to connect to an EIS. EIS vendors,
middleware or application server vendors, or even end users of legacy systems provide
a resource adapter. A resource adapter implements the EIS adapter-side of the
connector system contracts. In JCA version 1.0, these contracts include connection
management, transaction management, and security. In JCA version 1.5, there are
additional contracts that we will discuss later in the chapter. A resource adapter also
provides a client-level API that applications use to access an EIS. The client-level API
can be the common client interface (CCI) or an API specific to the resource adapter or
the EIS. A resource adapter can also be used within an application server environment,
which is referred to as a managed environment. The application server interacts with the
resource adapter using the system contracts, while JEE components use the client API
to access the EIS. A resource adapter can also be used in a two-tier or nonmanaged
scenario. In a nonmanaged scenario, an application directly interacts with the resource
adapter, using both the system contracts and the client API to connect to the EIS.
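As a rough illustration of the client-level API, the following sketch shows a component calling an EIS through the CCI. It will not run standalone: the javax.resource.cci classes come from the Connector API jar, the JNDI name eis/MyEIS is hypothetical, and the InteractionSpec is always adapter-specific:

```java
// Illustrative CCI client sketch; requires a deployed resource adapter.
import javax.naming.InitialContext;
import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;
import javax.resource.cci.Interaction;
import javax.resource.cci.InteractionSpec;
import javax.resource.cci.MappedRecord;
import javax.resource.cci.Record;

public class CciClient {
    public Record callEis(InteractionSpec spec) throws Exception {
        // In a managed environment the factory is looked up via JNDI;
        // "eis/MyEIS" is a hypothetical resource reference
        ConnectionFactory cf = (ConnectionFactory)
                new InitialContext().lookup("java:comp/env/eis/MyEIS");
        Connection con = cf.getConnection();   // pooled by the app server
        try {
            Interaction ix = con.createInteraction();
            MappedRecord in = cf.getRecordFactory().createMappedRecord("CustomerQuery");
            in.put("customerId", "42");
            return ix.execute(spec, in);       // drives the EIS function
        } finally {
            con.close();                       // returns the connection to the pool
        }
    }
}
```

The same code works unchanged whether the adapter fronts an ERP system or a mainframe transaction monitor; only the InteractionSpec and record names differ, which is the portability argument the text makes for the CCI.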

Distributed Object Frameworks


The current distributed object frameworks are CORBA, RMI, DCOM, and EJB. The EJB
specification is intended to support compliance with the range of CORBA standards,
current and proposed. The two technologies can function in a complementary manner.
CORBA provides a great standards-based infrastructure on which to build EJB
containers. The EJB framework makes it easier to build an application on top of a
CORBA infrastructure. Additionally, the recently released CORBA components
specification refers to EJB as the architecture when building CORBA components in
Java.

CORBA
CORBA is a language independent, distributed object model specified by the OMG. This
architecture was created to support the development of object-oriented applications
across heterogeneous computing environments that might contain different hardware
platforms and operating systems. CORBA relies on IIOP for communications between
objects. The center of the CORBA architecture lies in the Object Request Broker (ORB).
The ORB is a distributed programming service that enables CORBA objects to locate
and communicate with one another. CORBA objects have interfaces that expose sets of
methods to clients. To request access to an object's method, a CORBA client acquires
an object reference to a CORBA server object. Then the client makes method calls on
the object reference as if the CORBA object were local to the client. The ORB finds the
CORBA object and prepares it to receive requests, to communicate requests to it, and
then to communicate replies back to the client. A CORBA object interacts with ORBs
either through an ORB interface or through an Object Adapter.

Native Language Integration
By using IIOP, EJBs can interoperate with native language clients and servers. IIOP
facilitates integration between CORBA and EJB systems. EJBs can access CORBA
servers, and CORBA clients can access EJBs. Also, if a COM/CORBA internetworking
service is used, ActiveX clients can access EJBs, and EJBs can access COM servers.
Eventually there may also be a DCOM implementation of the EJB framework.

Java/RMI

Since a Bean’s remote and home interfaces are RMI compliant, they can interact with
CORBA objects via RMI/IIOP, Sun and IBM’s adaptation of RMI that conforms to the
CORBA-standard IIOP protocol. The Java Transaction API (JTA), which is the
transaction API prescribed by the EJB specification for bean-managed transactions, was
designed to be well integrated with the OMG Object Transaction Service (OTS)
standard. Java/RMI relies on a protocol called the Java Remote Method Protocol
(JRMP). Java relies heavily on Java Object Serialization, which allows objects to be
marshaled (or transmitted) as a stream. Since Java Object Serialization is specific to
Java, both the Java/RMI server object and the client object have to be written in Java.
Each Java/RMI server object defines an interface, which can be used to access the
server object outside of the current JVM and on another machine’s JVM. The interface
exposes a set of methods, which are indicative of the services offered by the server
object. For a client to locate a server object for the first time, RMI depends on a naming
mechanism called an RMIRegistry that runs on the server machine and holds
information about available server objects. A Java/RMI client acquires an object
reference to a Java/RMI server object by performing a lookup for a server object
reference and invokes methods on the server object as if the Java/RMI server object
resided in the client’s address space. Java/RMI server objects are named using URLs,
and for a client to acquire a server object reference, it should specify the URL of the
server object as you would with the URL to an HTML page. Since Java/RMI relies on
Java, it also can be used on diverse operating system platforms from IBM mainframes to
UNIX boxes to Windows machines to hand-held devices, as long as a JVM
implementation exists for that platform.
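The lookup-and-invoke cycle described above can be demonstrated end to end inside a single JVM using only the JDK's java.rmi package. The Echo interface, the EchoService name, and the in-process registry are invented for this example:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiDemo {
    // A remote interface: extends Remote, every method throws RemoteException
    public interface Echo extends Remote {
        String echo(String msg) throws RemoteException;
    }

    static class EchoImpl implements Echo {
        public String echo(String msg) { return "echo:" + msg; }
    }

    public static String runDemo() throws Exception {
        // Server side: export the object and bind its stub in a registry
        // (port 0 lets the JVM pick an anonymous port)
        Registry registry = LocateRegistry.createRegistry(0);
        Echo stub = (Echo) UnicastRemoteObject.exportObject(new EchoImpl(), 0);
        registry.rebind("EchoService", stub);

        // Client side: look the stub up by name and invoke it as if it were local
        Echo client = (Echo) registry.lookup("EchoService");
        return client.echo("hello");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

Because the stub implements the same interface as the server object, the call site is indistinguishable from a local invocation, which is the point the paragraph above makes about the client's address space.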

Distributed Component Object Model (DCOM)


DCOM supports remote objects by running on a protocol called the Object Remote
Procedure Call (ORPC). This ORPC layer is built on top of Distributed Computing
Environment’s (DCE) Remote Procedure Call (RPC) and interacts with Component
Object Model’s (COM) runtime services. A DCOM server is a body of code that is
capable of serving up objects of a particular type at runtime. Each DCOM server object
can support multiple interfaces, each representing a different behavior of the object. A
DCOM client calls into the exposed methods of a DCOM server by acquiring a pointer to
one of the server object’s interfaces. The client object then starts calling the server
object’s exposed methods through the acquired interface pointer as if the server object
resided in the client’s address space. As specified by COM, a server object’s memory
layout conforms to the C++ vtable layout. Since the COM specification is at the binary
level, it allows DCOM server components to be written in diverse programming
languages such as C++, Java, Object Pascal (Delphi), Visual Basic, and even COBOL.
As long as a platform supports COM services, DCOM can be used on that platform.
DCOM is now heavily used on the Windows platform.

To address the EIS integration problem, the J2EE platform provides the following EIS
integration technologies:

• J2EE Connector architecture--The J2EE Connector architecture provides a
standard architecture for integrating J2EE applications with existing EISs and
applications. The Connector architecture enables adapters for external EISs to be
plugged into the J2EE application server. Enterprise applications can then be
developed using these adapters to support and manage secure, transactional, and
scalable integration with EISs. The 1.0 version of the Connector architecture
focuses on synchronous integration with EISs. The 2.0 version (under
development as part of J2EE 1.4) extends the core functionality to add support for
asynchronous integration with EISs.
• Java Message Service (JMS)--JMS is a standard Java API defined for enterprise
messaging systems. It is meant to be a common messaging API that can be used
across different types of messaging systems. A Java application uses the JMS API
to connect to an enterprise messaging system. Once connected, the application
uses the facilities of the underlying enterprise messaging system (through the API)
to create messages and to communicate asynchronously with one or more peer
applications.
• JDBCTM API--The JDBC API defines a standard Java API for integration with
relational database systems. A Java application uses the JDBC API for obtaining a
database connection, retrieving database records, executing database queries and
stored procedures, and performing other database functions.
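To give the JMS bullet some shape, here is a sketch of a point-to-point sender. It requires a JMS provider and its JNDI configuration, so it will not run standalone; jms/QCF and jms/OrderQueue are hypothetical administered-object names:

```java
// Hypothetical point-to-point JMS sender; requires a deployed JMS provider.
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderSender {
    public void sendOrder(String orderXml) throws Exception {
        InitialContext ctx = new InitialContext();
        // Administered objects are looked up, never constructed directly
        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/QCF");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

        QueueConnection con = qcf.createQueueConnection();
        try {
            QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            TextMessage msg = session.createTextMessage(orderXml);
            sender.send(msg);   // the receiver may consume this at any later time
        } finally {
            con.close();
        }
    }
}
```

The sender never learns who (if anyone) consumes the message, which is what makes JMS asynchronous in contrast to the connection-oriented Connector and JDBC APIs above.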

3.2 Explain typical uses of web services and XML over HTTP as mechanisms to integrate distinct software components.

Ref. • [web services J2EE]
• [.Net J2EE]
• SOAP

Web Services Architecture Overview

Web services are based on a service-oriented architecture, which allows for creating an
abstract definition of a service, providing a concrete implementation of a service, publishing
and finding a service, selecting a service instance, and using services interoperably. In
general, a Web service implementation and client use may be decoupled in a variety of ways.

Client and server implementations can be decoupled in programming model. Concrete
implementations may be decoupled in logic and transport.

The service provider defines an abstract service description using the Web Services
Description Language (WSDL). A concrete Service is then created from the abstract
service description yielding a concrete service description in WSDL. The concrete
service description can then be published to a registry such as Universal Description,
Discovery and Integration (UDDI). A service requestor can use a registry to locate a
service description and from that service description select and use a concrete
implementation of the service.
The abstract service description is defined in a WSDL document as a PortType. A
concrete Service instance is defined by the combination of a PortType, transport &
encoding binding and an address as a WSDL port. Sets of ports are aggregated into a
WSDL service.

Web Service

There is no commonly accepted definition for a Web service. For the purposes of this
specification, a Web service is defined as a component with the following characteristics:
• A service implementation implements the methods of an interface that is
describable by WSDL. The methods are implemented using a Stateless Session
EJB or JAX-RPC web component.
• A Web service may have its interface published in one or more registries for Web
services during deployment.
• A Web Service implementation, which uses only the functionality described by this
specification, can be deployed in any Web Services for J2EE compliant
application server.
• A service instance, called a Port, is created and managed by a container.

• Run-time service requirements, such as security attributes, are separate from the
service implementation. Tools can define these requirements during assembly or
deployment.
• A container mediates access to the service.
JAX-RPC defines a programming model mapping of a WSDL document to Java which
provides a factory (Service) for selecting which aggregated Port a client wishes to use.
See Figure 2 for a logical diagram. In general, the transport, encoding, and address of
the Port are transparent to the client. The client only needs to make method calls on the
Service Endpoint Interface, as defined by JAX-RPC, (i.e. PortType) to access the
service. See Chapter 4 for more details.
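In code, the client view just described might look like the following sketch. The WeatherService and WeatherPort names stand in for artifacts a WSDL-to-Java tool would generate, so nothing here is runnable without those generated classes:

```java
// Hypothetical JAX-RPC client: all names are placeholders for generated artifacts.
public class WeatherClient {
    public String forecastFor(String city) throws Exception {
        WeatherService service = new WeatherService_Impl(); // generated Service factory
        WeatherPort port = service.getWeatherPort();        // selects one WSDL port
        // The client codes only against the Service Endpoint Interface;
        // the port's transport, encoding, and address stay hidden behind the stub.
        return port.getForecast(city);
    }
}
```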

Understanding Web Services


In a typical Web services scenario, a client application can learn about what functionality
a Web service provides and how to call this functionality by querying the service’s WSDL
file. Next, the client sends a request to the service at its given URL using the SOAP
protocol over HTTP. The service receives the request, processes it, and returns a
response. The request and the response are XML formatted using the SOAP protocol.
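That request/response exchange ultimately rests on nothing more than XML carried over HTTP. The following self-contained sketch, using only JDK classes, stands up a toy endpoint and a client in one process; the /quote path and the quote document are invented, and no SOAP framing is involved:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class XmlOverHttpDemo {
    public static String runDemo() throws Exception {
        // "Service" side: a toy endpoint that answers any POST with an XML document
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/quote", exchange -> {
            byte[] body = "<quote symbol=\"SUNW\"><price>4.20</price></quote>"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();

        // "Client" side: POST an XML request and read the XML reply over plain HTTP
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://localhost:" + port + "/quote"))
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString("<quoteRequest symbol=\"SUNW\"/>"))
                .build();
        String reply = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        server.stop(0);
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

Because both sides agree only on HTTP and well-formed XML, either one could be replaced by a component written on a completely different platform, which is the integration property this section is about.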

It is worth examining the protocols and specifications (or stack) that make Web services
possible. The Web services stack consists of five layers, as Figure 4.8 illustrates.


These layers consist of the following elements arranged bottom-up first:
1. Transport (HTTP)
2. Encoding (XML)
3. Standard structure (SOAP)
4. Description (WSDL)
5. Discovery (UDDI)
The next sections describe each of these elements.

1) Transport (HTTP)
At the lowest level, two components in a distributed architecture must agree on a
common transport mechanism. Because of the near universal acceptance of port 80 as
a less risky route through a firewall, HTTP became the standard for the transport layer.
However, Web services implementations can run on other transport protocols such as
FTP and SMTP, or even other network stacks, such as Sequenced Packet Exchange
(SPX) or non-routable protocols such as NetBEUI. Changing from the dependence on
HTTP or HTTPS (for encrypted connections) is possible within the bounds of the current
specification.
2) Encoding (XML)
After agreeing on the transport, components must deliver messages as correctly
formatted XML documents. This XML dependence ensures the success of the transfer,
because both provider and consumer know to parse and interpret the XML standard.

3) Standard Structure (SOAP)


Although XML defines message encoding, it does not cover the structure and format of
the document itself. To guarantee interoperability, both provider and consumer must
know what to send and what to expect. SOAP is a lightweight, message-based protocol
built on XML (XSD version 2) and standard Internet protocols, such as HTTP and SMTP.
The SOAP protocol specification defines an XML structure for messages (the SOAP
envelope), data type definitions, and a set of conventions that implement remote
procedure calls and the format of any returned data (the SOAP body).
SOAP, originally defined as Simple Object Access Protocol, is a protocol specification
for exchanging structured information in the implementation of Web Services in
computer networks. It relies on Extensible Markup Language (XML) as its message
format, and usually relies on other Application Layer protocols (most notably Remote
Procedure Call (RPC) and HTTP) for message negotiation and transmission. SOAP can
form the foundation layer of a web services protocol stack, providing a basic messaging
framework upon which web services can be built.

As a layman's example of how SOAP procedures can be used, a SOAP message could
be sent to a web service enabled web site (for example, a house price database) with
the parameters needed for a search. The site would then return an XML-formatted
document with the resulting data (prices, location, features, etc). Because the data is
returned in a standardized machine-parseable format, it could then be integrated directly
into a third-party site.

SOAP structure
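To make the envelope/body structure concrete, the following self-contained snippet builds a SOAP 1.1 message for the house-price example with the JDK's DOM API. The urn:example:houses namespace and the findPrices payload are invented for illustration:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class SoapEnvelopeDemo {
    // SOAP 1.1 envelope namespace
    static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

    public static String buildEnvelope() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        // Envelope and Body live in the SOAP namespace
        Element envelope = doc.createElementNS(SOAP_NS, "soap:Envelope");
        doc.appendChild(envelope);
        Element body = doc.createElementNS(SOAP_NS, "soap:Body");
        envelope.appendChild(body);
        // The payload: a hypothetical house-price query in an application namespace
        Element query = doc.createElementNS("urn:example:houses", "h:findPrices");
        Element town = doc.createElementNS("urn:example:houses", "h:town");
        town.setTextContent("Springfield");
        query.appendChild(town);
        body.appendChild(query);

        // Serialize the DOM tree back to XML text
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildEnvelope());
    }
}
```

A service reply would come back in the same shape, with the result document sitting inside the Body element in place of the query.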

4) Description (WSDL)
The description layer provides a mechanism for informing interested parties of the
particular bill of fare that a Web service offers. Web Services Description Language
(WSDL) provides this contract, setting out for each exposed component:
• Component name
• Data types
• Methods
• Parameters
This WSDL description enables a developer for a remote component to query your Web
service and find out what the service can do and how to get it to do it. The WSDL file is
an XSD-based XML document that defines the details of your Web service. It also stores
your Web service contract. The WSDL file is usually the first point of entry for any client
attaching to your Web service so that the client knows how to use it.
5) Discovery (UDDI)
Discovery attempts to answer the question “Where?” If you want to connect to a Web
service at an Internet location (for example,
www.nwtraders.msft/services/WeatherService.aspx), you can enter the URL manually.
However, URLs are somewhat unwieldy and not very user friendly, so it would be better
if you could just request the NWTraders Weather Web Service. To do this, NWTraders
could publish their weather service on a Universal Description, Discovery and Integration
(UDDI) server. Finding their weather service is now just a question of connecting to the
UDDI server using an agreed message format to locate the URL for the service.

Figure 4.9 shows how the basic architectural elements of a typical Web service work
together.

Refer to Objective 5.2 for more information.

3.3 Explain how JCA (Java Connector Architecture) and JMS are used to integrate distinct software components as part of an overall Java EE application.

Ref. • [DESIGNING_ENTERPRISE_APPLICATIONS] Sections 6.2.1, 6.2.2.

J2EE Connector Architecture
The J2EE Connector architecture is the standard architecture for integrating J2EE products and
applications with heterogeneous enterprise information systems. The Connector architecture
enables an EIS vendor to provide a standard resource adapter for its enterprise information
system. Because a resource adapter conforms to the Connector architecture specification, it
can be plugged into any J2EE-compliant application server to provide the underlying
infrastructure for integrating with that vendor's EIS. The EIS vendor is assured that its adapter
will work with any J2EE-compliant application server. The J2EE application server, because of
its support for the Connector architecture, is assured that it can connect to multiple EISs.
The J2EE application server and EIS resource adapter collaborate to keep all system-level
mechanisms - transactions, security, connection management - transparent to the application
components. This enables an application component developer to focus on a component’s
business and presentation logic without getting involved in the system-level issues related to
EIS integration.
Through its contracts, the J2EE Connector architecture establishes a set of programming
design guidelines for EIS access. The J2EE Connector architecture defines two types of
contracts: system and application level. The system-level contracts exist between a J2EE
application server and a resource adapter. An application-level contract exists between an
application component and a resource adapter.

The application-level contract defines the client API that an application component uses for EIS
access. The Connector architecture does not require that an application component use a
specific client API. The client API may be the Common Client Interface (CCI), which is an API
for accessing multiple heterogeneous EISs, or it may be an API specific to the particular type
of resource adapter and its underlying EIS. There are advantages to using CCI, principally that
tool vendors can build their tools on top of this API. Although the CCI is targeted primarily
towards application development tools and EAI vendors, it is not intended to discourage
vendors from using JDBC APIs. An EAI vendor will typically combine JDBC with CCI by using
the JDBC API to access relational databases and using CCI to access other EISs.
The system-level contracts define a "pluggability" standard between application servers and
EISs. By developing components that adhere to these contracts, an application server and an
EIS know that connecting is a straightforward operation of plugging in the resource adapter.
The EIS vendor or resource adapter provider implements its side of the system-level contracts
in a resource adapter, which is a system library specific to the EIS. The resource adapter is the
component that plugs into an application server. Examples of resource adapters include an
adapter that connects to an ERP system and one that connects to a mainframe transaction
processing system.
There is also an interface between a resource adapter and its particular EIS. This interface is
specific to the EIS, and it may be a native interface or some other type of interface. The
Connector architecture does not define this interface.
The Connector architecture defines the services that the J2EE-compliant application server
must provide. These services - transaction management, security, and connection pooling -
are delineated in the three Connector system-level contracts. The application server may
implement these services in its own specific way. The three system contracts, which together
form a Service Provider Interface (SPI), are as follows:
• Connection management contract - This contract enables an application server to
pool connections to an underlying EIS, while at the same time it enables application
components to connect to an EIS. Pooling connections is important to create a
scalable application environment, particularly when large numbers of clients require
access to the underlying EIS.

• Transaction management contract - This contract is between the application
server's transaction manager and an EIS that supports transactions. It gives the
transaction manager the ability to manage transactions across multiple EIS resource
managers. (A resource manager provides access to a set of shared resources.) The
contract also supports local transactions, which are transactions that an EIS resource
manager handles internally.
• Security contract - The security contract enables secure access to an EIS and
protects the EIS-managed resources.
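The pluggability idea behind these contracts can be illustrated with a deliberately simplified sketch. The type names below (ResourceAdapter, ErpAdapter, AppServer) are hypothetical and far smaller than the real javax.resource SPI; the point is only that the server codes against a contract, so any compliant adapter plugs in without server changes.

```java
import java.util.HashMap;
import java.util.Map;

// Deliberately simplified sketch of Connector-style pluggability.
// These types are hypothetical illustrations, not the javax.resource SPI.
public class ConnectorSketch {

    // The "system contract" an adapter must implement to plug into the server.
    interface ResourceAdapter {
        String eisName();
        String openConnection();   // real adapters return managed connections
    }

    // One vendor's adapter, specific to its EIS (here, a pretend ERP system).
    static class ErpAdapter implements ResourceAdapter {
        public String eisName() { return "ERP"; }
        public String openConnection() { return "connection to ERP"; }
    }

    // The application server only knows the contract, so any compliant
    // adapter can be plugged in without changing the server.
    static class AppServer {
        private final Map<String, ResourceAdapter> adapters = new HashMap<>();
        void plugIn(ResourceAdapter adapter) { adapters.put(adapter.eisName(), adapter); }
        String connectTo(String eis) { return adapters.get(eis).openConnection(); }
    }

    public static void main(String[] args) {
        AppServer server = new AppServer();
        server.plugIn(new ErpAdapter());
        System.out.println(server.connectTo("ERP")); // prints: connection to ERP
    }
}
```

In the real architecture the contract additionally covers connection pooling, transactions, and security; the shape of the collaboration, however, is the same.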

Java Message Service API


The Java Message Service (JMS) API is a standard Java API defined for enterprise messaging
systems. It is a common messaging API that can be used across different types of messaging
systems. A Java application uses the JMS API to connect to an enterprise messaging system.
Once connected, the application uses the facilities of the underlying enterprise messaging
system (through the API) to create messages and to communicate asynchronously with one or
more peer applications.
A JMS provider implements the JMS API for an enterprise messaging system and provides
access to the services provided by the underlying message system. Application server vendors
include a JMS provider with the application server. Currently, vendors plug a JMS provider into
an application server in their own vendor-specific manner. The 2.0 version of the Connector
architecture defines a standard for plugging a JMS provider into an application server, allowing a
JMS provider to be treated similarly to a resource adapter. However, a JMS provider will have
a JMS API as a client API for its underlying enterprise messaging system.
A client application, called a JMS client, uses the JMS API to access the asynchronous
messaging facilities provided by the enterprise messaging system. The EJB tier is the best
place to implement JMS clients in J2EE applications. Since JMS supports peer-to-peer
messaging, both source (or producer) and destination (or consumer) applications act as clients
to the JMS provider.
A JMS domain identifies the type of asynchronous message-based communication supported
by a JMS provider and an enterprise messaging system. There are two domain types: queue-
based point-to-point domains and publish-subscribe domains. A Java application using JMS
has a different application programming model depending on the JMS domain. For example, a
Java application uses the JMS-defined queue-based interfaces QueueConnectionFactory
and Queue, among other queue-based interfaces, to interact with a point-to-point
domain.
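The point-to-point flow can be sketched with a plain-Java analogue. The class names below are hypothetical stand-ins for the real JMS interfaces (QueueConnectionFactory, QueueSender, QueueReceiver, Queue), and an in-memory BlockingQueue plays the provider's role, so the sketch runs without a JMS provider; only the delivery semantics are the point.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Plain-Java analogue of JMS point-to-point messaging. These class names
// are hypothetical stand-ins, not the real javax.jms API.
public class PointToPointSketch {

    // Plays the role of the provider-managed queue destination.
    static class InMemoryQueue {
        final BlockingQueue<String> messages = new ArrayBlockingQueue<>(16);
    }

    // Analogue of a QueueSender: the producer side of the domain.
    static class Producer {
        private final InMemoryQueue queue;
        Producer(InMemoryQueue queue) { this.queue = queue; }
        void send(String text) throws InterruptedException {
            queue.messages.put(text);   // asynchronous: no reply is awaited
        }
    }

    // Analogue of a QueueReceiver: each message goes to exactly one consumer.
    static class Consumer {
        private final InMemoryQueue queue;
        Consumer(InMemoryQueue queue) { this.queue = queue; }
        String receive() throws InterruptedException {
            return queue.messages.poll(1, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        InMemoryQueue queue = new InMemoryQueue();
        new Producer(queue).send("order-42");
        // Point-to-point: the first receiver consumes the message ...
        System.out.println(new Consumer(queue).receive()); // prints order-42
        // ... and a second receiver finds the queue empty (null after timeout).
        System.out.println(new Consumer(queue).receive()); // prints null
    }
}
```

In a publish-subscribe domain, by contrast, every subscriber to the topic would receive its own copy of the message.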
The EIS integration approaches may be classified as shown below:

• Data integration using the JDBC API (for relational databases) or Connector
architecture (for non-relational databases)
• Asynchronous, message-based, loosely-coupled integration using the JMS and J2EE
Connector architecture
• Synchronous, tightly-coupled integration using the Connector architecture
• Legacy connectivity using the Connector architecture



4. Business Tier Technologies (32 pages)

1. Explain and contrast uses for entity beans, entity classes, stateful and stateless session beans, and message-driven beans, and understand the advantages and disadvantages of each type.

2. Explain and contrast the following persistence strategies: container-managed persistence (CMP), BMP, JDO, JPA, ORM, and using DAOs (Data Access Objects) and direct JDBC technology-based persistence, under the following headings: ease of development, performance, scalability, extensibility, and security.

3. Explain how Java EE supports the deployment of server-side components implemented as web services and the advantages and disadvantages of adopting such an approach.

4. Explain the benefits of the EJB 3 development model over previous EJB generations for ease of development, including how the EJB container simplifies EJB development.

4.1 Explain and contrast uses for entity beans, entity classes, stateful and stateless session beans, and message-driven beans, and understand the advantages and disadvantages of each type.
Ref.:
• [EJB_3.0_CORE]
• [J2EE Tutorial] Ch. 20 & 24

The characteristics of the Enterprise JavaBeans Technology for version 3 of the EJB specifications are that:

• The objects they implement contain business logic that operates on the enterprise’s data.
• The objects they implement are managed at runtime by a container.
• The objects they implement can be customized at deployment time by editing their environment entries.
• For the objects they implement, various service information, such as transaction and security attributes, may be specified together with the business logic of the enterprise bean class in the form of metadata annotations, or separately, in an XML deployment descriptor. This service information may be extracted and managed by tools during application assembly and deployment.
• The client access is mediated by the container in which the enterprise bean is
deployed.

• If an enterprise bean uses only the services defined by the EJB specification, the
enterprise bean can be deployed in any compliant EJB container. Specialized
containers can provide additional services beyond those defined by the EJB
specification. An enterprise bean that depends on such a service can be
deployed only in a container that supports that service.
• An enterprise bean can be included in an assembled application without requiring
source code changes or recompilation of the enterprise bean.
• The Bean Provider defines a client view of an enterprise bean. The Bean
Provider can manually define the client view or it can be generated automatically
by application development tools. The client view is unaffected by the container
and server in which the bean is deployed. This ensures that both the beans and
their clients can be deployed in multiple execution environments without changes
or recompilation.

The essential characteristics of an enterprise bean are:


• An enterprise bean typically contains business logic that operates on the
enterprise’s data.
• An enterprise bean’s instances are managed at runtime by a container.
• An enterprise bean can be customized at deployment time by editing its
environment entries.
• Various service information, such as transaction and security attributes, may be
specified together with the business logic of the enterprise bean class in the form of
metadata annotations, or separately, in an XML deployment descriptor. This service
information may be extracted and managed by tools during application assembly
and deployment.
• Client access is mediated by the container in which the enterprise bean is deployed.
• If an enterprise bean uses only the services defined by the EJB specification, the
enterprise bean can be deployed in any compliant EJB container. Specialized
containers can provide additional services beyond those defined by the EJB
specification. An enterprise bean that depends on such a service can be deployed
only in a container that supports that service.
• An enterprise bean can be included in an assembled application without requiring
source code changes or recompilation of the enterprise bean.
• The Bean Provider defines a client view of an enterprise bean. The Bean Provider
can manually define the client view or it can be generated automatically by
application development tools. The client view is unaffected by the container and
server in which the bean is deployed. This ensures that both the beans and their
clients can be deployed in multiple execution environments without changes or
recompilation.
Entities
An entity is a lightweight persistence domain object. Typically an entity represents a
table in a relational database, and each entity instance corresponds to a row in that
table. The primary programming artifact of an entity is the entity class, although entities
can use helper classes.

A typical entity object has the following characteristics:


• It is part of a domain model, providing an object view of data in the database.
• It can be long-lived (lives as long as the data in the database).
• The entity and its primary key survive the crash of the EJB container. If the state of
an entity was being updated by a transaction at the time the container crashed, the
entity’s state is restored to the state of the last committed transaction when the
entity is next retrieved.
A typical EJB container and server provide a scalable runtime environment for a large
number of concurrently active entity objects.

Advantages and Disadvantages

The use of entity beans offers the following advantages:


• POJO persistence–in JPA, persistent objects are POJOs.
• Object-relational mapping is completely metadata-driven.
• The persistence API exists as a separate layer from the persistent objects
and does not intrude upon them.
• Using the query framework you can query across entities and their
relationships without having to use concrete foreign keys or database
columns. Also, you can define queries statically in metadata or create
them dynamically by passing query criteria on construction. Queries can
return entities as results.
• Entities are mobile–objects are able to move from one JVM to another and
back, and at the same time be usable by the application.
• You can configure persistence features through the use of Java SE 5
annotations, or XML, or a combination of both. You may also rely on
defaults.
• If your application is running inside a container, the container provides
support and ease of use; you can configure the same application to run
outside a container.
The Entity Bean Class
The entity bean class contains several types of methods:
• Getter and setter methods for all persistent fields: For example, a Stock
class might have a field named tickerSymbol, with a getter method named
getTickerSymbol() and a setter method named setTickerSymbol(). We
sometimes call entity bean fields virtual fields, because having a field in the entity
bean actually named tickerSymbol is not required. The getter and setter method
names just imply the name of a field, similar to JavaBean properties.
• Methods that contain business logic: These methods typically access and manipulate the fields of the entity bean. For example, if you had an entity bean named StockTransactionBean with a price field and a quantity field, a method named getTransactionAmount() could be created to multiply the two fields and return the amount of the transaction.
• Lifecycle methods that are called by the EJB container: For example, as with
session beans, the method annotated by the @PostConstruct descriptor is called
after the entity bean has finished its instantiation, but before any of its business
methods are called. These callback methods can be overridden to pass in
initialization values.
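Put together, a minimal sketch of such an entity class might look like the following. StockTransaction and its fields are hypothetical, and the @Entity/@Id annotations are declared as local stand-ins so the sketch compiles without a persistence provider on the classpath (with JPA you would import them from javax.persistence instead).

```java
import java.math.BigDecimal;

// Local stand-ins for the JPA annotations, so this sketch compiles
// without a persistence provider on the classpath.
@interface Entity {}
@interface Id {}

// Hypothetical entity: the getter/setter names imply the persistent
// ("virtual") fields; the backing field names themselves are not mandated.
@Entity
public class StockTransaction {

    @Id
    private Long id;

    private String tickerSymbol;
    private BigDecimal price;
    private int quantity;

    public String getTickerSymbol() { return tickerSymbol; }
    public void setTickerSymbol(String tickerSymbol) { this.tickerSymbol = tickerSymbol; }

    public BigDecimal getPrice() { return price; }
    public void setPrice(BigDecimal price) { this.price = price; }

    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }

    // Business-logic method: derives the transaction amount from two fields.
    public BigDecimal getTransactionAmount() {
        return price.multiply(BigDecimal.valueOf(quantity));
    }
}
```

Note that the class is an ordinary POJO: nothing here is container-specific, which is exactly the ease-of-development point made above.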

When you use entity beans, you do not need to handle database transactions, database connection pooling, and similar services explicitly: the EJB container takes care of them. With the JDBC API you have to implement these features yourself. Container-managed entity beans also maintain database consistency automatically: if the connection fails during transaction processing, the container writes the data held in the entity beans' persistent storage back to the database to restore consistency, whereas with the JDBC API developers must do this manually.
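What "doing it manually" looks like can be sketched as follows. The helper below shows the transaction demarcation a developer must write with direct JDBC; the dynamic-proxy Connection is only a recording stand-in so the sketch runs without a real database (with a real driver you would obtain the Connection from a DataSource).

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ManualJdbcTx {

    // Records which JDBC calls the application code makes, so the sketch
    // can run without a real database. (A recording stand-in, not a driver.)
    static final List<String> calls = new ArrayList<>();

    static Connection fakeConnection() {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            (proxy, method, methodArgs) -> {
                calls.add(method.getName());
                return null;   // the sketch only exercises void demarcation calls
            });
    }

    // With direct JDBC the developer, not the container, demarcates the
    // transaction: disable auto-commit, do the work, commit or roll back.
    static void updateWithManualTx(Connection con) throws SQLException {
        try {
            con.setAutoCommit(false);
            // ... execute one or more statements here ...
            con.commit();
        } catch (SQLException e) {
            con.rollback();    // restore consistency by hand on failure
            throw e;
        } finally {
            con.close();       // no container pooling: release it yourself
        }
    }

    public static void main(String[] args) throws SQLException {
        updateWithManualTx(fakeConnection());
        System.out.println(calls); // prints [setAutoCommit, commit, close]
    }
}
```

With container-managed transactions, all of the boilerplate in updateWithManualTx disappears from application code.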

Session Objects
A typical session object has the following characteristics:
• Executes on behalf of a single client.
• Can be transaction-aware.
• Updates shared data in an underlying database.
• Does not represent directly shared data in the database, although it may access
and update such data.
• Is relatively short-lived.
• Is removed when the EJB container crashes. The client has to re-establish a new
session object to continue computation.

There are advantages and disadvantages to making a session bean stateful. The
following are some of the advantages:

• Transient information, such as that described in the stock trading scenario, can be
stored easily in the instance variables of the session bean, as opposed to defining and
using entity beans (or JDBC) to store it in a database.

• Since this transient information is stored in the session bean, the client doesn’t need to
store it and potentially waste bandwidth by sending the session bean the same
information repeatedly with each call to a session bean method. This bandwidth issue is
a big deal when the client is installed on a user’s machine that invokes the session bean
methods over a phone modem, for example. Bandwidth is also an issue when the data is
very large or needs to be sent many times repeatedly.
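A sketch of that idea: the class below is the shape an EJB 3 stateful session bean class could take for the stock-trading scenario. The @Stateful annotation is declared as a local stand-in (the real one lives in javax.ejb), and TradingCart with its methods is hypothetical; the instance variables hold the conversational state for one client, so the client never resends it.

```java
import java.util.ArrayList;
import java.util.List;

// Local stand-in for the javax.ejb.Stateful annotation, so this sketch
// compiles standalone.
@interface Stateful {}

// Hypothetical stateful session bean: the instance variable below holds
// conversational state for a single client between method calls.
@Stateful
public class TradingCart {

    // Transient, per-client state kept in the bean instance rather than
    // being stored in a database or resent by the client on every call.
    private final List<String> pendingOrders = new ArrayList<>();

    public void addOrder(String order) {
        pendingOrders.add(order);          // state survives between calls
    }

    public List<String> getPendingOrders() {
        return pendingOrders;
    }
}
```

A stateless bean, by contrast, could not keep pendingOrders between calls; the client would have to pass the accumulated orders on every invocation.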

Advantages:

• It supports transaction services, security services, life-cycle management, RMI, instance caching, thread safety, etc. You need not write code for these services.
• It can be used by both web-based and non-web-based clients (such as Swing clients).
• It can be used for multiple operations within a single HTTP request.
• Stateless beans offer performance and scalability advantages.

Disadvantages:

• Since it supports the services mentioned above, it takes more time to process a request.

A typical EJB container provides a scalable runtime environment to execute a large number of session objects concurrently.

Message-Driven Objects
A typical message-driven object has the following characteristics:
• Executes upon receipt of a single client message.
• Is asynchronously invoked.
• Can be transaction-aware.
• May update shared data in an underlying database.
• Does not represent directly shared data in the database, although it may access
and update such data.
• Is relatively short-lived.
• Is stateless.
• Is removed when the EJB container crashes. The container has to re-establish a
new message-driven object to continue computation.

A typical EJB container provides a scalable runtime environment to execute a large


number of message-driven objects concurrently.

One of the advantages of message-driven beans as compared to non-EJB message


listeners is that the EJB container might create multiple beans to handle incoming
messages. The EJB container dispatches queue messages to as many message-driven
beans as it sees fit (the EJB container might only use a single bean if it desires).

One of the disadvantages of message-driven beans is that they can only listen to a
single queue or topic. A single message-driven bean can't listen to messages from two
different queues.

One of the most important aspects of message-driven beans is that they can consume and
process messages concurrently. This capability provides a significant advantage over
traditional JMS clients, which must be custom-built to manage resources, transactions,
and security in a multithreaded environment. The message-driven bean containers
provided by EJB manage concurrency automatically, so the bean developer can focus on
the business logic of processing the messages. The MDB can receive hundreds of JMS
messages from various applications and process them all at the same time, because
numerous instances of the MDB can execute concurrently in the container.
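The concurrent dispatch described above can be sketched in plain Java. The MessageListener interface and OrderMdb class below are simplified stand-ins (not the real javax.jms/javax.ejb types), and an ExecutorService plays the container's role of running pooled, stateless bean instances concurrently.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for javax.jms.MessageListener, for illustration only.
interface MessageListener {
    void onMessage(String message);
}

public class MdbSketch {

    // Hypothetical message-driven bean: stateless, so any pooled instance
    // can handle any message.
    static class OrderMdb implements MessageListener {
        static final AtomicInteger processed = new AtomicInteger();
        public void onMessage(String message) {
            // business logic only; the "container" below handles threading
            processed.incrementAndGet();
        }
    }

    // Plays the container's role: dispatches each incoming message to a
    // bean instance on a pooled thread, so messages are handled concurrently.
    static void dispatchAll(String[] messages) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String m : messages) {
            pool.submit(() -> new OrderMdb().onMessage(m));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        dispatchAll(new String[] { "o1", "o2", "o3", "o4", "o5" });
        System.out.println(OrderMdb.processed.get()); // prints 5
    }
}
```

Note that OrderMdb contains no threading, transaction, or resource-management code at all; that separation is exactly what the container provides for real MDBs.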
One of the principal advantages of JMS messaging is that it's asynchronous. In other
words, a JMS client can send a message without having to wait for a reply. Contrast this
flexibility with the synchronous messaging of Java RMI. RMI is an excellent choice for
assembling transactional components, but is too restrictive for some uses. Each time a
client invokes a bean's method it blocks the current thread until the method completes
execution. This lock-step processing makes the client dependent on the availability of
the EJB server, resulting in a tight coupling between the client and enterprise bean.

The advantage of using this approach is that it performs faster than either of the
stateless session bean scenarios, because the MDB does not need to dispatch incoming
requests to another EJB.

When to Use Message-Driven Beans

Session beans allow you to send JMS messages and to receive them synchronously, but not asynchronously. To avoid tying up server resources, do not use blocking synchronous receives in a server-side component; in general, JMS messages should not be sent or received synchronously. To receive messages asynchronously, use a message-driven bean.

You should consider using enterprise beans if your application has any of the following
requirements:
• The application must be scalable. To accommodate a growing number of users,
you may need to distribute an application’s components across multiple
machines. Not only can the enterprise beans of an application run on different
machines, but also their location will remain transparent to the clients.
• Transactions must ensure data integrity. Enterprise beans support transactions,
the mechanisms that manage the concurrent access of shared objects.
• The application will have a variety of clients. With only a few lines of code,
remote clients can easily locate enterprise beans. These clients can be thin,
various, and numerous.

What Makes Message-Driven Beans Different from Session Beans?

The most visible difference between message-driven beans and session beans is that
clients do not access message-driven beans through interfaces. Unlike a session bean,
a message-driven bean has only a bean class.
In several respects, a message-driven bean resembles a stateless session bean.
• A message-driven bean’s instances retain no data or conversational state for a
specific client.
• All instances of a message-driven bean are equivalent, allowing the EJB
container to assign a message to any message-driven bean instance. The
container can pool these instances to allow streams of messages to be
processed concurrently.
• A single message-driven bean can process messages from multiple clients.
• The instance variables of the message-driven bean instance can contain some
state across the handling of client messages (for example, a JMS API
connection, an open database connection, or an object reference to an
enterprise bean object).
• Client components do not locate message-driven beans and invoke methods directly on them. Instead, a client accesses a message-driven bean through, for example, JMS by sending messages to the message destination for which the message-driven bean class is the MessageListener.

You assign a message-driven bean’s destination during deployment by using Application Server resources.

The benefit of the Enterprise JavaBeans Technology, for version 3.0 of the EJB specifications, is that it simplifies the development of large, distributed applications, for the following reasons:

First, because the EJB container provides system-level services to enterprise beans,
the bean developer can concentrate on solving business problems. The EJB container,
rather than the bean developer, is responsible for system-level services such as
transaction management and security authorization.

Second, because the beans rather than the clients contain the application’s business
logic, the client developer can focus on the presentation of the client. The client
developer does not have to code the routines that implement business rules or access
databases. As a result, the clients are thinner, a benefit that is particularly important for
clients that run on small devices.

Third, because enterprise beans are portable components, the application assembler
can build new applications from existing beans. These applications can run on any
compliant Java EE server provided that they use the standard APIs.

4.2 Explain and contrast the following persistence (i.e. storage) strategies (i.e. mechanisms): (1) container-managed persistence (CMP), (2) BMP, (3) JDO, (4) JPA, (5) ORM and using (6) DAOs (Data Access Objects) and (7) direct JDBC technology-based persistence under the following headings: a. ease of development, b. performance, c. scalability, d. extensibility, and e. security.

Ref.:
• Container Managed Persistence (CMP) versus Bean Managed Persistence (BMP) Entity Beans
• Starting with Java Persistence Architecture
• JDO Architectures
• DAO Pattern
• Best Practices for Object/Relational Mapping and Persistence APIs

(1) Container-managed persistence (CMP): A storage method in which an entity bean delegates the responsibility for its persistence to its container.

(2) Bean-managed persistence (BMP): A storage method in which an entity bean is responsible for its own persistence, typically through JDBC code.

BMP versus CMP
a. Performance
BMP should always win out over CMP in performance.

Ease of development
Bean-managed beans also offer greater flexibility with respect to the type of data they are
representing. The data comprising an entity bean need not be from a single table or a single database,
or even from any database, for that matter. When you are in charge of managing the persistence of
your data, you are at complete liberty to do anything you want when the EJB container notifies you;
this also provides you with the ability to offer an EJB front-end to an existing legacy system.

Sometimes, the nature of your database schema does not mix well with the object-oriented
representation that you are trying to develop, leaving you with no choice other than to use BMP.

Maintainability
Bean-managed beans have their downside too: maintainability and convenience. BMP beans are tied
very closely to a database schema (the code to access specific tables and column names is hard-coded in the bean itself).
a specific table and its fields to that table's columns. A change in the database schema does not
necessarily equate to a change in the bean; a smart object-mapping tool can hide some of the
changes, but a major overhaul will affect the bean code, regardless.
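Another strategy from the list above, (6) the DAO pattern, directly addresses this maintainability problem. The sketch below is hypothetical: AccountDao and both implementations are illustrative names, and the in-memory variant stands in for a JDBC- or JPA-backed one so the sketch runs without a database. The point is that business code depends only on the interface, so the persistence mechanism behind it can be swapped.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical DAO sketch: business code depends only on the interface,
// so the persistence mechanism behind it can be replaced at will.
public class DaoSketch {

    interface AccountDao {
        void save(String id, double balance);
        Double findBalance(String id);   // null when the account is unknown
    }

    // In-memory implementation, standing in for a JDBC- or JPA-backed DAO.
    static class InMemoryAccountDao implements AccountDao {
        private final Map<String, Double> table = new HashMap<>();
        public void save(String id, double balance) { table.put(id, balance); }
        public Double findBalance(String id) { return table.get(id); }
    }

    // Business logic written only against the DAO interface: no SQL, no
    // schema knowledge, nothing to change when the datastore changes.
    static void deposit(AccountDao dao, String id, double amount) {
        Double current = dao.findBalance(id);
        dao.save(id, (current == null ? 0.0 : current) + amount);
    }

    public static void main(String[] args) {
        AccountDao dao = new InMemoryAccountDao();
        deposit(dao, "acc-1", 100.0);
        deposit(dao, "acc-1", 50.0);
        System.out.println(dao.findBalance("acc-1")); // prints 150.0
    }
}
```

A schema change is then confined to the DAO implementation, instead of rippling through bean code the way it does with hand-written BMP.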

(3) JDO: Java Data Object is a specification of Java object persistence. One of its features is the transparency of the persistence services to the domain model. JDO persistent objects are ordinary Java programming language classes; there's no requirement for them to implement certain interfaces or extend from special classes. JDO 1.0 was developed under the Java Community Process as JSR 12. JDO 2.0 was developed under JSR 243 and was released on May 10th, 2006. JDO 2.1 is now underway, being developed by the Apache JDO project.

Object persistence is defined in the external XML metafiles, which may have vendor-specific
extensions. JDO vendors provide developers with enhancers, which modify compiled Java class
files so they can be transparently persisted. (Note that byte-code enhancement is not mandated
by the JDO specification, although it is the commonly used mechanism for implementing the JDO
specification's requirements.) Currently, JDO vendors offer several options for persistence, e.g. to
RDBMS, to OODB, to files.

JDO enhanced classes are portable across different vendors' implementation. Once enhanced, a
Java class can be used with any vendor's JDO product.

JDO is integrated with Java EE in several ways. First of all, the vendor implementation may be
provided as a JEE Connector. Secondly, JDO may work in the context of JEE transaction
services.


The main goals of the JDO specification are to provide:

• For application developers: a consistent, Java-object-model-centric view of permanent data stores, whether local or networked.

JDO addresses not only enterprise data, but also persistent data in general (even in
embedded systems). In a two-tier environment, JDO hides the details of data storage,
types, relationships, and retrieval. In the more complex managed multi-tier environment
JDO also handles concurrency issues, transactions, scalability, security, and connection
management.

JDO relies on the J2EE Connector architecture (JCA) for EIS access and uses the Java
Transaction API (JTA) for distributed transactions. A JDO instance is an instance of a persistence-capable class (one that implements the PersistenceCapable interface); each such instance represents some form of persistent data. You can make a Java class persistence-
capable by either explicitly implementing the PersistenceCapable interface or using
a JDO enhancer during or after compile time. (A JDO enhancer is a byte code enhancer
program that modifies the byte codes of Java class files to enable transparent loading
and storing of the persistent instances' fields.) You can make almost any user-defined
class PersistenceCapable; however, some system classes, such as Thread, Socket,
and File, among others, can never be persistence capable.

One of JDO's primary objectives is to provide you with a transparent, Java-centric view
of persistent information stored in a wide variety of datastores. You can use the Java
programming model to represent the data in your application domain and transparently
retrieve and store this data from various systems, without needing to learn a new data-
access language for each type of datastore. The JDO implementation provides the
necessary mapping from your Java objects to the special datatypes and relationships of
the underlying datastore. What follows is a high-level overview of the architectural aspects of JDO, along with examples of environments in which JDO can be used; these examples are not exhaustive, because JDO is capable of running in a wide variety of architectures.

A JDO implementation is a collection of classes that implement the interfaces defined in


the JDO specification. The implementation may be provided by an Enterprise
Information System (EIS) vendor or a third-party vendor; in this context, we refer to both
as JDO vendors. A JDO implementation provided by an EIS vendor will most likely be
optimized for the specific EIS.

The JDO architecture simplifies the development of scalable, secure, and transactional
JDO implementations that support the JDO interface. You can access a wide variety of
storage solutions that have radically different architectures and data models, but you can
use a single, consistent, Java-centric view of the information from all the datastores.

The JDO architecture can be used to access and manage data contained in local
storage systems and heterogeneous EISs, such as enterprise resource planning (ERP)
systems, mainframe transaction processing systems, and database systems. JDO was
designed to be suitable for a wide range of uses, from embedded small-footprint
systems to large-scale enterprise application servers. A JDO implementation may
provide an object-relational mapping tool that supports a broad array of relational
databases. JDO vendors can build implementations directly on the filesystem or as a
layer on top of a protocol stack with multiple components.

JDO has been designed to work in three primary environments:

• Nonmanaged, single transaction


Involves a single transaction and a single JDO implementation, where
compactness is the primary concern. Nonmanaged refers to the lack of distribution
and security within the JVM. The security of the datastore is implemented by
name/password controls.
• Nonmanaged, multiple transactions
Identical to the first, except that the application uses extended features, such as
concurrent transactions.
• Managed
Uses the full range of capabilities of an application server, including distributed
components and coordinated transactions. Security policies are applied to
components based on user roles and security domains.

You can focus on developing your application's business and presentation logic without
having to get involved in the issues related to connecting to a specific EIS. The JDO
implementation hides the EIS-specific issues, such as datatype mapping, relationship
mapping, and the retrieval and storage of data. Your application sees only a Java view
of the data, organized as classes using native Java constructs. EIS-specific issues are
important only during deployment of your application.

In a nonmanaged environment, you do not rely on the managed services of security, transaction, and connection management offered by a middle-tier application server. Most of what applies to JDO in a nonmanaged environment also applies to a managed environment.

When JDO is deployed in a managed environment, it uses the J2EE Java Connector
Architecture, which defines a set of portable, scalable, secure, and transactional
mechanisms for integrating an EIS with an application server. These mechanisms focus
on important aspects of integration with heterogeneous systems: instance management,
connection management, and transaction management. The Java Connector

Architecture enables a standard JDO implementation to be pluggable across application
servers from multiple vendors.
Managed environments also provide transparency for application components' use of system-level mechanisms--distributed transactions, security, and connection management--by hiding the contracts between the JDO implementation and the application server. JDO can be used in the web server environment, and it can provide persistence services in a J2EE application-server environment, which supports the Enterprise JavaBeans (EJB) architecture.

Multiple JDO implementations--possibly multiple implementations per type of EIS or local


storage--can be plugged into an application server concurrently, or they can be used
directly in a two-tier or embedded architecture. JDO also allows a persistent class to be
used concurrently with multiple JDO implementations in the same Java Virtual Machine
(JVM) or application-server environment. This enables application components--
deployed on a middle-tier application server or client-tier--to access the underlying
datastores using the same consistent, Java-centric view of data.

The persistent classes that you define can migrate easily from one environment to
another. This also allows you to debug persistent classes and parts of your application
code in a simple one- or two-tier environment and deploy them in another tier of the
system architecture.

Ease of development: the JDO API allows application developers to focus on their
domain object model (DOM) and leave the details of the persistence to the JDO
implementation.

High performance: Java application developers do not need to worry about
performance optimization for data access because this task is delegated to JDO
implementations that can improve data access patterns for best performance.

Extensibility: applications can take advantage of EJB features such as remote
message processing, automatic distributed transaction coordination, and security using
the same DOMs throughout the enterprise. Moreover, applications written using the JDO
API can be run on multiple implementations available from different vendors without
changing a single line of code or even recompiling.

Scalability and security: since JDO can run within a managed environment such as a
J2EE environment, it can take advantage of the scalability and security capabilities of
the J2EE architecture.

Use of JDO vs. JDBC and EJB


JDO is not meant to replace JDBC. They are complementary approaches with unique
strengths, and developers with different skill sets and different development objectives
can use either. For example, JDBC offers developers greater flexibility by giving them
direct control over database access and cache management. JDBC is a more mature
technology with wide industry acceptance. JDO, on the other hand, offers developers
convenience by hiding SQL. This frees Java platform developers to focus on the DOM
without necessarily knowing or having to learn SQL, while JDO takes care of the details
of the field-by-field storage of objects in a persistent store.
JDO is designed to complement EJB. EJB CMP provides portable persistence for
containers, and JDO can be integrated with EJB in two ways: (1) through Session Beans
with JDO persistence-capable classes to implement dependent objects (persistent helper
classes for Session Beans) and (2) through Entity Beans with JDO persistence-capable
classes used as delegates for both Bean Managed Persistence (BMP) and Container
Managed Persistence (CMP).

JDO uses JDBC under the hood, so raw JDBC is often faster (sometimes much faster) than
JDO. JDO implementations, however, typically provide a cache (and a connection pool), so
in some cases JDO will beat "bare" JDBC.

As you can see, JDO is a higher-level persistence API than JDBC; the relationship has
some parallels with that between CMP and BMP. JDO and CMP are more complex but can be
faster to develop with. BMP and JDBC allow more freedom: you may try to create your own
pooling and caching and achieve greater performance, but this will take considerable time.
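The caching point can be made concrete with a toy sketch (this is not JDO itself; all names here are invented for illustration): a read-through cache in front of a slow lookup saves repeated round trips, which is the same idea a JDO implementation's instance cache applies on top of JDBC.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy illustration only (not JDO): a read-through cache in front of a slow
// lookup, analogous to the cache a JDO implementation puts in front of raw
// JDBC round trips.
public class CachingFinder<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> slowLookup; // stands in for a JDBC query
    private int misses = 0;                  // counts actual "database" hits

    public CachingFinder(Function<K, V> slowLookup) {
        this.slowLookup = slowLookup;
    }

    public V find(K key) {
        // Only consults the slow lookup on a cache miss.
        return cache.computeIfAbsent(key, k -> {
            misses++;
            return slowLookup.apply(k);
        });
    }

    public int getMisses() { return misses; }

    public static void main(String[] args) {
        CachingFinder<Integer, String> finder =
                new CachingFinder<>(id -> "row-" + id);
        finder.find(1);
        finder.find(1); // served from cache, no second lookup
        finder.find(2);
        System.out.println(finder.getMisses()); // 2 lookups for 3 finds
    }
}
```

With repeated reads of the same objects, the second and later finds never touch the underlying store, which is why a caching layer can beat "bare" JDBC despite sitting above it.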

(4) JPA (Java Persistence API)

The Java Persistence API, sometimes referred to as JPA, is a Java programming
language framework that allows developers to manage relational data in applications
using Java Platform, Standard Edition and Java Platform, Enterprise Edition.
Motivation for creating the Java Persistence API
Many enterprise Java developers use lightweight persistent objects provided by open-
source frameworks or Data Access Objects instead of entity beans: entity beans and
enterprise beans had a reputation of being too heavyweight and complicated, and one
could only use them in Java EE application servers. Many of the features of the third-
party persistence frameworks were incorporated into the Java Persistence API, and as of
2009 projects like Hibernate and the open-source TopLink Essentials have
become implementations of the Java Persistence API.

(5) ORM

Object-relational mapping (ORM, O/RM, and O/R mapping) is a programming
technique for converting data between incompatible type systems in relational databases
and object-oriented programming languages. This creates, in effect, a "virtual object
database" that can be used from within the programming language. There are both free
and commercial packages available that perform object-relational mapping, although
some programmers opt to create their own ORM tools.
Another solution to ORM is to use an object-oriented database management system
(OODBMS), which, as the name implies, is a database designed specifically for working
with object-oriented values. Using an OODBMS eliminates the need for converting data
to and from its SQL form, as the data is stored in its original object representation and

relationships are directly represented, rather than requiring join tables/operations.
The root of the problem is that objects can't be directly saved to and retrieved from
relational databases. While objects have identity, state, and behavior in addition to data,
an RDBMS stores data only. Even data alone can present a problem, since there is often
no direct mapping between Java and RDBMS data types. Furthermore, while objects are
traversed using direct references, RDBMS tables are related via like values in foreign and
primary keys. Additionally, current RDBMS have no parallel to Java's object inheritance
for data and behavior. Finally, the goal of relational modeling is to normalize data (i.e.,
eliminate redundant data from tables), whereas the goal of object-oriented design is to
model a business process by creating real-world objects with data and behavior. Robust
object-oriented application development requires a mapping strategy built on a solid
understanding of the similarities and differences in these models.
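The identity point above can be seen in a few lines of plain Java (the CustomerRow class is a made-up illustration): two objects carrying identical column data, which an RDBMS would treat as the same row, are nonetheless two distinct objects.

```java
import java.util.Objects;

// A row's worth of data as a Java object (illustrative class, not from any
// framework). Two instances can be equal field-by-field -- the same "data"
// to an RDBMS -- while remaining distinct objects with their own identity.
public class CustomerRow {
    final int id;
    final String name;

    CustomerRow(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof CustomerRow)) return false;
        CustomerRow c = (CustomerRow) o;
        return id == c.id && Objects.equals(name, c.name);
    }

    @Override public int hashCode() { return Objects.hash(id, name); }

    public static void main(String[] args) {
        CustomerRow a = new CustomerRow(1, "Ada");
        CustomerRow b = new CustomerRow(1, "Ada");
        System.out.println(a.equals(b)); // true: identical data
        System.out.println(a == b);      // false: distinct object identity
    }
}
```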



Pros and cons in performance and ease of development
ORM often reduces the amount of code needed to be written,[1] making the software more
robust (the fewer the lines of code in a program, the fewer the errors contained within
them).
There are cons as well as benefits for using O/R mapping. For instance, some O/R
mapping tools do not perform well during bulk deletions of data. Stored procedures may
have better performance but are not portable.

(6) DAO (Data Access Objects)

A data access object (DAO) is an object that provides an
abstract interface to some type of database or persistence mechanism, providing some
specific operations without exposing details of the database. It provides a mapping from
application calls to the persistence layer. This isolation separates the concerns of what
data accesses the application needs, in terms of domain-specific objects and data types
(the public interface of the DAO), and how these needs can be satisfied with a specific
DBMS, database schema, etc. (the implementation of the DAO).

The DAO pattern is one of the standard J2EE design patterns. Developers use this pattern
to separate low-level data access operations from high-level business logic.
This design pattern is equally applicable to most programming languages, most types of
software with persistence needs and most types of database, but it is traditionally
associated with Java EE applications and with relational databases accessed via the JDBC
API because of its origin in Sun Microsystems' best practice guidelines[1] ("Core J2EE
Patterns") for that platform.
The advantage of using data access objects is the relatively simple and rigorous
separation between two important parts of an application which can and should know
almost nothing of each other, and which can be expected to evolve frequently and
independently. Changing business logic can rely on the same DAO interface, while
changes to persistence logic do not affect DAO clients as long as the interface remains
correctly implemented.
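The separation and swappability described above can be sketched in a few lines (the names CustomerDao and InMemoryCustomerDao are invented for illustration, and the in-memory map stands in for a real DBMS): business logic codes against the interface, and the concrete implementation can change underneath it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal DAO sketch. The interface is the only thing business logic sees;
// the in-memory implementation stands in for a JDBC- or ORM-backed one and
// could be swapped without touching any caller.
interface CustomerDao {
    void save(int id, String name);
    Optional<String> findName(int id);
}

class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, String> table = new HashMap<>(); // stand-in for a DB table

    public void save(int id, String name) { table.put(id, name); }

    public Optional<String> findName(int id) {
        return Optional.ofNullable(table.get(id));
    }
}

public class DaoDemo {
    public static void main(String[] args) {
        CustomerDao dao = new InMemoryCustomerDao(); // callers depend on the interface only
        dao.save(42, "Grace");
        System.out.println(dao.findName(42).orElse("missing")); // Grace
    }
}
```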
In the specific context of the Java programming language, Data Access Objects as a
design concept can be implemented in a number of ways. This can range from a fairly
simple interface that separates the data access parts from the application logic, to
frameworks and commercial products. DAO coding paradigms can require some skill.
Use of technologies like Java persistence technologies and JDO ensures to some extent
that the design pattern is implemented. Technologies like EJB CMP come built into
application servers and can be used in applications that use a JEE application server.
Commercial products like TopLink are available based on Object-relational mapping.
Popular open source ORM products include Hibernate, iBATIS and Apache OpenJPA.

Performance
DAO is a good design pattern, but performance-wise CMP is more suitable for updating the
database. Use DAO for database reads, such as listing all customers, etc., because reading a
huge number of records is not advisable through CMP.



Ease of development

• My experience is that the DAO makes for tight coupling of your object with the
persistence. I've been called upon to maintain systems that used DAOs, and
those systems were significantly harder to maintain than those with another
form of persistence.
• It is also expected that if the DAO implementation were to change, the other
parts of the application would be unaffected.
• Resources dedicated to developing and implementing this layer translate into
better software in this layer.

• The DAO pattern is applicable to the data layer. The VO pattern will be required along with
it. The DAO+VO (Value Object) combination is good; it is what we can call POJOs
(plain old Java objects).

The alternatives to it are:


1. CMP bean
2. BMP bean

The criteria for selection can be:

• Use CMP if there are not too many relationships between DB tables.
• If DB portability is important and you want to minimize maintenance effort, use BMP.
• If scalability is important, remember the EJB container provides lots of features
for this, e.g. bean pooling and serialization support for excess beans.
• If scalability is not a concern, use DAO+VO (Value Object); this eliminates
the need for a J2EE server. A plain web server like Tomcat will do. Good cost-
cutting, if that is what you or your company is looking for and the anticipated load is not
much.
• If you need the flexibility of CMP in the DAO pattern, try using the Factory pattern for
the databases required to be supported.

DAO is a pattern for linking persistence into your model. CMP and BMP are ways of
making an object persistent. The DAO object can use CMP, BMP or plain JDBC. The
pattern does not specify how persistence is realized.
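The Factory suggestion in the list above can be sketched as follows (a minimal illustration with invented class names, not a full J2EE Abstract Factory): callers ask the factory for a DAO by database vendor and stay decoupled from the concrete implementation.

```java
// Sketch of combining DAO with the Factory pattern, as suggested above.
// All class names are illustrative; each concrete DAO would wrap its own
// JDBC/CMP/BMP persistence code in a real system.
interface OrderDao {
    String describe();
}

class MySqlOrderDao implements OrderDao {
    public String describe() { return "MySQL-backed OrderDao"; }
}

class OracleOrderDao implements OrderDao {
    public String describe() { return "Oracle-backed OrderDao"; }
}

public class DaoFactory {
    // Callers never name a concrete DAO class; supporting a new database
    // means adding one case here, not touching business logic.
    public static OrderDao forDatabase(String vendor) {
        switch (vendor) {
            case "mysql":  return new MySqlOrderDao();
            case "oracle": return new OracleOrderDao();
            default: throw new IllegalArgumentException("Unsupported: " + vendor);
        }
    }

    public static void main(String[] args) {
        OrderDao dao = DaoFactory.forDatabase("mysql");
        System.out.println(dao.describe()); // MySQL-backed OrderDao
    }
}
```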

Why JPA?

Java developers who need to store and retrieve persistent data already have several
options available to them: serialization, JDBC, JDO, proprietary object-relational
mapping tools, object databases, and EJB 2 entity beans. Why introduce yet another
persistence framework? The answer to this question is that with the exception of JDO,
each of the aforementioned persistence solutions has severe limitations. JPA attempts to
overcome these limitations, as illustrated by the table below.
Table 2.1. Persistence Mechanisms



Supports:                              Serialization  JDBC  ORM  ODB  EJB 2  JDO  JPA
Java Objects                           Yes            No    Yes  Yes  Yes    Yes  Yes
Advanced OO Concepts                   Yes            No    Yes  Yes  No     Yes  Yes
Transactional Integrity                No             Yes   Yes  Yes  Yes    Yes  Yes
Concurrency                            No             Yes   Yes  Yes  Yes    Yes  Yes
Large Data Sets                        No             Yes   Yes  Yes  Yes    Yes  Yes
Existing Schema                        No             Yes   Yes  No   Yes    Yes  Yes
Relational and Non-Relational Stores   No             No    No   No   Yes    Yes  No
Queries                                No             Yes   Yes  Yes  Yes    Yes  Yes
Strict Standards / Portability         Yes            No    No   No   Yes    Yes  Yes
Simplicity                             Yes            Yes   Yes  Yes  No     Yes  Yes

 Serialization is Java's built-in mechanism for transforming an object graph into a
series of bytes, which can then be sent over the network or stored in a file.
Serialization is very easy to use, but it is also very limited. It must store and
retrieve the entire object graph at once, making it unsuitable for dealing with large
amounts of data. It cannot undo changes that are made to objects if an error occurs
while updating information, making it unsuitable for applications that require strict
data integrity. Multiple threads or programs cannot read and write the same
serialized data concurrently without conflicting with each other. It provides no
query capabilities. All these factors make serialization useless for all but the most
trivial persistence needs.
 Many developers use the Java Database Connectivity (JDBC) APIs to manipulate
persistent data in relational databases. JDBC overcomes most of the shortcomings
of serialization: it can handle large amounts of data, has mechanisms to ensure
data integrity, supports concurrent access to information, and has a sophisticated
query language in SQL. Unfortunately, JDBC does not duplicate serialization's
ease of use. The relational paradigm used by JDBC was not designed for storing
objects, and therefore forces you to either abandon object-oriented programming
for the portions of your code that deal with persistent data, or to find a way of
mapping object-oriented concepts like inheritance to relational databases yourself.
 There are many proprietary software products that can perform the mapping
between objects and relational database tables for you. These object-relational
mapping (ORM) frameworks allow you to focus on the object model and not
concern yourself with the mismatch between the object-oriented and relational
paradigms. Unfortunately, each of these products has its own set of APIs. Your
code becomes tied to the proprietary interfaces of a single vendor. If the vendor
raises prices, fails to fix show-stopping bugs, or falls behind in features, you
cannot switch to another product without rewriting all of your persistence code.
This is referred to as vendor lock-in.
 Rather than map objects to relational databases, some software companies have
developed a form of database designed specifically to store objects. These object
databases (ODBs) are often much easier to use than object-relational mapping
software. The Object Database Management Group (ODMG) was formed to create
a standard API for accessing object databases; few object database vendors,
however, comply with the ODMG's recommendations. Thus, vendor lock-in
plagues object databases as well. Many companies are also hesitant to switch from
tried-and-true relational systems to the relatively unknown object database
technology. Fewer data-analysis tools are available for object database systems,
and there are vast quantities of data already stored in older relational databases.
For all of these reasons and more, object databases have not caught on as well as
their creators hoped.
 The Enterprise Edition of the Java platform introduced entity Enterprise Java
Beans (EJBs). EJB 2.x entities are components that represent persistent
information in a datastore. Like object-relational mapping solutions, EJB 2.x
entities provide an object-oriented view of persistent data. Unlike object-relational
software, however, EJB 2.x entities are not limited to relational databases; the
persistent information they represent may come from an Enterprise Information
System (EIS) or other storage device. Also, EJB 2.x entities use a strict standard,
making them portable across vendors. Unfortunately, the EJB 2.x standard is

somewhat limited in the object-oriented concepts it can represent. Advanced
features like inheritance, polymorphism, and complex relations are absent.
Additionally, EJB 2.x entities are difficult to code, and they require heavyweight
and often expensive application servers to run.
 The JDO specification uses an API that is strikingly similar to JPA. JDO,
however, supports non-relational databases, a feature that some argue dilutes the
specification.
JPA combines the best features from each of the persistence mechanisms listed above.
Creating entities under JPA is as simple as creating serializable classes. JPA supports the
large data sets, data consistency, concurrent use, and query capabilities of JDBC. Like
object-relational software and object databases, JPA allows the use of advanced object-
oriented concepts such as inheritance. JPA avoids vendor lock-in by relying on a strict
specification like JDO and EJB 2.x entities. JPA focuses on relational databases. And like
JDO, JPA is extremely easy to use.
(7) Direct JDBC
Java Database Connectivity (JDBC), the traditional mechanism for accessing relational
databases from Java programs, is getting pretty close to being “deprecated”.
The JDBC API is low-level. It is too concerned with the mechanics – getting connections,
creating statements, composing strings with SQL directives, arranging for the SQL to be
run, and pulling data out of result sets one datum at a time. Such JDBC code breaks the
predominant “invoke method on object” style of Java in much the same way as the
java.net (sockets) API breaks the object model when invoking operations of an object in
another process.
Java’s Remote Method Invocation API was created years ago to hide the network level
and provide a consistent object model for distributed programs. The new Java Persistence
API (JPA) and supporting technologies will in future provide a similar
insulating layer that will conceal low-level details of data persistence mechanisms so
allowing an application developer to work in a consistent “object programming” style.
Of course, the JDBC libraries will remain in use. All that is happening is that a new level
of software is being introduced. This JPA level automates the object to relational-table
mapping. The JDBC code is still present – it is simply in the supplied JPA
implementation library.
JPA implementations
There are several implementations available including those based on Hibernate, Kodo,
an OpenJPA implementation for Apache Geronimo, and an implementation for SAP’s
application server. The “standard” implementation is currently Oracle’s Toplink. The
Toplink libraries are included in the Java enterprise development kits downloaded from
Sun.
In a traditional JDBC program, it is commonplace to use simple data classes that
correspond to the rows of database tables. Instances of these classes have data members
that hold data copied to and from corresponding columns in a relational table. Code
utilizing such classes is simple, but relatively long winded and clumsy. As example, the
code needed to create an object with the data for a record with a given key involves
parameterising and running a query, getting a ResultSet row, creating an empty object,
and then successively extracting data elements for each column and using these to set
members in the new object. Such code is very regular, and consequently can easily be
handled automatically if given a mapping from table columns to data members.
Ease of development in JPA
A JPA application makes use of an “entity manager” that serves as its “intelligent”
connection to the relational data store. The code needed to instantiate an in-memory
object with the data associated with a primary key is simplified down to a request that
this entity manager “find” that object – one line of code rather than an entire procedure
that runs a query and processes the results in the ResultSet.
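To make the contrast concrete without a real JPA provider, here is a deliberately toy analogy (this is not the javax.persistence API; all names are invented): a map-backed "entity manager" whose find call is the one-line replacement for the hand-written query-and-copy procedure.

```java
import java.util.HashMap;
import java.util.Map;

// Toy analogy only -- NOT the real javax.persistence API. A map-backed
// "entity manager" whose find(id) stands in for the query/ResultSet/
// field-copying procedure a hand-written JDBC layer would need.
class ToyEntityManager<T> {
    private final Map<Object, T> store = new HashMap<>();

    void persist(Object id, T entity) { store.put(id, entity); }

    T find(Object id) { return store.get(id); } // the "one line of code"
}

public class FindDemo {
    public static void main(String[] args) {
        ToyEntityManager<String> em = new ToyEntityManager<>();
        em.persist(7, "Lin");
        // One call, instead of parameterising a query, walking a ResultSet,
        // and copying columns into a freshly constructed object.
        System.out.println(em.find(7)); // Lin
    }
}
```

In real JPA the equivalent call is the entity manager's find, taking the entity class and primary key; the provider performs the JDBC work behind it.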
JPA code simplicity – leave the complexities to the Entity Manager
The “entity manager” must of course be supplied with data identifying the data source
and specific details of which table must be used and how the table columns relate to data
members of the Java class used to represent the in-memory object. These necessary
“meta-data” can be supplied separately from the Java code; an XML file can contain
details of the data source and the table mappings.
Meta-data relate “EntityManager” and database
“Annotations” simplify meta-data
The JPA system provides an additional facility for aiding rapid application development
through the use of Java code annotations that allow the programmer to define most of the
mapping details with the code. (While the annotation approach facilitates development,
the use of an XML file with all the meta-data may be more appropriate for production
environments.)
The JPA goes beyond handling simple object-to-table mappings. It also handles relations
among different entities. Consider the classic case of an “order” that is composed of some
order data and a collection of many “line-items”. With JPA one can define an Order class
one of whose members is a Collection (e.g. java.util.ArrayList) of LineItem objects and
provide supplementary meta-data that identify how orders and lineitems are related in the
database (typically, lineitems would have the order number as a foreign key). JPA will
allow an application to retrieve an Order object; when the application first attempts to
access the collection of line items, JPA will arrange to run the JDBC code needed to
retrieve these records as well.
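The Order/line-item shape described above can be written as plain Java; the comments note where JPA mapping metadata would attach (the annotation names mentioned there are typical assumptions about such a mapping, not code taken from this document).

```java
import java.util.ArrayList;
import java.util.List;

// The classic Order/line-item shape as plain Java. In a JPA mapping,
// metadata (annotations such as @Entity/@OneToMany, or an XML mapping
// file -- assumed names for a typical mapping) would tell the provider
// that each LineItem row carries the order number as a foreign key and
// that the collection may be loaded lazily on first access.
class LineItem {
    final String product;
    final int quantity;

    LineItem(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }
}

public class Order {
    final int orderNumber;                              // primary key of the order
    final List<LineItem> lineItems = new ArrayList<>(); // JPA would populate this lazily

    Order(int orderNumber) { this.orderNumber = orderNumber; }

    void addItem(LineItem item) { lineItems.add(item); }

    public static void main(String[] args) {
        Order order = new Order(1001);
        order.addItem(new LineItem("widget", 3));
        order.addItem(new LineItem("gadget", 1));
        System.out.println(order.lineItems.size()); // 2
    }
}
```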
JPA implementations typically give you a choice. You can use existing tables in some
database; the JPA implementation will automatically generate the basic definitions of the
entity classes that will be used when instantiating in-memory copies of persistent data.
(The developer will generally need to add other business methods to these generated
classes and provide effective definitions for methods such as toString()). Alternatively,
you can start by defining Java classes and the JPA system will create the database tables
from your class definitions.

4.3 Explain how Java EE supports the deployment of server-side
components implemented as web services and the advantages and
disadvantages of adopting such an approach.

Ref.
• Developing Web Services Using EJB 3.0
• Developing Web Services Using JAX-WS
• Web Services Technology -- Deployment Issues

Developing Web Services Using EJB 3.0


The specification Web Services for Java EE, JSR 109 defines two ways of implementing
a web service. One way is based on the Java class programming model -- the web service
is implemented by a Java class that runs in a web container. The other way is based on
the Enterprise JavaBeans (EJB) programming model -- the web service is implemented as
a stateless session bean that runs in an EJB container. A previous Tech Tip, Developing
Web Services Using JAX-WS, described how to develop a web service using the Java
class programming model and Java API for XML Web Services (JAX-WS) 2.0, JSR 224.
In the following tip, you'll learn how to develop a web service using JAX-WS and the
EJB programming model. In particular, you'll learn how to use an EJB 3.0 stateless
session bean to implement a Calculator service -- the same Calculator service that was
covered in the previous Tech Tip.
A sample package accompanies this tip. It demonstrates a standalone Java client
accessing the EJB 3.0-based Calculator service. The example uses an open source
reference implementation of Java EE 5 called GlassFish. You can download GlassFish
from the GlassFish Community Downloads page.
Writing the EJB 3.0 Stateless Session Bean Class
Let's start by creating the stateless session bean for the service. One of the significant
improvements in the Java EE 5 platform is a much simpler EJB programming model as
defined in the Enterprise JavaBeans 3.0 Specification, JSR-220. One of the
simplifications is that a bean implementation class is no longer required to implement the
javax.ejb.SessionBean or javax.ejb.EntityBean interface. You can declare a class a
session bean or entity bean simply by annotating it. For example, you can declare a class
a stateless session bean by annotating it with the @Stateless annotation.
EJB 3.0 does specify additional rules for the bean implementation class:
• It must be declared public and must have a default constructor that doesn't take
any arguments.
• It must not be final or abstract and must be a top-level class.
• It must not define a finalize() method.
Another simplification in EJB 3.0 is that a component interface or home interface is no
longer required for a session bean. The one interface a session bean needs is a business
interface that defines the business methods for the bean. Unlike business methods in a
component interface, the business methods in a business interface are not required to
throw java.rmi.RemoteException. However the business methods can define throws
clauses for arbitrary application exceptions. The EJB 3.0 business methods must not be
static or final.
Given these simplifications and rules, here is a stateless session bean for the Calculator
class that conforms to the EJB 3.0 programming model (you can find the source code for
the Calculator class in the endpoint directory of the installed sample package):

package endpoint;

import javax.ejb.Stateless;

@Stateless
public class Calculator {

    public Calculator() {}

    public int add(int i, int j) {
        int k = i + j;
        System.out.println(i + "+" + j + " = " + k);
        return k;
    }
}

Because the EJB 3.0 bean doesn't need to implement the javax.ejb.SessionBean interface,
it no longer needs to include unimplemented lifecycle methods such as ejbActivate and
ejbPassivate. This results in a much simpler and cleaner class. Various annotations
defined in EJB 3.0 reduce the burden on developers and deployers by reducing or
eliminating the need to write a deployment descriptor for the component.
Marking the EJB 3.0 Bean as a Web Service
To mark a bean as a web service, simply annotate the class with the @WebService
annotation. This is an annotation type defined in the javax.jws.WebService package, and
is specified in Web Services Metadata for the Java Platform, JSR 181. Here is the code
for Calculator class marked as a web service:
package endpoint;

import javax.ejb.Stateless;
import javax.jws.WebService;

@Stateless
@WebService
public class Calculator {

    public Calculator() {}

    public int add(int i, int j) {
        int k = i + j;
        System.out.println(i + "+" + j + " = " + k);
        return k;
    }
}

Marking a Java class with a @WebService annotation makes it a service implementation
class. Note that you don't have to implement a service endpoint interface. According to
the JSR 109, you only need to provide a javax.jws.WebService-annotated service
implementation bean. Deployment tools can then be used to generate a service endpoint
interface, as well as a WSDL document, using JAX-WS rules for Java WSDL mapping.
Packaging the Web Service
A web service based on the EJB programming model needs to be packaged as a JAR file.
Using the @WebService annotation, you only need to package the service
implementation bean class (with its dependent classes, if any) and the service endpoint
interface class (if explicitly provided). Additionally, the @Stateless annotation frees you
from packaging ejb-jar.xml. By comparison, in the JAX-RPC style of packaging web
services based on EJB 2.0 or earlier, you were responsible for providing the service
endpoint interface class, the service implementation bean class (and dependent classes),
generated portable artifacts, the JAX-RPC mapping file, and a web services deployment
descriptor (webservices.xml and ejb-jar.xml).
With JSR 224, JSR 109, JSR 181 and JSR 220, an application server deployment tool can
generate all the necessary artifacts such as a deployment descriptor (if not explicitly
provided by the user) for deploying the web service. These artifacts, bundled in the EJB
JAR file, are deployed in an EJB container. A deployer can choose to overwrite values
specified by the @WebService and @Stateless annotations by explicitly providing any of
the previously-mentioned artifacts and packaging them in the EJB module at the time of
deployment. For this tip, the following files are packaged in the EJB module to be
deployed:
endpoint/Calculator.class
endpoint/jaxws/Add.class
endpoint/jaxws/AddResponse.class
The rest of the deployment artifacts are generated by the application server (in this case,
GlassFish).
Writing the Client
After you deploy the web service, you can access it from a client program. A client uses a
@WebServiceRef annotation to declare a reference to an EJB 3.0-based web service. The
@WebServiceRef annotation is in the javax.xml.ws package, and is specified in JAX-WS
2.0, JSR 224. If you examine the source
code JAXWSClient, the client program used in this tip (you can find the source code for
JAXWSClient in the client directory of the installed sample package), you'll notice the
following:
@WebServiceRef(wsdlLocation=
"http://localhost:8080/CalculatorService/Calculator?WSDL")
static endpoint.CalculatorService service;
The value of the wsdlLocation parameter in @WebServiceRef is a URL that points to the
location of the WSDL file for the referenced service. (The @WebServiceRef annotation
supports additional properties that are optional. These optional properties are specified in
section 7.9 of the JAX-WS 2.0 specification.) The static variable named service will be
injected by the application client container.
Looking further at the source code for JAXWSClient, you'll notice the following line:
endpoint.Calculator port = service.getCalculatorPort();

The service object provides the method, getCalculatorPort, to access the Calculator port
of the web service. Note that both endpoint.CalculatorService and endpoint.Calculator
are portable artifacts that are generated by using the wsimport utility. The wsimport

utility is used to generate JAX-WS artifacts (it is invoked as part of the build-client step
when you run the example program.)
After you get the port, you can invoke a business method on it just as though you invoke
a Java method on an object. For example, the following line in JAXWSClient invokes the
add method in Calculator:
int ret = port.add(i, 10);

Running the Sample Code


A sample package accompanies this tip. It demonstrates the techniques covered in the tip.
To install and run the sample:



1. If you haven't already done so, download GlassFish from the GlassFish Community
Downloads page.
2. Set the following environment variables:

GLASSFISH_HOME: This should point to where you installed GlassFish (for


example C:\Sun\AppServer)
ANT_HOME: This should point to where ant is installed. Ant is included in the
GlassFish bundle that you downloaded. (In Windows, it's in the lib\ant
subdirectory.)
JAVA_HOME: This should point to the location of JDK 5.0 on your system.
Also, add the ant location to your PATH environment variable.
3. Download the sample package for the tip and extract its contents. You should now see the newly
extracted directory as <sample_install_dir>/ttmar2006ejb-ws, where <sample_install_dir> is the
directory in which you installed the sample package. For example, if you extracted the contents to C:\
on a Windows machine, then your newly created directory should be at C:\ttmar2006ejb-ws. The ejb-
techtip directory below ttmar2006ejb-ws contains the source files and other support file for the
sample.
4. Change to the ejb-techtip directory and edit the build.properties files as appropriate. For example, if
the admin host is remote, change the value of admin.host from the default (localhost) to the
appropriate remote host.
5. Start GlassFish by entering the following command:
<GF_install_dir>/bin/asadmin start-domain domain1
where <GF_install_dir> is the directory in which you installed GlassFish.
6. In the ejb-techtip directory, execute the following commands:
ant build
This creates a build directory, compiles the classes, and puts the compiled classes
in the build directory. It also creates an archive directory, creates a JAR file, and
puts the JAR file in the archive directory.
ant deploy
This deploys the JAR file on GlassFish.
ant build-client
This generates portable artifacts and compiles the client source code.
ant run
This runs the application client and invokes the add operation in the Calculator
service 10 times, adding 10 to the numbers 0-to-9. You should see the following
in response:
run:
[echo] Executing appclient with client class as
client.JAXWSClient
[exec] Retrieving port from the service
endpoint.CalculatorService@159780d
[exec] Invoking add operation on the calculator port
[exec] Adding : 0 + 10 = 10
[exec] Adding : 1 + 10 = 11
[exec] Adding : 2 + 10 = 12
[exec] Adding : 3 + 10 = 13
[exec] Adding : 4 + 10 = 14
[exec] Adding : 5 + 10 = 15
[exec] Adding : 6 + 10 = 16
[exec] Adding : 7 + 10 = 17
[exec] Adding : 8 + 10 = 18
[exec] Adding : 9 + 10 = 19

7. To undeploy the EJB Module from GlassFish, execute the following command:
ant undeploy

Developing Web Services Using JAX-WS


Java API for XML Web Services (JAX-WS) 2.0, JSR 224, is an important part of the
Java EE 5 platform. A follow-on to the Java API for XML-based RPC 1.1 (JAX-RPC),
JAX-WS simplifies the task of developing web services using Java technology. It
addresses some of the issues in JAX-RPC 1.1 by providing support for multiple protocols
such as SOAP 1.1, SOAP 1.2, XML, and by providing a facility for supporting additional
protocols along with HTTP. JAX-WS uses JAXB 2.0 for data binding and supports
customizations to control generated service endpoint interfaces. With its support for
annotations, JAX-WS simplifies web service development and reduces the size of
runtime Jar files.
A sample package accompanies this tip. It demonstrates a simple web service that is
accessed using JAX-WS 2.0 through a standalone Java client. The example is based on an
open source implementation of the Java EE 5 SDK called GlassFish. The sample package
includes the source code for the example, build scripts, and an ant build file.
Let's start by doing some initial setup. If you haven't already done so, download
GlassFish from the GlassFish Downloads page. (This tip was tested with Build 29 of
GlassFish.) Then set the following environment variables:
• GLASSFISH_HOME. This should point to where you installed GlassFish (for example C:\Sun\AppServer)
• ANT_HOME. This should point to where ant is installed. Ant is included in the GlassFish bundle that you
downloaded. (In Windows, it's in the lib\ant subdirectory.) You can also download Ant from the Apache
Ant Project page. The example requires Apache ant 1.6.5.
• JAVA_HOME. This should point to the location of JDK 5.0 on your system.

Also, add the ant location to your PATH environment variable.


Then download the sample package and extract its contents. The main directory for this
tip is jaxws-techtip.
Building the Web Service
With the initial setup done, it's time to build a web service. In this example, the web
service is developed from a Java class. To build the web service:
• Write an endpoint implementation class.
• Compile the endpoint implementation class.
• Optionally generate portable artifacts required for web service execution.
• Package the web service as a WAR file and deploy it.
Write an endpoint implementation class
If you navigate down through the <Sample_install_dir>\jaxws-techtip directory structure,
you'll find an endpoint directory. In that directory, you'll see a class named Calculator.
The class is an endpoint implementation of a simple service that adds two integers:
package endpoint;

import javax.jws.WebService;
import javax.jws.WebMethod;

@WebService(
name="Calculator",
serviceName="CalculatorService",
targetNamespace="http://techtip.com/jaxws/sample"
)
public class Calculator {
public Calculator() {}

@WebMethod(operationName="add", action="urn:Add")
public int add(int i, int j) {
int k = i + j;
System.out.println(i + " + " + j + " = " + k);

return k;
}
}

JAX-WS 2.0 relies heavily on the use of annotations as specified in A Metadata Facility
for the Java Programming Language (JSR 175) and Web Services Metadata for the Java
Platform (JSR 181), as well as additional annotations defined by the JAX-WS 2.0
specification.
Notice the two annotations in the Calculator class: @WebService and @WebMethod. A
valid endpoint implementation class must include a @WebService annotation. The
annotation marks the class as a web service. The name property value in the
@WebService annotation identifies a Web Service Description Language (WSDL)
portType (in this case, "Calculator"). The serviceName ("CalculatorService") is a WSDL
service. The targetNamespace property specifies the XML namespace used for the WSDL. All the
properties are optional. For details on default values of these properties, see section 4.1 of
the specification Web Services Metadata for the Java Platform, JSR 181.
The @WebMethod annotation exposes a method as a web service method. The
operationName property value in the annotation of the Calculator class identifies a

WSDL operation (in this case, add), and the action property value ("urn:Add") specifies
an XML namespace for the WSDL and some of the elements generated from this web
service operation. Both properties are optional. If you don't specify them, the WSDL
operation value defaults to the method name, and the action value defaults to the
targetNamespace of the service.
Compile the implementation class
After you code the implementation class, you need to compile it. Start GlassFish by
entering the following command:
<GF_install_dir>\bin\asadmin start-domain domain1
where <GF_install_dir> is the directory where you installed GlassFish. Then navigate to
the jaxws-techtip folder in the <Sample_install_dir>\jaxws directory structure, and
execute the following command.
ant compile
Executing the command is equivalent to executing the following javac command (on one
line):
javac -classpath $GLASSFISH_HOME/lib/javaee.jar -d
./build/classes/service/ endpoint/Calculator.java
Generate portable artifacts for web service execution
This step is optional. GlassFish's deploy tool automatically generates these artifacts if
they are not bundled with a deployable service unit during deployment of the web
service. However if you want to generate these artifacts manually, execute the following
command:
ant generate-runtime-artifacts
This creates a build/generated directory below jaxws-techtip, and executes the following
wsgen command (on one line):
$GLASSFISH_HOME/bin/wsgen -cp ./build/classes/service
-keep -d ./build/classes/service
-r ./build/generated -wsdl endpoint.Calculator
A WSDL file (CalculatorService.wsdl) is generated in the build/generated directory,
along with a schema file (CalculatorService_schema1.xsd), which defines the schema for
CalculatorService.wsdl.
JavaBean technology components (JavaBeans) aid in marshaling method invocations,
responses, and service-specific exceptions. These classes are used during execution of the
web service on an application server. The JavaBean classes are generated in the
/build/classes/service/endpoint/jaxws directory below jaxws-techtip. The classes are:
Add.java
Add.class
AddResponse.java
AddResponse.class
Package and deploy the WAR file
Next you need to package and deploy the service. To do that, you need to specify details
about the service in deployment descriptors. Web services can be bundled as a servlet or
as a stateless session bean. Web services bundled as servlets are packaged in Web
Archive (WAR) files. In this tip, the service is bundled as a servlet.
To package the service as a WAR file, navigate to the jaxws-techtip folder and execute
the following command:
ant pkg-war
For the structure of the war file, you can take a look at the pkg-war target in the build.xml
file.
You can deploy the generated war file by executing the following command:
ant deploy-app
This is equivalent to executing the following asadmin deploy command (on one line):

$GLASSFISH_HOME/bin/asadmin deploy --user admin
--passwordfile passwd --host localhost --port 4848
--contextroot jaxws-webservice --upload=true --target server
Building the Client
After you deploy the web service, you can access it from a client program. Here are the
steps to follow to build the client:
1. Write the client.
2. Generate portable artifacts required to compile the client.
3. Compile the client.
4. Run the client.

Write the Client


The following program, JAXWSClient, is a standalone client program provided with the
sample code for this tip. It invokes an add operation on the deployed service ten times,
adding 10 to numbers from 0-to-9.
package client;
import javax.xml.ws.WebServiceRef;
import com.techtip.jaxws.sample.CalculatorService;
import com.techtip.jaxws.sample.Calculator;

public class JAXWSClient {


@WebServiceRef(wsdlLocation=
"http://localhost:8080/jaxws-webservice/CalculatorService?WSDL")

static CalculatorService service;

public static void main(String[] args) {


try {
JAXWSClient client = new JAXWSClient();
client.doTest(args);
} catch(Exception e) {
e.printStackTrace();
}
}

public void doTest(String[] args) {

try {
System.out.println(
" Retrieving port from the service " + service);
Calculator port = service.getCalculatorPort();
System.out.println(
" Invoking add operation on the calculator port");
for (int i = 0; i < 10; i++) {
int ret = port.add(i, 10);
if(ret != (i + 10)) {
System.out.println("Unexpected greeting " + ret);
return;
}
System.out.println(
" Adding : " + i + " + 10 = " + ret);
}
} catch(Exception e) {
e.printStackTrace();
}
}
}

The @WebServiceRef annotation in JAXWSClient is used to declare a reference to a
web service. The value of the wsdlLocation parameter in @WebServiceRef is a URL
pointing to the location of the WSDL file for the service being referenced. The
@WebServiceRef annotation supports additional optional properties, as specified in
section 7.9 of JSR 224. The static variable named service will be injected by the
application client container.
Notice the import statements in JAXWSClient:
com.techtip.jaxws.sample.CalculatorService and com.techtip.jaxws.sample.Calculator.
These imports are for portable artifacts that will be generated in the next step.
CalculatorService is the portable artifact for the service implementation. Calculator is the
Java interface for the service endpoint generated from the WSDL identified by the
wsdlLocation property of @WebServiceRef.
The client retrieves the endpoint Calculator from the CalculatorService through the
method getWebServiceRefNamePort, where WebServiceRefName is the name property
of @WebServiceRef, or the value of the WSDL port in the generated WSDL file. After it
retrieves the endpoint, the client invokes the add operation the required ten times.
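The retrieval-and-invoke pattern described above can be sketched without a container or any generated artifacts by standing in a plain local implementation for the port. The Calculator interface and stub below are simplified, hypothetical stand-ins for illustration, not the generated classes:

```java
// Simplified, hypothetical stand-in for the generated Calculator port type.
interface Calculator {
    int add(int i, int j);
}

public class ClientLoopSketch {
    public static void main(String[] args) {
        // A local implementation plays the role of the remote web service port.
        Calculator port = new Calculator() {
            public int add(int i, int j) {
                return i + j;
            }
        };
        // Same loop as JAXWSClient: invoke add ten times, adding 10 to 0..9.
        for (int i = 0; i < 10; i++) {
            int ret = port.add(i, 10);
            if (ret != (i + 10)) {
                System.out.println("Unexpected result " + ret);
                return;
            }
            System.out.println("Adding : " + i + " + 10 = " + ret);
        }
    }
}
```

Against the real service, only the source of port changes: it comes from service.getCalculatorPort() rather than a local class.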
Generate portable artifacts for the client
As mentioned earlier, CalculatorService and Calculator are portable artifacts. To generate
all the portable artifacts for the client, navigate to the jaxws-techtip folder and issue the
following command:
ant generate-client-artifacts
This is equivalent to executing the following wsimport command (on one line):
$GLASSFISH_HOME/bin/wsimport -keep -d ./build/classes/client
http://localhost:8080/jaxws-webservice/CalculatorService?WSDL
This generates the artifacts in the build/classes/client/com/techtip/jaxws/sample directory

under jaxws-techtip. The artifacts are:
Add.java
Add.class
AddResponse.java
AddResponse.class
Calculator.java
Calculator.class
CalculatorService.java
CalculatorService.class
package-info.java
package-info.class
ObjectFactory.class
ObjectFactory.java

Compile the client
The next step is to compile the client classes. You can do that by entering the following
command:
ant compile-client
The ant compile task compiles client/JAXWSClient and writes the class file to the
build/classes/client subdirectory. It is equivalent to running the following command (on
one line):
javac -d ./build/classes/client
-classpath $GLASSFISH_HOME/lib/javaee.jar:
$GLASSFISH_HOME/lib/appserv-ws.jar:
./build/classes/client client/JAXWSClient.java
Run the client
To see how the sample works, execute the following command:
ant runtest-jaxws
which is equivalent to the following command executed from the build/classes/client
folder:
$GLASSFISH_HOME/bin/appclient -mainclass client.JAXWSClient
You should see output similar to the following:
runtest-jaxws:
[echo] Executing appclient with client class as
client.JAXWSClient
[exec] Retrieving port from the service
com.techtip.jaxws.sample.CalculatorService@162522b
[exec] Invoking add operation on the calculator port
[exec] Adding : 0 + 10 = 10
[exec] Adding : 1 + 10 = 11
[exec] Adding : 2 + 10 = 12
[exec] Adding : 3 + 10 = 13
[exec] Adding : 4 + 10 = 14
[exec] Adding : 5 + 10 = 15
[exec] Adding : 6 + 10 = 16
[exec] Adding : 7 + 10 = 17
[exec] Adding : 8 + 10 = 18
[exec] Adding : 9 + 10 = 19

all:
BUILD SUCCESSFUL
Total time: 6 seconds

Advantages of these approaches:

 How the Web service has been implemented is transparent to the Web service
client. A client does not know if the Web service has been deployed in a J2EE or
non-J2EE environment.
 Leverage existing J2EE technology.
 Existing J2EE components can be exposed as Web services.

Disadvantages of these approaches:

 The performance of Web Services under various business scenarios is not known.
 The effect of Web Services deployment on network bandwidth is uncertain.
 Asynchronous Web Services cannot be built as of now.
 It is unclear how to ensure that all Web Services, for both producers and consumers,
meet the service levels defined for them in the target production environment.
 It is unclear how to test the scalability of Web Services, especially externally facing
Web Services where usage loads may be unpredictable.
 It is unclear how to design Web Services to ensure transactional integrity.

4.4 Explain the benefits of the EJB 3 development model over previous EJB
generations for ease of development, including how the EJB container
simplifies EJB development.

Ref. • Simpler Programming Model
• Ease of Development

Click on and read the articles referenced above.


5. Web Tier Technologies (15 pages)

1. State the benefits and drawbacks of adopting a web framework in designing a
Java EE application.

2. Explain standard uses for JSP pages and servlets in a typical Java EE
application.

3. Explain standard uses for JavaServer Faces components in a typical Java EE
application.

4. Given a system requirements definition, explain and justify your rationale for
choosing a web-centric or EJB-centric implementation to solve the requirements.
Web-centric means that you are providing a solution that does not use EJB
components. An EJB-centric solution will require an application server that
supports EJB components.

5.1 State the benefits and drawbacks of adopting a web framework in
designing a Java EE application

Ref. • Comparison of web application frameworks
• What Web Application framework should you use?
• [Frameworks Driving Innovation]

A web application framework is a software framework that is designed to support the
development of dynamic websites, Web applications and Web services. The framework
aims to alleviate the overhead associated with common activities performed in Web
development. For example, many frameworks provide libraries for database access,
templating frameworks and session management, and often promote code reuse.

Some examples of web frameworks used in designing a Java EE application are
Apache Cocoon, Apache Struts, Google Web Toolkit, Ajax, JavaServer Faces and
Spring.

Challenges in the J2EE Web Tier and How Frameworks Drive Innovation

Over the course of its life, the J2EE Web Tier has faced many challenges in easing Web
application development. While it's a scalable, enterprise-ready platform, it isn't exactly
developer-friendly. Particular challenges to Web developers include the need for a
standard Web framework, compatible expression languages, and availability of
components. Several Web frameworks have been developed to resolve these issues, each
with its own strengths and weaknesses. This article discusses the unique challenges of the
J2EE Web Tier and how various technologies have attempted to resolve them. By learning
from and competing with each other, these Web technologies play an important role in
pushing the limits of excellence to produce ever-higher standards of Web application
development.
Problem: Too Many Frameworks for J2EE
A plethora of frameworks is available for building Java-based Web applications. Most of
the Web frameworks in Java result from the difficulty in using servlets and JSPs, which
are part of the J2EE standard. While servlets and JSPs serve as the underlying APIs for
most frameworks, they don't have any built-in features to ease development. Using plain
servlets and JSPs is cumbersome, and you end up writing a lot of plumbing code, when
you should be writing application code. By using a Web framework, you can concentrate
on coding your application, rather than the architecture that makes it work.
Over 50 Java Web frameworks exist to make the developer's life easier. It begs the
question, "Why not have a standard Web framework as part of J2EE?"
Solution: A Standard Framework Called JavaServer Faces
At the JavaOne Conference in 2002, Sun Microsystems announced the development of a
standard Web framework to include as part of the J2EE bundle. Its name was JavaServer
Faces (JSF) and it was designed to put a pretty face on developing Web applications in a
J2EE environment.
JSF was developed to be tool-friendly, so IDE vendors could create WYSIWYG
environments where developers could drag-and-drop their applications into a usable
system. Microsoft has always had good tools for its technologies, and Sun wanted to fuel
innovation in the Java IDE space to produce tools similar to Visual Studio .NET.
JSF's architecture is similar to ASP.NET's, which is more page-centric than
controller-centric. Controller-centric frameworks route requests through a
front-controller servlet that dispatches them to smaller "workers". In Spring MVC,
these workers are controllers, while in Struts and WebWork they're actions. A worker
will typically put things in the request for the view to render. On the other hand, JSF is
page-centric. This means that users will typically navigate to a template page in the
application rather than go through a worker servlet. JSF pages are backed by classes that
contain data and actions that the UI can invoke. These actions are also known as listeners
and contain logic that ties the class to a backend system.
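The controller-centric dispatch described above can be sketched in plain Java, independent of the Servlet API. All class and path names here are hypothetical, purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of front-controller dispatch: a single entry point
// routes each request path to a smaller "worker" (a controller or action).
public class FrontControllerSketch {
    interface Worker {
        String handle(String request);  // returns a view name to render
    }

    private final Map<String, Worker> workers = new HashMap<String, Worker>();

    public void register(String path, Worker worker) {
        workers.put(path, worker);
    }

    // The front controller: pick the worker for the path and delegate.
    public String dispatch(String path, String request) {
        Worker worker = workers.get(path);
        if (worker == null) {
            return "error404";
        }
        return worker.handle(request);
    }

    public static void main(String[] args) {
        FrontControllerSketch fc = new FrontControllerSketch();
        fc.register("/user/list", new Worker() {
            public String handle(String request) { return "userListView"; }
        });
        System.out.println(fc.dispatch("/user/list", ""));  // userListView
        System.out.println(fc.dispatch("/missing", ""));    // error404
    }
}
```

In a page-centric framework like JSF there is no such visible routing table; requests go straight to a template page backed by a class holding data and actions.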
Early Challenges of JSF
The development of JSF went through some challenging times. Sun announced it in June
2002, but didn't release its initial 1.0 version until March 2004. During that time, many
other Java Web frameworks cropped up and JSF got some bad press. The spec changed
significantly between iterations, and it often lacked backward compatibility. Authors and
publishers produced books that were out-of-date before they were released. Developers
who tried to work with JSF often found that the functionality didn't work according to the
documentation. Furthermore, since Sun architected JSF with tools vendors in mind, some
developers (who were used to coding their JSPs by hand) became frustrated with JSF's
verboseness.
While JSF was being developed, so were JSP 2.0 and its Expression Language (EL).
JSP's EL made retrieving values and resolving variables easier. Rather than using
<jsp:useBean> and <jsp:getProperty>, a developer could simply use ${bean.property} to
retrieve a property from a JavaBean. Developers could also use the new EL with the Java
Standard Tag Library to implement iterations, calculations, internationalization (i18n),
and number/date formatting.
Ideally, JSF would have used the JSP 2.0 EL syntax to resolve variables and expressions.
However, JSF's Expression Language didn't have some of the concepts that existed in
JSP's EL. First, it didn't have a page scope like the EL did. Second, JSF Expressions were
often formulas (the left side of the equation) rather than solutions (the right side). For
example, in JSP EL, the following expression means "call getAction() on the userForm
bean in any scope."
${userForm.action}
While the same expression in JSF's EL can mean the same thing, it can also be an
invocation of a method such as "call the action method of the userForm bean."
#{userForm.action}
To complicate things further, developers were demanding a workable version of JSF. The
JSF Expert Group didn't want to wait for JSP 2.0 and J2EE 1.4 to be finished, so they
created their own expression language, hereafter referred to as JSF EL.
Problem: JSF EL vs JSP EL
For the most part, the JSF EL hasn't affected other Web frameworks because it's JSF-
specific; however, the lack of compatibility between the two expression languages (JSTL
and JSF) has been a point of developer contention and frustration. JSTL has been one of
the most widely accepted and praised additions to the Servlet API, but it didn't work with
JSF as many expected.
The JSP EL has also produced some problems for framework developers who support
JSP as a view choice. Mainly, some framework developers can no longer use the ${...}
syntax as a placeholder to indicate variables. Since these placeholders are reserved for
JSP 2.0, if the framework resolves variables outside of the standard scopes (page, request,
session, application), variable resolution will simply fail. Frameworks like WebWork and
Tapestry use OGNL as their expression language, and they resolve variables according to
a ValueStack and component hierarchy, respectively.
Note: OGNL stands for Object-Graph Navigation Language. It is an expression language
for getting and setting the properties of Java objects. You use the same expression for
both getting and setting the value of a property. OGNL is more powerful than the
JSP/JSTL Expression Language. Not only can it get and set values, it can invoke
methods. Furthermore, OGNL expressions can contain almost any Java code.
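The core idea these expression languages share, mapping a name like bean.property to a getter call, can be sketched with plain reflection. This is a deliberately minimal, hypothetical resolver; real EL and OGNL implementations also handle scopes, nested paths, indexing, method invocation, and type coercion:

```java
import java.lang.reflect.Method;

// Hypothetical minimal resolver: maps "property" to a getProperty() call,
// roughly what an EL-style language does for a simple value expression.
public class ElSketch {
    public static Object getProperty(Object bean, String name) {
        try {
            String getter = "get"
                + Character.toUpperCase(name.charAt(0))
                + name.substring(1);
            Method m = bean.getClass().getMethod(getter);
            return m.invoke(bean);
        } catch (Exception e) {
            throw new RuntimeException("Cannot resolve '" + name + "'", e);
        }
    }

    public static class UserForm {
        public String getAction() { return "save"; }
    }

    public static void main(String[] args) {
        // Roughly what ${userForm.action} evaluates to as a value expression.
        System.out.println(getProperty(new UserForm(), "action"));  // prints save
    }
}
```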

Solution: A Unified Expression Language
To solve the disconnect between JSF EL and JSP EL, the JSP 2.1 and JSF 1.2
specifications have created a Unified Expression Language. If you're using a JSF 1.2
implementation, you can use JSP expressions in your JSF applications. It also adds a
variable resolver to JSP. This means that frameworks like WebWork can control what
${...} means and tell it to talk to its ValueStack instead of the standard scopes. This is
important for many framework developers because they don't want to invent their own
syntax for resolving expressions. The new JSP and JSF versions are part of J2EE 5.0 and
will be required by any J2EE 5.0-compliant containers.
Problem: JSP Unfriendly to Component-Based Frameworks
JSP is the primary view choice for JSF apps, but it's clunky at best. Most frameworks that
use JSP simply render values as they encounter them when loading a page. Relationships
between portions of a page will only occur at the HTML level, rather than on the server-side.
JSF has a different model - it builds a component tree when a page loads. JSP adds page
components to the tree in the order they appear on the page. This makes it difficult to
create relationships between components, such as labels and input fields. To work around
this, you can wrap your forms with an <h:panelGrid> tag, but then you have to remove
any HTML from your form (or wrap it with an <f:verbatim> tag). You end up with a JSP
page that doesn't have a single line of HTML in it. In other words, you're back to the pre-
JSP days with JSF, where Java code, rather than HTML authors produce the entire
HTML.
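The contrast between flat, in-order rendering and a server-side component tree can be sketched abstractly. All classes here are hypothetical, purely to illustrate why grouping matters; this is not JSF's actual UIComponent API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a server-side component tree: each component
// keeps its children in the order the view technology encountered them,
// so grouping (e.g. a panel grid) creates server-side relationships.
public class ComponentTreeSketch {
    public static class Component {
        public final String id;
        public final List<Component> children = new ArrayList<Component>();
        public Component(String id) { this.id = id; }
        public Component add(Component child) { children.add(child); return this; }
        public String render() {
            StringBuilder sb = new StringBuilder(id);
            if (!children.isEmpty()) {
                sb.append('[');
                for (Component c : children) sb.append(c.render()).append(' ');
                sb.setCharAt(sb.length() - 1, ']');
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) {
        // A panel grid groups a label with its input field on the server side.
        Component form = new Component("form")
            .add(new Component("panelGrid")
                .add(new Component("label"))
                .add(new Component("input")));
        System.out.println(form.render());  // prints form[panelGrid[label input]]
    }
}
```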
Solution: HTML Templates
The good news is that JSP isn't the required view technology for JSF. It was simply
chosen because of the proliferation (i.e. rapid growth) of JSP-based apps and because so
many developers were familiar with it. JSF 1.2 lets JSF use HTML Templates like
Tapestry does. These templates will either reuse existing HTML attributes (such as id), or
add an additional one (such as jwcid). Hans Bergsten provides an example of this in his
"Improving JSF by Dumping JSP" article. An Open Source project called Facelets
(http://facelets.dev.java.net) also implements such a solution.
JSF doesn't just render HTML; that's just one of the possible render kits. In theory, it's
possible to create render kits for all kinds of view options: XUL, J2ME, or even Laszlo.
The Future of J2EE's Web Tier
The future of J2EE's Web Tier depends largely on JSF - at least from a standards
perspective. However, many popular Open Source alternative Web frameworks are
available, such as Spring MVC, Struts, Tapestry, and WebWork. These frameworks have
already helped shape the future of J2EE's Web Tier. JSF incorporates many of the ideas
and features of Struts. The managed properties of a JSF Managed Bean are very much
like Spring's Dependency Injection. A JSF Managed Bean looks very similar to a
WebWork Action. JSF mimics many of the concepts in Tapestry to enhance its ease of
use, such as a rich set of components and HTML templating. Furthermore, all of these
frameworks are part of J2EE in a sense, since they build on top of J2EE's Servlet API.
Without J2EE, these frameworks probably wouldn't exist in their current form. Likewise,
J2EE wouldn't be where it is today without these frameworks and the innovation and
competition they've provided.
Having choice among frameworks is a good thing because it forces framework

developers to innovate and compete for users. Not only do the different frameworks show
different ways of doing things, but they're also borrowing from and competing with one
another to become better. Competition breeds (causes) innovation. J2EE stands to benefit
from this innovation because the good ideas can be added to JSF and the bad ones can be
removed. The Java Community Process and Expert Groups for J2EE and JSF are very
open to the community, so the community will contribute and help improve it, much like
they have with the many Open Source alternatives.
All of the frameworks mentioned do the same thing, just in different ways. JSF and
Tapestry are making developing Java Web applications easier due to smarter defaults and
better plumbing. They don't require the developer to know much about the Servlet API,
and they handle most input types transparently - without using custom converters.

Furthermore, their concept of components lets developers create reusable components
that they can use with only a few lines of code.
The future of J2EE's Web Tier isn't one of self-determination. Many other factors will
play into the success of a standardized Web framework. J2EE and its developers can
ensure its continued success on the Web by being willing to adapt. As a community, we
need to continue to recognize good technologies and ideas from experienced developers
and Open Source projects. We need to be willing to support remote scripting with Ajax,
and we need to adapt to produce rich-client experiences, like those that Laszlo offers with
the Open Source Flash-rendering system. By accepting and enhancing these technologies,
we can continue to use J2EE and its APIs to make our lives easier. By developing more
efficiently with the best technologies available in Java, we can continue to produce
satisfied customers.
Well, what Java web framework should I use?

Well, that is an interesting question. I think that in a meritocracy, the best framework would
win. What makes a framework the best?
The ease of development and maintenance for the far-sighted crowd costs less and therefore
is most beneficial. For each organization, that merit may be found in any one of the
frameworks mentioned, due to the ease of development due to elegance of design, ample
documentation, easy development semantics, or sheer knowledge of the platform.
Familiarity is a strong argument for ease, and there are a lot of Java devs and architects out
there that will find that the "best" framework is the one they know the best.
Which framework has the most merit? It depends on you, your knowledge, your organization,
and most importantly, your customer's needs. For me and my current projects, I'd probably
go with Struts. But, unlike the techno-vangelists out there, I don't think that everyone has to
convert to my favorite web app framework.
Hefe | June 19, 2006 02:53 PM

Just thought I'd weigh in with a few words with RSF, since this "frameworks" issue is
perpetually interesting.
RSF takes the view that "every framework is an insult to its users", and therefore tries to be
the smallest possible insult. I wrote it primarily because I got extremely fed up with
frameworks perpetually getting in the way of my writing code, and found that Spring was the
first framework that had the conceptual coherence to get out of the way fast enough.
That said, I found there were a few genuinely new and productive ideas in JSF that had been
missed by other frameworks - in particular I find Spring MVC problematic since although it is
sensible so far as it goes, it is the way it is because it is designed to cater to the "lowest

common denominator" of webapp frameworks, which is currently very low. RSF is what a
Spring webapp framework would be, if one abandoned the Spring credo of "working with what
is there already" and started from the ground up.
Given the current conversations about Rails and ORM solutions, I think RSF's approach to ORM
(nicknamed "OTP") is very interesting - you might think of it as a little analogous to Rails since
it aims to make access to storage idiomatic and transparent, but goes further in that it is not an
"implementation" and only an "idiom" - hence it can be layered on top of whatever ORM you
happen to like using at the moment. What most people like using at the moment in the Java
world is Hibernate, so RSF ORM enables you to use Hibernate ORM without seeing any
Hibernate dependence in your code.
Similarly RSF enables you to write a portlet without seeing any portlet code, or indeed an "X"
without any "X" code in general - for want of a better label this is the "Spring" philosophy of
"invisible frameworks", but RSF takes traditional IoC further by allowing request-scope IoC
using a lightweight Spring clone, RSAC to "clean the areas Spring cannot reach". I think
request-scope IoC is one of the great ideas in (web) programming that is simply waiting for
enough people to have experience of it - note that request scoping has been slipped into the
upcoming Spring 2.0 release (as of RC4) but the implementation will probably be too slow to
be practical.
Much like Spring itself, request-scope IoC is one of those things that you just have to take the
plunge on in order to appreciate the kinds of problems that it solves, but it is key to RSF's
easy and transparent portability and integration (Hibernate, Cocoon, JSR-168, Sakai). I'm
also planning a Spring MVC integration in the next couple of weeks, but largely for
"philosophical" purposes...
In terms of the "next years" I think 3 years is far too short a time for this mess to be sorted
out, but I hope (actually I'm sure) that no such heavyweight JSF-based or JSF-alike (read
Seam, Shale &c) will gather hearts & minds because anything based on the heavy clay of JSF
is just going to crumble (fall to pieces).
In terms of Rails itself, I think it is a fundamentally wrongly-arranged solution to its problem
(loosely-speaking, that of ORM) since it tips a design on its head to be based on its storage
(read, schema), rather than simply "allowing" storage to be one of the functional possibilities.
This TSS article I think makes the key point by asking the question (in Rails) "What fields does
this object have? You have to look at the database schema". This is the wrong answer to such
a question.
Just a final word re AJAX - since RSF features pure (that is REALLY pure, not just "generally
parsing pure" as in Tapestry) HTML templating, any existing AJAX/HTML component, with just
the lightest packaging, is automatically an RSF component. It's this that I see key to RSF's
adoption in the long-term, since peddlers of Java frameworks (especially Sun) fail to see that
Java frameworks as a whole, let alone their particular framework, are *always* going to be a
very small part of the complete web programming community, and so the "interoperability"
promised by swapping "components" for their framework amongst their developers and users
is always going to be proportionally tiny. Java is a reasonably good language, but developers
have to appreciate that they will *always* be inhabiting a world with other sorts of
technology, and there's no arena that more regularly rubs one's nose in this than web
programming. If you follow my RSF thread on TSS I explain there why Wicket and other
"heavy component" frameworks are improper inhabitants of this heterogeneous community.
As for Stripes, it is based on JSPs. 'Nuff said.
Antranig Basman | June 20, 2006 07:24 AM

Excellent discussion!
Thank you, Tim O'B, Tim F, Antranig, et al, for a very lively and informative discussion. I've
been surveying all frameworks, all languages, for the past few months, and this is one of the
best discussion/overviews I've seen. I'm about to begin building a new n-faced app (web +
mobile), and I want to be sure I've looked high and low for the "best" framework/language
combination before I start. I'll most likely have to live with any messes I make for the next
few years, so really need to avoid any serious mistakes. Anything I work on will be built using
typically-chaotic XP methods, so the base platform has to promote code that is maintainable,
reliable, reusable, very flexible. My clients generally don't care what technologies I use, as
long as it does what they want, is rock-solid stable, and it's easy/fast to make changes.
I've been a Java developer for 10 years now, so Java is "home" these days, just as C++ and C
were "home" in the years before that. As a professional developer (and consultant) I have an
obligation (to myself and to my clients) to remain open-minded, pondering whether it's time
to abandon Java. If I stagnate, then I deserve the professional death that will surely follow.
(Hopefully I'll get rich soon, and I can stop holding onto this tiger's tail!)
There are certainly some viable alternative languages (Python, Ruby, PHP, Groovy, to name a
few) and they each have fairly good platforms (e.g., Zope/Plone, Rails, PHP-Nuke), so they
deserve to be examined carefully. Each has its strengths and weaknesses, and the same can
be said for every Java-based solution I've seen or worked with. My job is to stay informed
about the best-of-breed in every class of solution, and choose the right tools/methods when
starting on a new project.
My current thinking is that I'll go with a hybrid solution that includes a little of everything. For
the system plumbing it's pretty hard to beat JBoss. The built-in features provide a solid,
feature-rich framework that handles scaling from laptop-dev, to simple two-tier, to small
clusters, to massive multi-DC farms. I could go with other app-server platforms, of course, but
a JBoss backbone is reasonably mature, at least as stable as its commercial competitors, and
the price/TCO are pretty hard to beat. That takes care of the plumbing, persistence (Hibernate
or EJB3), and gives me a good place to hang my business logic.
The real fun starts when I look at the presentation tier, which is what most of this discussion
has been about. Given that I'll be building on a Java foundation, the most obvious choice
would be a Java web framework (e.g., Struts, SpringMVC, Stripes, etc.). Alternatively, the
Java-based scripting environments like Jython and Groovy (maybe BSF or JRuby), along with
their attendant frameworks, are definitely worth considering carefully. That would give me
easy integration with the back-end, as well as the dev-time flexibility of a scripting
environment. (The lack of deep integration with the Java plumbing make it pretty hard to
justify PHP or Ruby.)
For page-generation, the reality is that I want it all - ease-of-use, power, flexibility, and a rich
component set. I guess what I want is a flexible portal-like framework with:
easy page/sub-page layout/skinning/themes
a minimum of arcane markup
AJAX-style user interaction and display refresh
very loose inter-component coupling (but enough that it can be done!)
quick-turn component development with little or no framework dependency
enough maturity that it's not brittle (I'm a rambunctious guy)
FYI - I've looked at most of the Java-based portal frameworks out there (Jetspeed2, LifeRay,
JBoss Portal, etc.), and they don't pass the last test yet. There are other webapp frameworks,
obviously, but they tend to lack the flexibility I'm looking for.
A lot to ask for, I know, but it's what I want. Anyone have any opinions or suggestions?
:::grabbing umbrella:::
Ken Scott | June 27, 2006 02:37 PM

5.2 Explain standard uses for JSP pages and servlets in a typical Java EE
application.

Ref. [JEE_5_TUTORIAL] ch. 4 & 5
JavaServer Pages Technology
JavaServer Pages (JSP) technology allows you to easily create web content that has
both static and dynamic components. JSP technology makes available all the dynamic
capabilities of Java Servlet technology but provides a more natural approach to creating
static content. The main features of JSP technology are as follows:
• A language for developing JSP pages, which are text-based documents that
describe how to process a request and construct a response
• An expression language for accessing server-side objects
• Mechanisms for defining extensions to the JSP language


JSP technology also contains an API that is used by developers of web containers.

What Is a JSP Page?


A JSP page is a text document that contains two types of text: static data, which can be
expressed in any text-based format (such as HTML, SVG, WML, and XML), and JSP
elements, which construct dynamic content.
The recommended file extension for the source file of a JSP page is .jsp. The page can
be composed of a top file that includes other files that contain either a complete JSP
page or a fragment of a JSP page. The recommended extension for the source file of a
fragment of a JSP page is .jspf.
The JSP elements in a JSP page can be expressed in two syntaxes, standard and XML,
though any given file can use only one syntax. A JSP page in XML syntax is an XML
document and can be manipulated by tools and APIs for XML documents.

Java Servlet Technology


As soon as the web began to be used for delivering services, service providers
recognized the need for dynamic content. Applets, one of the earliest attempts toward
this goal, focused on using the client platform to deliver dynamic user experiences. At
the same time, developers also investigated using the server platform for this purpose.
Initially, Common Gateway Interface (CGI) scripts were the main technology used to
generate dynamic content. Although widely used, CGI scripting technology has a
number of shortcomings, including platform dependence and lack of scalability. To
address these limitations, Java Servlet technology was created as a portable way to
provide dynamic, user-oriented content.

What Is a Servlet?
A servlet is a Java programming language class that is used to extend the capabilities of
servers that host applications accessed by means of a request-response programming
model. Although servlets can respond to any type of request, they are commonly used to
extend the applications hosted by web servers. For such applications, Java Servlet
technology defines HTTP-specific servlet classes.
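The request-response model described above can be sketched in plain Java. Note that the Request, Response, and GreetingServlet types below are simplified stand-ins invented for illustration only; they are not the real javax.servlet API (HttpServlet, HttpServletRequest, HttpServletResponse), which a servlet container supplies at runtime.

```java
import java.util.Map;

// Simplified sketch of the servlet request-response model.
// These types are illustrative stand-ins, NOT the javax.servlet API.
public class ServletSketch {

    record Request(Map<String, String> params) {}

    static class Response {
        private final StringBuilder body = new StringBuilder();
        void write(String s) { body.append(s); }
        String body() { return body.toString(); }
    }

    // Analogous to overriding HttpServlet.doGet(request, response):
    // the server hands the servlet a parsed request and a response to fill in.
    static class GreetingServlet {
        void doGet(Request req, Response resp) {
            String name = req.params().getOrDefault("name", "world");
            resp.write("<html><body>Hello, " + name + "!</body></html>");
        }
    }

    public static void main(String[] args) {
        Response resp = new Response();
        new GreetingServlet().doGet(new Request(Map.of("name", "Java EE")), resp);
        // prints <html><body>Hello, Java EE!</body></html>
        System.out.println(resp.body());
    }
}
```

In a real container, the doGet method would be invoked by the server for each HTTP GET request, with the container handling socket I/O, parsing, and threading.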

5.3 Explain standard uses for JavaServer Faces components in a typical Java
EE application.


Ref. [JEE_5_TUTORIAL] ch. 10

JavaServer Faces Technology
JavaServer Faces technology is a server-side user interface component framework for
Java technology-based web applications.
The main components of JavaServer Faces technology are as follows:
• An API for representing UI components and managing their state; handling
events, server-side validation, and data conversion; defining page navigation;
supporting internationalization and accessibility; and providing extensibility for all
these features
• Two JavaServer Pages (JSP) custom tag libraries for expressing UI components
within a JSP page and for wiring components to server-side objects


The well-defined programming model and tag libraries significantly ease the burden of
building and maintaining web applications with server-side UIs. With minimal effort, you
can
• Drop components onto a page by adding component tags
• Wire component-generated events to server-side application code
• Bind UI components on a page to server-side data
• Construct a UI with reusable and extensible components
• Save and restore UI state beyond the life of server requests

5.4 Given a system requirements definition, explain and justify your
rationale for choosing a web-centric or EJB-centric implementation to
solve the requirements. Web-centric means that you are providing a
solution that does not use EJB components. An EJB-centric solution will
require an application server that supports EJB components.

Ref. • [JEE_5_TUTORIAL]
     • Enterprise JavaBeans - Re: EJB Vs JSP/POJO/Servlets, etc

Java EE server: The runtime portion of a Java EE product. A Java EE server provides EJB and
web containers.
■ Enterprise JavaBeans (EJB) container: Manages the execution of enterprise beans for Java
EE applications. Enterprise beans and their container run on the Java EE server.
■ Web container: Manages the execution of JSP page and servlet components for Java EE
applications. Web components and their container run on the Java EE server.
■ Application client container: Manages the execution of application client components.
Application clients and their container run on the client.
■ Applet container: Manages the execution of applets. Consists of a web browser and Java
Plug-in running on the client together.


Web Components
Java EE web components are either servlets or pages created using JSP technology (JSP
pages) and/or JavaServer Faces technology. Servlets are Java programming language classes
that dynamically process requests and construct responses. JSP pages are text-based
documents that execute as servlets but allow a more natural approach to creating static content.
JavaServer Faces technology builds on servlets and JSP technology and provides a user
interface component framework for web applications.

A web application is a dynamic extension of a web or application server. There are two types of
web applications:
■ Presentation-oriented: A presentation-oriented web application generates interactive web
pages containing various types of markup language (HTML, XML, and so on) and dynamic
content in response to requests.
■ Service-oriented: A service-oriented web application implements the endpoint of a web service.
(See Objective 3.2 for more information.) Presentation-oriented applications are often clients of
service-oriented web applications.

Web Applications
In the Java 2 platform, web components provide the dynamic extension capabilities for a web
server. Web components are either Java servlets, JSP pages, or web service endpoints. The
interaction between a web client and a web application is illustrated in Figure 3–1. The client
sends an HTTP request to the web server. A web server that implements Java Servlet and
JavaServer Pages technology converts the request into an HttpServletRequest object. This
object is delivered to a web component, which can interact with JavaBeans components or a
database to generate dynamic content. The web component can then generate an
HttpServletResponse or it can pass the request to another web component. Eventually a web
component generates an HttpServletResponse object. The web server converts this object to an
HTTP response and returns it to the client.
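The forwarding behavior described above, where a component either generates the response or passes the request on to another component, can be sketched in plain Java. The types below are simplified stand-ins for illustration; in a real web application this hand-off is done with the container's RequestDispatcher or filter chain, not hand-written loops.

```java
import java.util.List;
import java.util.Map;

// Sketch of the request flow: each component either produces the
// response or passes the request along to the next component.
// Illustrative stand-ins only, not the javax.servlet API.
public class DispatchSketch {

    interface WebComponent {
        // Returns the response body, or null to pass the request on.
        String handle(Map<String, String> request);
    }

    static String service(Map<String, String> request, List<WebComponent> chain) {
        for (WebComponent c : chain) {
            String response = c.handle(request);
            if (response != null) {
                return response;           // eventually one component responds
            }
        }
        return "404 Not Found";            // nobody handled the request
    }

    public static void main(String[] args) {
        WebComponent auth = req ->
            "true".equals(req.get("authenticated")) ? null : "401 Unauthorized";
        WebComponent page = req -> "<html>Account balance page</html>";

        // prints <html>Account balance page</html>
        System.out.println(service(Map.of("authenticated", "true"), List.of(auth, page)));
        // prints 401 Unauthorized
        System.out.println(service(Map.of(), List.of(auth, page)));
    }
}
```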


Servlets are Java programming language classes that dynamically process requests and
construct responses. JSP pages are text-based documents that execute as servlets but allow a
more natural approach to creating static content. Although servlets and JSP pages can be used
interchangeably, each has its own strengths. Servlets are best suited for service-oriented
applications (web service endpoints are implemented as servlets) and the control functions of a
presentation-oriented application, such as dispatching requests and handling nontextual data.
JSP pages are more appropriate for generating text-based markup such as HTML, Scalable
Vector Graphics (SVG), Wireless Markup Language (WML), and XML.
Since the introduction of Java Servlet and JSP technology, additional Java technologies and
frameworks for building interactive web applications have been developed. Figure 3–2 illustrates
these technologies and their relationships.

Notice that Java Servlet technology is the foundation of all the web application technologies, even
if you do not intend to write servlets. Each technology adds a level of abstraction that makes web
application prototyping and development faster and the web applications themselves more
maintainable, scalable, and robust.
Web components are supported by the services of a runtime platform called a web container. A
web container provides services such as request dispatching, security, concurrency, and life-
cycle management. It also gives web components access to APIs such as naming, transactions,
and email.
Certain aspects of web application behavior can be configured when the application is installed,
or deployed, to the web container. The configuration information is maintained in a text file in XML
format called a web application deployment descriptor (DD). A DD must conform to the schema
described in the Java Servlet Specification.

Web-Centric Application Scenario


There are a number of scenarios in which the use of enterprise beans in an application
would be considered overkill: sort of like using a sledgehammer to crack a nut. The J2EE
specification doesn't mandate a specific application configuration, nor could it
realistically do so. The J2EE platform is flexible enough to support the application
configuration most appropriate to a specific application design requirement.


As demonstrated in the book J2EE Technology In Practice, a three-tier Web-centric
application scenario is widely used as the starting point for many J2EE applications. The
Web container hosts both presentation and business logic, and it is assumed that JDBC
and the J2EE Connector architecture are used to access EIS resources.

Business Components


Business code, which is logic that solves or meets the needs of a particular business domain
such as banking, retail, or finance, is handled by enterprise beans running in the business tier.
Figure 1–4 shows how an enterprise bean receives data from client programs, processes it (if
necessary), and sends it to the enterprise information system tier for storage. An enterprise bean
also retrieves data from storage, processes it (if necessary), and sends it back to the client
program.

Benefits of Enterprise Beans


For several reasons, enterprise beans simplify the development of large, distributed applications.
First, because the EJB container provides system-level services to enterprise beans, the bean
developer can concentrate on solving business problems. The EJB container, rather than the
bean developer, is responsible for system-level services such as transaction management and
security authorization. Second, because the beans rather than the clients contain the
application’s business logic, the client developer can focus on the presentation of the client. The
client developer does not have to code the routines that implement business rules or access
databases. As a result, the clients are thinner, a benefit that is particularly important for clients
that run on small devices. Third, because enterprise beans are portable components, the
application assembler can build new applications from existing beans. These applications can run
on any compliant Java EE server provided that they use the standard APIs.

When to Use Enterprise Beans


You should consider using enterprise beans if your application has any of the following
requirements:
• The application must be scalable. To accommodate a growing number of users,
you may need to distribute an application’s components across multiple
machines. Not only can the enterprise beans of an application run on different
machines, but also their location will remain transparent to the clients.
• Transactions must ensure data integrity. Enterprise beans support transactions,
the mechanisms that manage the concurrent access of shared objects.
• The application will have a variety of clients. With only a few lines of code,
remote clients can easily locate enterprise beans. These clients can be thin,
various, and numerous.

EJB-centric vs. web-centric implementation


So, does EJB do anything to improve scalability and performance in a considerable way
as compared to JSP, POJO, etc, or is it mainly for increased decoupling of tiers and
subsequent better maintenance and reusability?

Technically, the answer is that EJB does not add anything that cannot be done with
regular business objects...

The difference is how easy it is to do the things you need.

With EJB before EJB3 there is a lot more effort that goes into using the technology, this
is why people often will use regular business objects... they may only need one or two of
the things EJBs bring to the table, so the pain of EJBs is not worth the gain.

EJB3 is a lot easier to implement (than previous EJB specifications) so we should start
to see people using this technology more.
When you are asking about EJB vs POJO the question you need to ask yourself first is:
have we decided how the application will be deployed?

If that decision has been made, then you will be able to know what flavor of EJB is
available (if at all) on your application server.
e.g. if it will be deployed on Tomcat => no EJBs at all
if it will be deployed on Glassfish or JBoss5 (or possibly some versions of JBoss4) or if it
will be deployed on Websphere => EJB3 is available.

Where I see EJBs fitting into the MVC application is in the Model layer, i.e. they provide
a means of abstracting your model so that it can be expressed in terms of business
methods, etc. This can simplify the controller. But again, it is nothing that cannot be done
with regular POJOs.

What you have to ask yourself is: do you actually need the extra capability and
complexity of EJB.

Just because EJB is available for use is not a good enough reason to use it.

Need for distributed transactions and biz components spread across multiple physical
servers would be one good reason to use EJB. However, that is not the case for most of
the apps out there.

As for other services provided by an EJB container (CMT, security, etc.), those
can be provided by lightweight IoC (Inversion of Control) containers like Spring,
without the overhead and complexity of EJB.
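The inversion-of-control idea mentioned in this answer can be illustrated with a minimal plain-Java sketch. The names below (AccountRepository, TransferService) are hypothetical; a real IoC container such as Spring performs this wiring from external configuration or annotations rather than hand-written code in main.

```java
// Minimal illustration of inversion of control: the business object declares
// what it needs and the "container" (here just the wiring code in main)
// supplies it. Names are hypothetical, invented for this sketch.
public class IocSketch {

    interface AccountRepository {
        double balanceOf(String accountId);
    }

    // A plain business object (POJO): no container API, no lookup code.
    static class TransferService {
        private final AccountRepository repository;

        TransferService(AccountRepository repository) {  // dependency injected
            this.repository = repository;
        }

        boolean canWithdraw(String accountId, double amount) {
            return repository.balanceOf(accountId) >= amount;
        }
    }

    public static void main(String[] args) {
        // The wiring lives outside the business object, so a JDBC-backed
        // repository or an in-memory test double can be swapped in freely.
        AccountRepository inMemory = accountId -> 100.0;
        TransferService service = new TransferService(inMemory);
        System.out.println(service.canWithdraw("A-1", 40.0));  // prints true
    }
}
```

Because TransferService depends only on an interface, it can be unit-tested and reused outside any container, which is the portability argument made in the quoted discussion.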

I think this is a very valid question. Think 1999...almost every java application used
servlets/JSP for presentation along with regular java objects for business logic. Of
course, today servlets are for presentation too and EJBs for business logic. So the 2 can
be compared. Brace yourself for impact.......A well designed solution using
Servlets/JSPs will *always* be faster and more scalable than a well designed solution
that uses Servlets/JSP + EJB. Please see
http://rubis.objectweb.org/download/perf_scalability_ejb.pdf for documented evidence.
However the difference in scalability and speed might not be very much particularly if
session beans are used. If you need to develop an application that is largely read-only,
needs to have very few transactions, is typically small and does not need the out-of-box
security features of EJB, simply use servlets/JSP. Hence, EJB-centric solution is for
systems with expected large, heavy load and possible huge write operations.



6. Applicability of Java EE Technology (19 pages)

1. Given a specified business problem, design a modular solution that
solves the problem using Java EE.

2. Explain how the Java EE platform enables service oriented architecture
(SOA)-based applications.

3. Explain how you would design a Java EE application to repeatedly
measure critical non-functional requirements and outline a standard
process with specific strategies to refactor that application to improve on
the results of the measurements.

6.1 Given a specified business problem, design a modular solution that
solves the problem using Java EE.

Ref. • Introducing Enterprise Java Application Architecture and Design

n-tier Architecture
Developing n-tier distributed applications is a complex and challenging job. Distributing
the processing into separate tiers leads to better resource utilization. It also allows
allocation of tasks to experts who are best suited to work and develop a particular tier.
The web page designers, for example, are more equipped to work with the presentation
layer on the web server. The database developers, on the other hand, can concentrate
on developing stored procedures and functions. However, keeping these tiers as
isolated silos serves no useful purpose. They must be integrated to achieve a bigger

enterprise goal. It is imperative that this is done leveraging the most efficient protocol;
otherwise, this leads to serious performance degradation.
Besides integration, a distributed application requires various services. It must be able to
create, participate, or manage transactions while interacting with disparate information
systems. This is an absolute must to ensure the concurrency of enterprise data. Since n-
tier applications are accessed over the Internet, it is imperative that they are backed by
strong security services to prevent malicious access.
These days, the cost of hardware, like CPU and memory, has gone down drastically. But
still there is a limit, for example, to the amount of memory that is supported by the
processor. Hence, there is a need to optimally use the system resources. Modern
distributed applications are generally built leveraging object-oriented technologies.
Therefore, services such as object caches or pools are very handy. These applications
frequently interact with relational databases and other information systems such as
message-oriented middleware. However, opening connections to these systems is costly
because it consumes a lot of process resources and can prove to be a serious deterrent
to performance. In these scenarios, a connection pool is immensely useful to improve
performance as well as to optimize resource utilization.
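The pooling idea described above can be sketched in a few lines of plain Java. This is a bare-bones illustration, not a production pool; a container-managed JDBC DataSource adds connection validation, timeouts, and growth policies omitted here.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Bare-bones sketch of a resource pool: expensive resources are created
// once up front, then borrowed and returned instead of opened per request.
public class PoolSketch {

    static class Connection {              // stand-in for an expensive resource
        final int id;
        Connection(int id) { this.id = id; }
    }

    static class ConnectionPool {
        private final BlockingQueue<Connection> idle;

        ConnectionPool(int size) {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                idle.add(new Connection(i));   // pay the creation cost once
            }
        }

        Connection borrow() {
            try {
                return idle.take();            // blocks if the pool is exhausted
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted waiting for a connection", e);
            }
        }

        void release(Connection c) {
            idle.offer(c);                     // return the resource for reuse
        }
    }

    public static void main(String[] args) {
        ConnectionPool pool = new ConnectionPool(2);
        Connection c = pool.borrow();
        try {
            System.out.println("using connection " + c.id);  // prints using connection 0
        } finally {
            pool.release(c);                   // always give it back
        }
    }
}
```

The borrow/release discipline (with release in a finally block) is exactly what the container enforces on your behalf when it manages the pool for you.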
Distributed applications typically use middleware servers to leverage the system
services such as transaction, security, and pooling. The middleware server API had to
be used to access these services. Hence, application code would be muddled with a
proprietary API. This lock-in to vendor API wastes a lot of development time and makes
maintenance extremely difficult, besides limiting portability.
In 1999, Sun Microsystems released the J2EE platform to address the difficulties in
the development of distributed multitier enterprise applications. The platform was based
on the Java 2 Platform, Standard Edition, and as a result it had the benefit of "write once,
deploy and run anywhere." The platform received tremendous support from the open
source community and major commercial vendors such as IBM, Oracle, BEA, and others
because it was based on specifications. Anyone could develop the services as long as it
conformed to the contract laid down in the specification. The specification and the
platform have moved on from there; the platform is currently based on Java Platform,
Standard Edition 5, and it is called Java Platform, Enterprise Edition 5. In this article, we
will concentrate on this latest version, referred to officially as Java EE 5.
n-tier Java EE Architecture
The Java EE platform makes the development of distributed n-tier applications easier.
The application components can be easily divided based on functions and hosted on
different tiers. The components on different tiers generally collaborate using an
established architectural principle called MVC.

An MVC Detour
Trygve Reenskaug first described MVC way back in 1979 in a paper called "Applications
Programming in Smalltalk-80™: How to use Model-View-Controller." It was primarily
devised as a strategy for separating user interface logic from business logic. However,
keeping the two completely isolated serves no useful purpose. MVC therefore adds a
layer of indirection to join and mediate between the presentation and business logic layers.
This new layer is called the controller layer. Thus, in short, MVC divides an application
into three distinct but collaborating components:


• The model manages the data of the application by applying business rules.
• The view is responsible for displaying the application data and presenting the
control that allows the users to further interact with the system.
• The controller takes care of the mediation between the model and the view.
Figure 6 depicts the relationship between the three components. The events triggered by
any user action are intercepted by the controller. Depending on the action, the controller
invokes the model to apply suitable business rules that modify application data. The
controller then selects a view component to present the modified application data to the
end user. Thus, you see that MVC provides guidelines for a clean separation of
responsibilities in an application. Because of this separation, multiple views and
controllers can work with the same model.
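The collaboration just described can be sketched with plain Java classes. The class names below are illustrative only; in a Java EE application the controller role is typically played by a servlet, the view by a JSP page, and the model by enterprise beans or POJOs.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the Model-View-Controller collaboration.
public class MvcSketch {

    // Model: owns the application data and applies business rules.
    static class CartModel {
        private final List<String> items = new ArrayList<>();
        void addItem(String item) {
            if (!item.isBlank()) {          // business rule: no empty items
                items.add(item);
            }
        }
        List<String> items() { return List.copyOf(items); }
    }

    // View: only renders model data; no business logic.
    static class CartView {
        String render(CartModel model) {
            return "Cart: " + String.join(", ", model.items());
        }
    }

    // Controller: mediates between user actions, the model, and the view.
    static class CartController {
        private final CartModel model;
        private final CartView view;
        CartController(CartModel model, CartView view) {
            this.model = model;
            this.view = view;
        }
        String onAddItem(String item) {     // user action intercepted here
            model.addItem(item);            // model applies business rules
            return view.render(model);      // a view presents the new state
        }
    }

    public static void main(String[] args) {
        CartController controller = new CartController(new CartModel(), new CartView());
        controller.onAddItem("book");
        System.out.println(controller.onAddItem("pen"));  // prints Cart: book, pen
    }
}
```

Because the view only reads the model and the controller only mediates, a second view (say, a WML renderer) could be added without touching CartModel, which is the separation-of-responsibilities point made above.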

Java EE Architecture with MVC


The MVC concept can be easily applied to form the basis for Java EE application
architecture. Java EE servlet technology is ideally suited as a controller component. Any
browser request can be transferred via HTTP to a servlet. A servlet controller can then
invoke EJB model components, which encapsulate business rules and also retrieve and
modify the application data. The retrieved and/or altered enterprise data can be
displayed using JSP. This is an oversimplified representation of real-life enterprise Java
architecture, although it works for a small-scale application. But this has tremendous
implications for application development. Risks can be reduced and productivity
increased if you have specialists in the different technologies working together.
Moreover, one layer can be transparently replaced and new features easily added
without adversely affecting others (see Figure 7).


Layers in a Java EE Application

It is evident from Figure 7 that layered architecture is an extension of the MVC
architecture. In the traditional MVC architecture, the data access or integration layer was
assumed to be part of the business layer. However, in Java EE, it has been reclaimed as
a separate layer. This is because enterprise Java applications integrate and
communicate with a variety of external information systems for business data—relational
database management systems (RDBMSs), mainframes, SAP ERP, or Oracle e-
business suites, to name just a few. Therefore, positioning integration services as a
separate layer helps the business layer concentrate on its core function of executing
business rules.
The benefits of the loosely coupled layered Java EE architecture are similar to those of
MVC. Since implementation details are encapsulated within individual layers, they can
be easily modified without deep impact on neighboring layers. This makes the
application flexible and easy to maintain. Since each layer has its own defined roles and
responsibilities, it is simpler to manage, while still providing important services.
• Client Tier
This tier represents all device or system clients accessing the system or the
application. A client can be a Web browser, a Java or other application, a Java
applet, a WAP phone, a network application, or some device introduced in the
future. It could even be a batch process.
• Presentation Tier
This tier encapsulates all presentation logic required to service the clients that
access the system. The presentation tier intercepts the client requests, provides
single sign-on, conducts session management, controls access to business
services, constructs the responses, and delivers the responses to the client.
Servlets and JSP reside in this tier. Note that servlets and JSP are not
themselves UI elements, but they produce UI elements.
• Business Tier
This tier provides the business services required by the application clients. The
tier contains the business data and business logic. Typically, most business
processing for the application is centralized into this tier. It is possible that, due to
legacy systems, some business processing may occur in the resource tier.
Enterprise bean components are the usual and preferred solution for
implementing the business objects in the business tier.
• Integration Tier
This tier is responsible for communicating with external resources and systems
such as data stores and legacy applications. The business tier is coupled with
the integration tier whenever the business objects require data or services that
reside in the resource tier. The components in this tier can use JDBC, J2EE
connector technology (The connector architecture is a standard architecture for
integrating J2EE applications with EISs that are not relational databases.), or
some proprietary middleware to work with the resource tier.
• Resource Tier
This is the tier that contains the business data and external resources such as
mainframes and legacy systems, business-to-business (B2B) integration
systems, and services such as credit card authorization.

Java EE Application Design


In the past few sections I laid the foundation for exploring Java EE application design in
greater detail. However, the design of Java EE software is a huge subject in itself, and
many books have been written about it. My intention in this article is to simplify Java EE
application design and development by applying patterns and best practices through the
Spring Framework. Hence, in keeping with the theme and for the sake of brevity, I will
cover only those topics relevant in this context. This will enable me to focus on only
those topics that are essential for understanding the subject.
Some developers and designers are of the opinion that Java EE application design is
essentially OO design. This is true, but Java EE application design involves a lot more
than traditional object design. It requires finding the objects in the problem domain and
then determining their relationships and collaboration. The
objects in individual layers are assigned responsibilities, and interfaces are laid out for
interaction between layers. However, the task doesn't finish here. In fact, it gets more
complicated. This is because, unlike traditional object design, Java EE supports
distributed object technologies such as EJB for deploying business components. The
business components are developed as remotely accessible session Enterprise
JavaBeans. JMS and message-driven beans (MDB) make things even more complex by
allowing distributed asynchronous interaction of objects.
The design of distributed objects is an immensely complicated task even for experienced
professionals. You need to consider critical issues such as scalability, performance,
transactions, and so on, before drafting a final solution. The design decision to use a
coarse-grained or fine-grained session EJB facade can have serious impact on the
overall performance of a Java EE application. Similarly, the choice of the correct method
on which transactions will be imposed can have critical influence on data consistency.
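The coarse-grained versus fine-grained facade trade-off mentioned above can be made concrete with a plain-Java sketch. All class and method names below are hypothetical; the counter stands in for network round trips to a remote session EJB:

```java
import java.io.Serializable;

// Illustrative sketch: the cost of a fine-grained vs. a coarse-grained
// session facade, measured in simulated remote calls.
public class FacadeGranularity {
    static int remoteCalls = 0;                 // stands in for network round trips

    // Transfer object returned by the coarse-grained facade in one call.
    static class OrderDetails implements Serializable {
        final String customer; final String status; final double total;
        OrderDetails(String c, String s, double t) { customer = c; status = s; total = t; }
    }

    // Fine-grained facade: one remote call per attribute.
    static String getCustomer(long id) { remoteCalls++; return "ACME"; }
    static String getStatus(long id)   { remoteCalls++; return "SHIPPED"; }
    static double getTotal(long id)    { remoteCalls++; return 99.5; }

    // Coarse-grained facade: one remote call returns everything.
    static OrderDetails getOrderDetails(long id) {
        remoteCalls++;
        return new OrderDetails("ACME", "SHIPPED", 99.5);
    }

    public static int callsFineGrained(long id) {
        remoteCalls = 0;
        getCustomer(id); getStatus(id); getTotal(id);
        return remoteCalls;                     // three round trips
    }

    public static int callsCoarseGrained(long id) {
        remoteCalls = 0;
        getOrderDetails(id);
        return remoteCalls;                     // one round trip
    }
}
```

For a remote client, each extra round trip adds latency, which is why the coarse-grained variant usually wins for distributed access.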

6.2 Explain how the Java EE platform enables service-oriented architecture (SOA)-based applications.

Ref.
• [web service]
• [SOA for J2EE]

Web Service


There is no commonly accepted definition for a Web service. For the purposes of this
specification, a Web service is defined as a component with the following characteristics:
• A service implementation implements the methods of an interface that is
describable by WSDL. The methods are implemented using a Stateless
Session EJB or JAX-RPC web component.
• A Web service may have its interface published in one or more registries for
Web services during deployment.



• A Web Service implementation, which uses only the functionality described
by this specification, can be deployed in any Web Services for J2EE
compliant application server.
• A service instance, called a Port, is created and managed by a container.
• Run-time service requirements, such as security attributes, are separate from
the service implementation. Tools can define these requirements during
assembly or deployment.
• A container mediates access to the service.
JAX-RPC defines a programming model mapping of a WSDL document to Java which
provides a factory (Service) for selecting which aggregated Port a client wishes to use.
See Figure 2 for a logical diagram. In general, the transport, encoding, and address of
the Port are transparent to the client. The client only needs to make method calls on the
Service Endpoint Interface, as defined by JAX-RPC, (i.e. PortType) to access the
service.
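The Service-as-factory relationship can be mimicked in plain Java. The sketch below is a hypothetical model of the pattern, not the actual javax.xml.rpc API: the client codes only against the Service Endpoint Interface, while the Service hides the Port's transport, encoding, and address:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical plain-Java model of the JAX-RPC lookup pattern: a Service
// aggregates Ports; the client sees only the endpoint interface.
public class ServicePortModel {
    // Service Endpoint Interface (the WSDL portType, in Java form).
    interface StockQuote { double getQuote(String ticker); }

    // The Service acts as a factory that hides transport and address details.
    static class Service {
        private final Map<String, StockQuote> ports = new HashMap<>();
        void addPort(String name, StockQuote port) { ports.put(name, port); }
        StockQuote getPort(String name) { return ports.get(name); }
    }

    public static double demo() {
        Service service = new Service();
        // The container would create and manage the Port; a stub stands in here.
        service.addPort("QuotePort", ticker -> 42.0);
        StockQuote port = service.getPort("QuotePort");
        return port.getQuote("JAVA");   // client makes plain method calls
    }
}
```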

Java EE Platform and Web Services


Beginning with J2EE 1.4, the Java EE platform has fully supported both clients of web
services and web service endpoints. As a result, a Java EE component can be deployed
as a web service at the same time it acts as a client of a web service deployed
elsewhere. JAX-WS (Java API for XML Web Services) is the primary API for web
services in Java EE 5. It supports web service calls by using the SOAP/HTTP protocol
as constrained by the WS-I Basic Profile specification.
Java EE web services are defined by the specification Implementing Enterprise Web
Services (JSR 109). The specification describes the deployment of web service clients
and web service endpoints in Java EE releases as well as the implementation of web
service endpoints using Enterprise JavaBeans (EJB) components. Java EE web
services, along with JBI, provide a strong foundation for implementing a SOA.

JAX-WS 2.0
JAX-WS 2.0 replaces an older API, JAX-RPC 1.1 (Java API for XML-based Remote
Procedure Call), extending it in many areas. The new specification supports multiple
protocols, such as Simple Object Access Protocol (SOAP) 1.1, SOAP 1.2, and XML.
JAX-WS uses JAXB 2.0 as its data binding model, relying on annotations to
considerably simplify web service development. It also uses many annotations defined
by the Web Services Metadata for the Java Platform specification and introduces hooks
for plugging in transport protocols other than HTTP. In a related development, JAX-WS
defines its own message-based session management.

Java EE Web Service Architecture


Web service architecture, in general, allows a service to be defined abstractly,
implemented, published and discovered, and used interoperably. You can decouple a
web service implementation from its use by a client in a variety of ways: in programming
model, logic, and transport. As a consequence, a web service that has been developed
with the .NET platform can be used by a Java EE application, and vice versa.
In simplest terms, a service instance, called a Port component, is created and managed
by a Java EE container, which in turn can be accessed by the client application. The
Port component can also be referenced in client, web, and EJB containers.



The life cycle of a Port component’s implementation is specific to and completely
controlled by its container; the two are intimately linked.
The Port component associates a Web Service Definition Language (WSDL) port
address with an EJB service implementation bean, a Java class that provides the
business logic of the web service and that always runs in an EJB container. Because the
service implementation is specific to a container, the service implementation also ties a
Port component to its container’s behavior. The methods that the service implementation
bean implements are defined by the Service Endpoint Interface.
A container provides a listener for the WSDL port address and a means of dispatching
the request to the service implementation bean. For example, if you deploy an EJB
service implementation bean by using basic JAX-WS SOAP/HTTP transport, the bean
will run in a Java EE EJB container and an HTTP listener will be available in the
container for receiving the request.

Web Services for J2EE Overview
The Web Services for J2EE specification defines the required architectural relationships
as shown in Figure 3. This is a logical relationship and does not impose any
requirements on a container provider for structuring containers and processes. The
additions to the J2EE platform include a port component that depends on container
functionality provided by the web and EJB containers, and the SOAP/HTTP transport.
Web Services for J2EE requires that a Port be referenceable from the client, web, and
EJB containers. This specification does not require that a Port be accessible from the
applet container.
This specification adds additional artifacts to those defined by JAX-RPC that may be
used to implement Web services, a role based development methodology, portable
packaging and J2EE container services to the Web services architecture. These are
described in later sections.

This specification defines two means for implementing a Web service, which runs in a
J2EE environment, but does not restrict Web service implementations to just those
means.
The first is a container based extension of the JAX-RPC programming model which
defines a Web service as a Java class running in the web container.
The second uses a constrained implementation of a stateless session EJB in the EJB
container. Other service implementations are possible, but are not defined by this
specification.

The container provides for life cycle management of the service implementation,
concurrency management of method invocations, and security services. A container
provides the services specific to supporting Web services in a J2EE environment. This
specification does not require that a new container be implemented. Existing J2EE
containers may be used and indeed are expected to be used to host Web services. Web
service instance life cycle and concurrency management is dependent on which
container the service implementation runs in. A JAX-RPC Service Endpoint
implementation in a web container follows standard servlet life cycle and concurrency
requirements and an EJB implementation in an EJB container follows standard EJB life
cycle and concurrency requirements.

This specification defines the responsibilities of the existing J2EE platform roles. There
are no new roles defined by this specification. There are two roles specific to Web
Services for J2EE used within this specification, but they can be mapped onto existing
J2EE platform roles. The Web Services for J2EE product provider role can be mapped
to a J2EE product provider role and the Web services container provider role can be
mapped to a container provider role within the J2EE specification.

In general, the developer role is responsible for the service definition, implementation,
and packaging within a J2EE module. The assembler role is responsible for assembling
the module into an application, and the deployer role is responsible for publishing the
deployed services and resolving client references to services. More details on role
responsibilities can be found in later sections.

6.3 Explain how you would design a Java EE application to repeatedly measure critical
non-functional requirements and outline a standard process with specific strategies to
refactor that application to improve on the results of the measurements.

Ref.
• Measurement & Analysis
• [SAMP ARCHITECTURES]
• [12-Steps]

Let’s say that our non-functional requirements are:



• Performance
• Scalability
• Availability
• Extensibility
• Interoperability
• Security

There are multiple techniques that can be applied to this architectural model to enhance
systemic qualities and to improve the overall end user experience. The following
strategies, which are discussed in the following pages, are frequently applied in SAMP
(Solaris™ Operating System, Apache, MySQL™ database, PHP) architecture
deployments according to specific site and application needs:
• Layered resource model
• Application threading
• Session management
• Load balancing
• Reverse proxies
• Distributed caching
• Clustering
• Data abstraction and segmentation
• Monitoring and management

Layered Resource Model


As in most traditional application architectures, a typical Web 2.0 application spans
multiple layers and tiers, as depicted in Figure 1. This model promotes scalability,
allowing additional resources to be added seamlessly to each layer and tier as additional
processing capacity is needed. To cope with rapid increases in the number of incoming
client requests, existing resources can be tuned to optimize utilization, or additional
physical or logical nodes can be added.
To expedite content delivery of Web 2.0 applications, proxy and/or reverse proxy
services (Figure 1) are commonly used. These services enable content caching of pages
that are often requested but that do not change frequently. For large Web sites, proxies
and reverse proxies are usually distributed across different geographic domains (e.g., in
Asia, the U.S., and Europe) to enable rapid access to requested data.
The Web server (Figure 1) consists of the Apache Web server with necessary runtime
engines and accelerators like PHP's eAccelerator. From the deployment side, a common
practice is to scale the Web tier horizontally across a farm of servers to rapidly process
incoming requests. In many cases, to enhance the Web layer’s scalability, the underlying

infrastructure is founded on processors that feature multiple cores and chip multi-
threading (CMT) technology, such as UltraSPARC® processors in Sun servers. CMT
technology enables a single server to act as a multi-node farm, enabling cost-effective
consolidation of multiple Web server instances and components. Because of advanced
thread density, Sun servers based on UltraSPARC processors are ideal systems for the
Web server layer.
Due to the high volume of user data, caching is an essential part of every Web 2.0
application. Memcached is a de facto industry-standard caching solution that allows
companies to store objects in memory and expedite data access, especially in
comparison to disk-based data access in a traditional database.



As illustrated in Figure 1, Web 2.0 deployments commonly incorporate a Web Analytics
component to analyze visitor behavior. Often implemented in the Java programming
language, Web Analytics modules are long-running, multi-node applications that analyze
site behavior and produce statistical reports. Web analytics modules are also used to
correlate real-time user behavior and profile data with targeted advertising.
To store user generated content, data is often split across two tiers (Figure 1). The first
tier may encompass a cluster of MySQL servers, while the second one is dedicated to
unstructured data (e.g., photos, video clips, etc.) that are kept on highly scalable file
systems like ZFS, MogileFS, etc.

Application Logic and Threading


Although adding resources to scale Web 2.0 applications is usually fairly straightforward,
being able to make effective use of resources is often more complicated. Application
architecture is a critical factor in scalability. If the application's core processing logic is
designed as a single-threaded component, it is unlikely that the application will scale well
as the user population increases and resources are expanded in particular layers or

tiers. To handle thousands or possibly millions of simultaneous user requests, Web 2.0
applications must be able to process many user interactions concurrently. Today’s off-
the-shelf operating systems, Web servers, application servers, cache servers, and
databases are generally engineered to support multiple execution threads and
concurrent workloads.
Even so, there are still some specific design challenges that can limit an application’s
ability to scale:
• Shared mutable data. Shared mutable data is the scourge of any parallel
system. In spite of shared user access, the application is responsible for
maintaining data integrity and consistency. Shared data that is read-only is
not an issue since every concurrent process has the exact same view of the
data. If the data is writable, however, then it must be protected by a locking
mechanism to prevent concurrent access. The topic of how to best implement
locking mechanisms is beyond the scope of this article, but in general, locking
mechanisms impact scalability in that they cause one process to wait while
another accesses the locked data.
• Critical sections. Some applications have critical sections that are shared
across multiple processes but can only be accessed sequentially. As before,
a locking mechanism is required for protection. The lock is acquired as one
process enters the critical section and released when it leaves, and all other
processes must wait to acquire the lock.
• Data locality. Data can be very large, very far away, or both. Moving large
chunks of data around a system can take time, just as it can for data access
that requires a relatively expensive call to a central database, a disk-based
file store, or to a remote Web service. To enhance scalability, data should be
as close as possible to its point of use. For this reason, caching is often
implemented to improve the speed of data access. In many implementations,
when data is first accessed, it is cached locally (preferably in memory). When
it is needed again, it can be fetched from cache, thus avoiding a time-
consuming call to a data store. Often caching occurs behind the scenes —
file systems cache files in memory, CPUs cache data accessed from memory
in faster local memories, and databases cache frequently used data buffers in
a buffer cache. For Web 2.0 applications where caching is a key to scaling,
new breeds of caches have been developed, including application-level,
distributed caches (such as Memcached) or distributed caching file systems
(such as Hadoop, Lustre, and MogileFS). It is sometimes difficult to predict
where caching can best be applied, and in many cases, performance analysis
can determine optimal use. For additional information and best practices
related to caching, see “Distributed Caching”, page 14.
• Size of datasets. Huge monolithic data stores tend not to scale well. Often as
the amount of data increases (sometimes exponentially), a single database
must be partitioned into multiple, smaller databases. Data can be partitioned
using an index to a specific field or key within the data, or using fields that
have been added to the data specifically for the purpose of data
segmentation. The latter technique allows new databases to be added as the
amount of data increases without the need to change partition boundaries. In
some cases databases automatically partition data by hashing the primary
key, with the entire process transparent to the application.
The ultimate result of partitioning is that the database load is distributed across multiple
databases, reducing contention (see “Data Abstraction and Segmentation”, page 16).
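Partitioning by hashing the primary key, as described above, can be sketched in a few lines. The bucket-to-URL scheme below is hypothetical; real deployments map shards through configuration:

```java
// Sketch of hash-based partitioning: the primary key is hashed to pick
// one of N smaller databases, spreading load across them.
public class KeyPartitioner {
    public static int shardFor(String primaryKey, int shardCount) {
        // Math.floorMod keeps the result non-negative for any hashCode value.
        return Math.floorMod(primaryKey.hashCode(), shardCount);
    }

    public static String jdbcUrlFor(String primaryKey, int shardCount) {
        // Hypothetical URL naming convention for the shard hosts.
        return "jdbc:mysql://db-" + shardFor(primaryKey, shardCount) + ":3306/app";
    }
}
```

Because the hash is deterministic, every node routes the same key to the same shard without any shared lookup table, which is what makes the process transparent to the application.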

Session Management
Since HTTP is stateless, any Web server in a cluster can potentially process an
application client request. Session management allows a user’s session state to persist
— for example, after a user has been authenticated by the Web server, there is no need
to re-authenticate at the next HTTP request if the user’s authentication persists in the
user’s session state.
The convenience of session management, however, comes with a price — applications
that use session management must maintain each user’s session state, which is usually
stored in memory. This can greatly increase an application’s run-time memory footprint
and tends to link user sessions to specific servers, requiring those sessions to be
migrated to another node if the server is taken offline. If session management is not
implemented, applications can have a smaller footprint and any cluster node can service
requests from any user.



Although a majority of social networking applications do not maintain session state, there
are a number of ways to implement it. Commonly, user sessions are stored in cookies or
in a hidden entry on a page. An application can assign a unique identifier to a user and
then track the user’s progress on the site. A user's interaction typically spans
multiple Web pages — by recording the user’s session state, the application can
remember the user’s most recent interaction with the site. In some cases the user’s
session state is persisted to simplify subsequent visits. With Yahoo and some other Web
sites, a session persists for about two weeks, so at a minimum, users don't have to re-
login if they return to the site within that period. Some applications apply the user's
session state to dynamically customize page content based on a user's preferences and
buying patterns. Sessions stored in cookies are often cryptographically signed, but the
data is usually unencrypted. Session data that is stored in cookies should not contain
any sensitive information like a user’s credit card or other personal data. User sessions
can be also stored in a database or via Memcached.
From a development standpoint, there are scripting frameworks (such as PHP WASP
and Zend Framework) that offer session management features, such as Zend's
Zend_Session. In the Rails framework, the Action Controller's Session Management is
used to manage user sessions, which by default are stored in a browser cookie. Using
ActiveRecordStore in Rails allows session data to be stored in a database. Alternatively,
using MemCacheStore enables information to be stored via Memcached.
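The signed-but-unencrypted cookie described above can be sketched with standard Java crypto. The key handling and payload format below are simplified for illustration; the point is that the payload stays readable while the HMAC signature detects tampering:

```java
// Hedged sketch (key and payload format are made up for illustration):
// a signed session cookie whose value is payload + "." + HMAC-SHA256 tag.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SignedSessionCookie {
    // A real deployment would load this key from secure configuration.
    private static final byte[] KEY = "demo-secret-key".getBytes(StandardCharsets.UTF_8);

    private static String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            // Base64url alphabet contains no '.', so the separator is unambiguous.
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static String create(String sessionData) {
        return sessionData + "." + sign(sessionData);
    }

    public static boolean verify(String cookie) {
        int dot = cookie.lastIndexOf('.');
        if (dot < 0) return false;
        return sign(cookie.substring(0, dot)).equals(cookie.substring(dot + 1));
    }
}
```

Note that nothing here hides the payload, which is why sensitive data such as credit card numbers must never be placed in such a cookie.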

Load Balancing
One key factor in a successful Web 2.0 application architecture is the ability to distribute
application load — a growing number of connections or a geographically dispersed user
base must not significantly impact response time. To accomplish this, applications are
frequently deployed in conjunction with some type of load-balancing solution. In many
cases, a site’s load-balancing scheme redirects incoming requests to the nearest server
geographically.
Both hardware and software-based load-balancing solutions are available. Hardware
load balancers are usually located above the Web tier and sometimes in-between tiers.
Most software-level clustering implementations include load-balancing software that is
used by upstream components. A reverse proxy, such as Squid, can also be used to
distribute load across multiple servers. Other open source load-balancing software
solutions include Perlbal, Pen, and Pound. Hardware-based solutions include BigIP and
ServerIron, among others.
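A common software-level distribution strategy is weighted round-robin, where more capable servers receive proportionally more requests. The sketch below is illustrative (server names are hypothetical); production balancers add health checks and session affinity on top of this:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a weighted round-robin picker: a server with weight 2 appears
// twice in the ring, so it receives twice the traffic of a weight-1 server.
public class WeightedRoundRobin {
    private final List<String> ring = new ArrayList<>();
    private int next = 0;

    public void addServer(String name, int weight) {
        for (int i = 0; i < weight; i++) ring.add(name);
    }

    public String pick() {
        String server = ring.get(next);
        next = (next + 1) % ring.size();    // advance around the ring
        return server;
    }
}
```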

Reverse Proxies
The Web tier hosts both static and dynamic content accessible by end users. With
technologies such as AJAX introducing asynchronous request processing, Web servers
are delivering increasingly complex content aggregated from multiple Web sites and
using different formats, such as JSON or XML. The content is often transported via
protocols other than plain HTTP. For applications to scale well and
achieve performance goals, the use of reverse proxies is becoming a de facto trend.
Some reverse proxies (for example, Squid) allow caching of commonly requested pages
and static content, which reduces Web server load. The Web server is then free to
deliver dynamically generated content. An Apache Web server is commonly deployed in
the Web server tier and receives dispatched requests for dynamic content, as shown in
Figure 2.



The reverse proxy server is often deployed as a part of a cluster. The Web server nodes,
which reside below the reverse proxy server, can include modules such as the mod_*
web framework, the FCGI or Apache mod_fastcgi module, or the popular Nginx (“engine
X”) reverse proxy. Another popular Web server is lighttpd (“lighty”), which is designed
with asynchronous I/O and a small memory footprint, and features effective CPU load
balancing, output compression, SCGI, and multiple security features. These Web
servers are often deployed in lieu of or in conjunction with an Apache Web server.

Distributed Caching
An essential component of the Web tier is a distributed caching module (Memcached or
other caching module) that provides distributed shared caching capability of user
content. Created by danga.com for LiveJournal, Memcached is an open source,
high-performance, distributed memory object caching system that is widely adopted and
simple to manage. Rather than caching information within individual Web processes,
Memcached clients and servers enable the creation of a single global cache across
many systems. The cache can then be accessed via a client API either within application
code or via language-specific modules that abstract the data access layer and access
the cache before deferring to the underlying data store. Applications typically cache
partial or full dynamically generated Web pages, partial or full result sets from complex
database queries, and any application-level data that can be shared and reused.
Unlike some databases, Memcached does not block a reading thread while writes are in
progress, which generally speeds up access to data. This is particularly effective in read-
intensive applications. Memcached also provides opportunities for better data locality,
allowing data to be available much closer to where it is needed. In addition, it generally
improves access times compared to accessing data from a database or disk. Depending
on application load, the more Memcached instances there are, the faster the access.
Identifying which instance holds the needed data is a constant time operation and with
more instances, there is less load per server. Managing Memcached instances is
simplified because there is no crosstalk between instances — in fact the instances know
nothing of each other.
Memcached comes with client APIs for many languages and frameworks, including Ruby
on Rails, PHP, Perl, and the Java programming language. There is also a set of User Defined
Functions (UDFs) for MySQL that can be used to push data out to Memcached on writes
and updates. Although this technology is still somewhat experimental, it does go a long
way towards removing the need for a client to manually populate the cache after a write.
In building Web 2.0 applications, it is useful to anticipate the use of caching and initially
design the application with caching in mind, thus avoiding the burden of redesign later
on.
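The access pattern described above, checking the cache before deferring to the underlying data store, is often called cache-aside. In this sketch a HashMap stands in for the distributed cache and the database lookup is simulated; the counter makes the saved round trips visible:

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aside sketch: check the (Memcached-like) cache first, fall back
// to the data store on a miss, then populate the cache for next time.
public class CacheAside {
    private final Map<String, String> cache = new HashMap<>();
    public int databaseHits = 0;                // exposed for the demo

    private String loadFromDatabase(String key) {
        databaseHits++;                         // the expensive path
        return "value-for-" + key;
    }

    public String get(String key) {
        String value = cache.get(key);
        if (value == null) {                    // cache miss
            value = loadFromDatabase(key);
            cache.put(key, value);              // populate the cache
        }
        return value;
    }
}
```

A real deployment would also set expiry times and invalidate entries on writes, which this sketch omits.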

Clustering
Clustering for Web 2.0 applications requires that the workload be spread across multiple,
often identical, instances of application tier components. An instance can be a host
system running an entire application stack or it can be a set of virtualized systems
running on the same physical server (such as on a highly threaded Sun CMT server
using Sun Logical Domains or Solaris Containers). An instance can also refer to one of
several application instances running on the same physical server. Deployment
strategies sometimes require a combination of techniques.
Clustering provides several key benefits. First, if a cluster instance (or node) fails, the
application continues to run. Secondly, for appropriately designed applications, it is
possible to scale out a cluster by adding more nodes or more instances, which allows
the entire system to scale, supporting potentially greater workloads.
Failover within a cluster is not always automatic. In a failover scenario, the following
steps must occur when a cluster node fails:
• The failed node must be removed from the cluster
• Any incoming work must be directed to other nodes
• Any ongoing sessions must be redirected to other nodes, along with any data related to
those sessions.
These failover requirements are sometimes addressed by load balancers sitting in front of
the cluster (see “Load Balancing”, page 13). Load balancers not only balance load across
cluster nodes, but they can also:
• Detect node failures and prevent any new requests going to failed nodes
• Route all requests for the same session to the same cluster node
• Use weighting policies to distribute load dependent on the capabilities of specific nodes
or by how busy they are

Dependent on the application type, there can be a requirement for cluster node failover
to be invisible to the end user. This means that any data specific to user application
sessions would need to be available across all cluster nodes. Sharing of session data is
usually implemented through application-level APIs and through a persistence layer that
allows session data to be stored and recovered by other cluster nodes when a node
failure occurs (see “Session Management”, page 12). Since a node failure means that
other nodes in the cluster must do more work, it is important to design a cluster such that
node failure does not result in saturation of the rest of the cluster.
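The failover steps above, removing the failed node and routing new work to the survivors, can be sketched as a tiny routing table (node names are hypothetical; real clusters use a failure detector rather than an explicit call):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of cluster failover routing: work only goes to live nodes.
public class ClusterRouter {
    private final List<String> liveNodes = new ArrayList<>();
    private int next = 0;

    public void join(String node) { liveNodes.add(node); }

    // Step 1: the failed node is removed from the cluster.
    public void markFailed(String node) {
        liveNodes.remove(node);
        next = 0;                               // restart rotation safely
    }

    // Step 2: incoming work is directed only to surviving nodes.
    public String route() {
        String node = liveNodes.get(next);
        next = (next + 1) % liveNodes.size();
        return node;
    }
}
```

Redirecting ongoing sessions (step 3) additionally requires the session-persistence layer discussed under “Session Management”.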
In the data tier, different storage requirements can translate into different availability
strategies. For databases, availability is commonly addressed through database
replication. Writes are made initially to a single database instance, which is defined as
the master. These writes are then written out (replayed) either synchronously or
asynchronously to one or more replicas. Along with the master, the replicas service
read-only traffic. Synchronous updates provide a consistent view of the data across the
entire cluster but are slower than asynchronous updates — therefore synchronous
updates are generally not well-suited for implementations that use multiple replicas.
Asynchronous updates can suffer from what is known as “replication lag,” where the
replay of writes to the replicas causes the possibility of a read seeing stale data. The
choice of database vendor (along with data integrity and consistency requirements)
ultimately determines which replication strategy is best suited for a given application.

Data Abstraction and Segmentation


The sheer volume of data stored and accessed by most Web 2.0 applications requires
partitioning to achieve effective scalability. One of the key components of a Web 2.0
architecture is its ability to break down stored information into silos that are accessed via
a Web Service layer — this technique is often referred to as Data Sharding.
Data sharding enables rapid access to underlying stored information. For rapid data
retrieval, a thin layer of Web Services helps to navigate to the required data. Typically
application providers rely on a couple of key techniques — caching and abstraction of
the data layer — rather than accessing data directly in the database. The user data tier
is abstracted out with a Data Access Service that has a very specific meaning in the
Web 2.0 environment. Data Access is commonly implemented as a Web Services
facade that routes incoming requests to the corresponding data bucket. For example, if
the end user's name starts with A through C, the service knows that the user's profile is
available in a specific bucket.
Web 2.0 PHP or Ruby applications usually seek the data in cache first, and if it is not
available in cache, move down to the next layer. Caching information helps to improve
application performance and data readiness for a massive user base. Persistent user
data is often spread across a database cluster.
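The A-through-C routing example above amounts to a simple range-based directory. The sketch below is illustrative (three letters per bucket is an arbitrary choice) of how a Data Access facade might map a user name to its bucket:

```java
// Sketch of range-based shard routing: users whose names start with A-C
// live in bucket 0, D-F in bucket 1, and so on.
public class ShardDirectory {
    public static int bucketFor(String userName) {
        char first = Character.toUpperCase(userName.charAt(0));
        if (first < 'A' || first > 'Z') return 0;   // default bucket for other names
        return (first - 'A') / 3;                   // three letters per bucket
    }
}
```

Range-based routing keeps related keys together but can produce uneven buckets, which is one reason hash-based segmentation fields are often added instead.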
Data segmentation is commonly achieved in two steps. First, a mapping occurs between
the user and his or her corresponding data. User-generated content, such as user feeds,
is stored in memory in leaf nodes. A Web Service facilitates mapping between a user
and the corresponding data leaf (i.e., the Memcached leaf node). The second important
step is segmentation across leaf nodes. Multiple user leaves are consolidated into
segments or buckets. This is done in order to easily unload buckets off the server, or a
virtual machine, in case the underlying system gets overloaded. With this model of data
segmentation, data availability is addressed with server replication.
Should a leaf server fail, a hot standby replica replaces the failed node. User leaves
stored on multiple nodes, physical and/or virtual, also help to ensure data reliability.
Every user's data is passed across redundant nodes to guarantee the delivery of user
generated content. Horizontal sharding is not the only common technique; vertical
sharding is often used as well. This can be achieved by factoring out the most
resource-intensive application component and deploying it on a separate server.
Horizontal and vertical sharding are often combined in Web 2.0 application deployments.
For structured data, industry-proven open source databases — such as MySQL and
PostgreSQL— are both attractive solutions for Web 2.0 developers, although MySQL
seems to be the dominant choice. Depending on the database selected and other
implementation factors, database instances can be clustered to achieve data
redundancy and to improve availability. Unstructured data are commonly stored on a file
system and segmented off the main dataset stored in the RDBMS. Distributed file
systems are often used to store high volume data. Among the distributed file systems in
use for Web 2.0 applications are GoogleFS, MogileFS, Lustre, and ZFS. These file
systems enable dynamic data growth without the need for reconfiguration. Instead, the
file system dynamically recognizes new disk space added in the underlying infrastructure
layer. These scalable and distributed file systems are aimed at processing petabytes of
data commonly used by Web 2.0 media content, and are designed to scale efficiently.
Reliable data interchange, with large user file uploads and asynchronous request
submission, is achieved with messaging middleware solutions that use queuing or
ingestion points. Incoming user-generated content can arrive over a multitude of protocols such as
IM, SMS, SMTP, WAP, and of course, HTTP. The content must be dynamically
processed to make it instantaneously available through the user’s profile. Data indexing,
compression, thumbnail image generation, and other techniques are used to achieve
this task.

Monitoring and Management

In modern Web 2.0 applications, users create videos, share images, record music, and
in many other ways produce content. All these operations must be monitored to avoid
loss. In addition, effective site management tools can track whether a site is up and
running, helping to determine whether SLA requirements are adequately being met. To
simplify application deployment and management, many startups use hosting services
as a first step in deployment, relying on these services to provide the infrastructure to
support cloud computing and Web 2.0 applications. A hosting provider often allocates a
virtual machine with pre-configured components, and resources can be added
dynamically as the load increases and the site grows in popularity. Should a database
need more space, for example, a new MySQL container can be added or additional disk
space can be allocated to the existing one. There are a number of cloud computing
offerings available today on the market, including Amazon EC2 and S3, 3Par, and
Joyent. Joyent cloud computing provides hosting services using the OpenSolaris™
operating system, and offers developer environments for Facebook, OpenSocial, and
other social networking applications.
When monitoring a Web 2.0 site, there are multiple metrics to track. For example,
monitoring can record site traffic statistics, the end-user site usage pattern, the effects of
marketing (including viral marketing) efforts, and finally underlying site resource
utilization. To monitor site statistics, companies often use Web Analytics software such
as Google Analytics. There are many site statistical and monitoring tools available,
including AWStats, Statcounter, SiteMeter, Alexa, Compete, Performancing’s pMetrics,
and AjaxMetrics. Some tools, like GoStats, also offer reporting functionality. In addition,
companies frequently develop custom Web Analytics applications to monitor specific site
components, proactively send notifications, limit server downtime and other system
failures, and help with application scaling.
Effective management has a significant impact on the success of Web 2.0 applications.
Today there are various tools and utilities available to manage systems, operating
environments, Web servers, application servers, databases, and other Web site building
blocks. The tools can provide performance data for the underlying system or for
individual components of the solution stack. Some solutions allow provisioning of
individual stack components — for example, if another instance of an application server
is needed or there are insufficient compute resources, the tool can dynamically provision
additional service instances or resources.
Popular infrastructure management solutions, such as RightScale and ElasticServer,
help to simplify infrastructure management for hosted applications. RightScale, as an
example, simplifies application management on Amazon EC2, GoGrid, and Flexiscale.
The provided management tools automate starting and stopping services and previewing
log files. Many infrastructure management solutions can supply various on-demand
services, such as servers on demand and disk volumes on demand, as well as
scalable and reliable server templates that provide automation and built-in redundancy.
Whether a deployment is standalone or hosted, there are several required management
tasks, including management of structured and unstructured data, application
management, and Web tier management. In a hosted environment there are also additional
considerations: virtualization form factors, different operating systems, and the heterogeneous nature of
underlying servers. To ensure smooth operations, application developers must take into
account deployment issues such as change management, synchronized server clocks,
and up-to-date patches and packages.
In the Solaris 10 Operating System, the Service Management Facility (SMF) provides a
unified model for service management. Replacing init.d scripts in earlier OS releases,
the SMF framework handles system boot-up, process management, and self-healing. It
addresses the shortcomings of startup scripts and creates an infrastructure to manage
daemons after the host has booted and to automatically restart services after a failure.



SMF works in conjunction with the Solaris Fault Manager, allowing software recovery in
the event of certain hardware faults, administrative errors, and software core dumps.
Finally, with either the Solaris or OpenSolaris OS, developers can use Solaris Dynamic
Tracing (DTrace) to monitor application performance, view system and application calls,
and to help identify bottlenecks in application performance.

We must have a standard process for non-functional requirements.

• Why measure? Because without data, you only have opinions.


• Why analyze? Because the data you collect can't help you if you don't understand it
and use it to shape your decisions.

What is software measurement and analysis?

Measurement and analysis involves gathering quantitative data about products,


processes, and projects and analyzing that data to influence your actions and plans.
Measurement and analysis activities allow you to
• characterize, or gain understanding of your processes, products, resources, and
environments
• evaluate, to determine your status with respect to your plans
• predict, by understanding relationships among processes and products so the values
you observe for some attributes can be used to predict others
• improve, by identifying roadblocks, root causes, inefficiencies, and other opportunities
for improvement

Refer to [12 steps] for an explanation of the following tasks:


Step 1 – Identify Metrics Customers
Step 2 – Target Goals
Step 3 – Ask Questions
Step 4 – Select Metrics
Step 5 – Standardize Definitions
Step 6 – Choose a Measurement Function
Step 7 – Establish a Measurement Method
Step 8 – Define Decision Criteria
Step 9 – Define Reporting Mechanisms
Step 10 – Determine Additional Qualifiers
Step 11 – Collect Data
Step 12 – The People Side of the Metrics Equation


7. Patterns (113 pages)

1. From a list, select the most appropriate pattern for a given scenario.
Patterns are limited to those documented in this book - Alur, Crupi and
Malks (2003). Core J2EE Patterns: Best Practices and Design Strategies,
2nd Edition - and named using the names given in that book.

2. From a list, select the most appropriate pattern for a given scenario.
Patterns are limited to those documented in this book - Gamma, Erich;
Richard Helm, Ralph Johnson, and John Vlissides (1995). Design
Patterns: Elements of Reusable Object-Oriented Software - and are named
using the names given in that book.

3. From a list, select the benefits and drawbacks of a pattern drawn from
this book - Gamma, Erich; Richard Helm, Ralph Johnson, and John
Vlissides (1995). Design Patterns: Elements of Reusable Object-Oriented
Software.

4. From a list, select the benefits and drawbacks of a specified Core J2EE
pattern drawn from this book - Alur, Crupi and Malks (2003). Core
J2EE Patterns: Best Practices and Design Strategies, 2nd Edition.

Core J2EE Pattern Catalog

Pattern Name: Description

Business Delegate: Reduce coupling between Web and Enterprise JavaBeans tiers
Composite Entity: Model a network of related business entities
Composite View: Separately manage layout and content of multiple composed views
Data Access Object (DAO): Abstract and encapsulate data access mechanisms
Fast Lane Reader: Improve read performance of tabular data
Front Controller: Centralize application request processing
Intercepting Filter: Pre- and post-process application requests
Model-View-Controller: Decouple data representation, application behavior, and presentation
Service Locator: Simplify client access to enterprise business services
Session Facade: Coordinate operations between multiple business objects in a workflow
Transfer Object: Transfer business data between tiers
Value List Handler: Efficiently iterate a virtual list
View Helper: Simplify access to model state and data access logic

7.1 From a list, select the most appropriate pattern for a given scenario.
Patterns are limited to those documented in this book - Alur, Crupi and
Malks (2003). Core J2EE Patterns: Best Practices and Design Strategies,
2nd Edition - and named using the names given in that book.

Ref.:
• [CORE_J2EE_PATTERNS]
• The [CORE_J2EE_PATTERNS] site

Among the 5 tiers of the J2EE architecture (see diagram here), J2EE patterns address the
following 3 tiers: presentation, business and integration.

# J2EE Pattern: Definition

Presentation Tier

1. Intercepting Filter: To intercept and manipulate a request and a response before and after they are
processed.
2. Front Controller: To centralize presentation-tier request handling.
3. Context Object: To avoid using protocol-specific system information outside of its relevant context.
4. Application Controller: To centralize retrieval and invocation of request-processing components, such as
commands and views.
5. View Helper: To separate programming logic from the view to facilitate division of labor between
software developers and web page designers.
6. Composite View: To build and combine a view from modular, atomic component parts, while managing the
content and the layout independently.
7. Service to Worker: To provide dynamic handling of requests and responses before/after they are passed
to the view tier. (Service to Worker = Front Controller + Application Controller + View Helper)
8. Dispatcher View: A view that handles a request and generates a response, while it manages limited
amounts of business processing.

Business Tier

9. Business Delegate: To hide the details of service creation, reconfiguration, and invocation retries from
the clients.
10. Service Locator: To transparently locate business components and services in a uniform manner.
11. Session Façade: To control client access to business objects.
12. Application Service: To encapsulate use case-specific logic outside of individual Business Objects.
13. Business Object: To have a conceptual model containing structured interrelated composite objects with
sophisticated business logic, validation and business rules.
14. Composite Entity: To encapsulate the physical database design from the clients.
15. Transfer Object: To transfer multiple data elements over a tier.
16. Transfer Object Assembler: To aggregate transfer objects from several business components.
17. Value List Handler: To provide the clients with an efficient search and iterate mechanism over a large
results set.

Integration Tier

18. Data Access Object (DAO): To encapsulate data access and manipulation in a separate object.
19. Service Activator: To invoke services asynchronously.
20. Domain Store: To separate persistence from your object model.
21. Web Service Broker: To provide access to one or more services using XML and web protocols.

J2EE Pattern Relationships


Note: Please refer to the Pattern Relationships Diagram in the following page while reading the text below.

Intercepting Filter intercepts incoming requests and outgoing responses and applies a filter. These filters
may be added and removed in a declarative manner, allowing them to be applied unobtrusively in a variety
of combinations. After this preprocessing and/or post-processing is complete, the final filter in the group
vectors control to the original target object. For an incoming request, this is often a Front Controller, but may
be a View.
Front Controller is a container to hold the common processing logic that occurs within the presentation tier
and that may otherwise be erroneously placed in a View. A controller handles requests and manages
content retrieval, security, view management, and navigation, delegating to a Dispatcher component to
dispatch to a View.
Application Controller centralizes control, retrieval, and invocation of view and command processing. While a
Front Controller acts as a centralized access point and controller for incoming requests, the Application
Controller is responsible for identifying and invoking commands, and for identifying and dispatching to views.
Context Object encapsulates state in a protocol-independent way to be shared throughout your application.
Using Context Object makes testing easier, facilitating a more generic test environment with reduced
dependence upon a specific container.



View Helper encourages the separation of formatting-related code from other business logic. It suggests
using Helper components to encapsulate logic relating to initiating content retrieval, validation, and adapting
and formatting the model. The View component is then left to encapsulate the presentation formatting.
Helper components typically delegate to the business services via a Business Delegate or an Application
Service, while a View may be composed of multiple subcomponents to create its template.
Composite View suggests composing a View from numerous atomic pieces. Multiple smaller views, both
static and dynamic, are pieced together to create a single template. The Service to Worker and Dispatcher
View patterns represent a common combination of other patterns from the catalog. The two patterns share a
common structure, consisting of a controller working with a Dispatcher, Views, and Helpers. Service to
Worker and Dispatcher View have similar participant roles, but differ in the division of labor among those
roles. Unlike Service to Worker, Dispatcher View defers business processing until view processing has been
performed.
Business Delegate reduces coupling between remote tiers and provides an entry point for accessing remote
services in the business tier. A Business Delegate might also cache data as necessary to improve
performance. A Business Delegate encapsulates a Session Façade and maintains a one-to-one relationship
with that Session Façade. An Application Service uses a Business Delegate to invoke a Session Façade.
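This relationship can be sketched without any EJB infrastructure (the `OrderFacade` and `OrderDelegate` names are invented for this example; a real delegate would obtain the Session Façade through a Service Locator and JNDI):

```java
import java.util.HashMap;
import java.util.Map;

// Contract implemented by the (remote) Session Facade.
interface OrderFacade {
    double quote(String productId);
}

// Business Delegate: hides lookup and remote details from presentation code,
// and may cache results to cut down on remote invocations.
class OrderDelegate {
    private final OrderFacade facade;
    private final Map<String, Double> cache = new HashMap<>();

    OrderDelegate(OrderFacade facade) {
        // In a real application the facade would come from a Service Locator.
        this.facade = facade;
    }

    double quote(String productId) {
        // Return the cached value if present; otherwise call the facade once.
        return cache.computeIfAbsent(productId, facade::quote);
    }
}
```

The client only ever sees `OrderDelegate`, so remote exceptions, retries, and lookups can change without touching presentation-tier code.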
Service Locator encapsulates the implementation mechanisms for looking up business service components.
A Business Delegate uses a Service Locator to connect to a Session Façade. Other clients that need to
locate and connect to Session Façade, other business-tier services, and web services can use a Service
Locator.
Session Façade provides coarse-grained services to the clients by hiding the complexities of the business
service interactions. A Session Façade might invoke several Application Service implementations or
Business Objects. A Session Façade can also encapsulate a Value List Handler.
Application Service centralizes and aggregates behavior to provide a uniform service layer to the business
tier services. An Application Service might interact with other services or Business Objects. An Application
Service can invoke other Application Services and thus create a layer of services in your application.
Business Object implements your conceptual domain model using an object model. Business Objects
separate business data and logic into a separate layer in your application. Business Objects typically
represent persistent objects and can be transparently persisted using Domain Store.
Composite Entity implements a Business Object using local entity beans and POJOs. When implemented
with bean-managed persistence, a Composite Entity uses Data Access Objects to facilitate persistence.
The Transfer Object pattern provides the best techniques and strategies to exchange data across tiers (that
is, across system boundaries) to reduce the network overhead by minimizing the number of calls to get data
from another tier.
The Transfer Object Assembler constructs a composite Transfer Object from various sources. These
sources could be EJB components, Data Access Objects, or other arbitrary Java objects. This pattern is
most useful when the client needs to obtain data for the application model or part of the model.
The Value List Handler uses the GoF iterator pattern to provide query execution and processing services.
The Value List Handler caches the results of the query execution and returns subsets of the result to the
clients as requested. By using this pattern, it is possible to avoid overheads associated with finding large
numbers of entity beans. The Value List Handler uses a Data Access Object to execute a query and fetch
the results from a persistent store.

Data Access Object enables loose coupling between the business and resource tiers. Data Access Object
encapsulates all the data access logic to create, retrieve, delete, and update data from a persistent store.
Data Access Object uses Transfer Object to send and receive data.
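A minimal sketch of this pairing (the `CustomerTO` and `CustomerDao` names are made up, and an in-memory map stands in for the persistent store; a real DAO would issue SQL through JDBC):

```java
import java.util.HashMap;
import java.util.Map;

// Transfer Object: a simple data holder exchanged across tiers.
class CustomerTO {
    final String id;
    final String name;
    CustomerTO(String id, String name) { this.id = id; this.name = name; }
}

// Data Access Object: all create/retrieve/update/delete logic lives here,
// so business code never touches the persistence mechanism directly.
class CustomerDao {
    private final Map<String, CustomerTO> store = new HashMap<>(); // stand-in for a database

    void save(CustomerTO customer) { store.put(customer.id, customer); }
    CustomerTO find(String id)     { return store.get(id); }
    void delete(String id)         { store.remove(id); }
}
```

Swapping the map for a JDBC or file-based implementation changes only the DAO, which is the loose coupling the pattern is after.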
Service Activator enables asynchronous processing in your enterprise applications using JMS. A Service
Activator can invoke Application Service, Session Façade or Business Objects. You can also use several
Service Activators to provide parallel asynchronous processing for long running tasks.
Domain Store provides a powerful mechanism to implement transparent persistence for your object model. It
combines and links several other patterns including Data Access Objects.
Web Service Broker exposes and brokers one or more services in your application to external clients as a
web service using XML and standard web protocols. A Web Service Broker can interact with Application
Service and Session Façade. A Web Service Broker uses one or more Service Activators to perform
asynchronous processing of a request.


1. Intercepting Filter
Problem
You want to intercept and manipulate a request and a response before and after the request is processed.

Forces



• You want centralized, common processing across requests, such as checking the data-encoding
scheme of each request, logging information about each request, or compressing an outgoing
response.
• You want pre and post processing components loosely coupled with core request-handling services
to facilitate unobtrusive addition and removal.
• You want pre and post processing components independent of each other and self contained to
facilitate reuse.

Solution
Use an Intercepting Filter as a pluggable filter to pre and post process requests and responses. A
filter manager combines loosely coupled filters in a chain, delegating control to the appropriate
filter. In this way, you can add, remove, and combine these filters in various ways without changing
existing code.
This pattern is useful for security checks, auditing, caching, compression, etc.
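The chain can be sketched without the Servlet API (all names below are illustrative; in a web application the filters would implement javax.servlet.Filter and be declared in web.xml):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Intercepting Filter sketch: a manager applies loosely coupled filters in
// order before the request reaches its target. Here a "request" is just a
// String; real filters wrap ServletRequest/ServletResponse objects.
class FilterChainDemo {
    private final List<UnaryOperator<String>> filters = new ArrayList<>();

    FilterChainDemo addFilter(UnaryOperator<String> f) {
        filters.add(f); // filters are added/removed without touching the target
        return this;
    }

    String process(String request, UnaryOperator<String> target) {
        for (UnaryOperator<String> f : filters) {
            request = f.apply(request); // pre-processing
        }
        return target.apply(request);   // last step vectors to the target
    }
}
```

Each filter is self-contained, so combinations like trimming plus case normalization plus auditing can be assembled declaratively.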
Class Diagram

Sequence Diagram




Strategies
• Standard Filter Strategy
• Custom Filter Strategy
• Base Filter Strategy
• Template Filter Strategy
• Web Service Message Handling Strategies
○ Custom SOAP Filter Strategy
○ JAX-RPC Filter Strategy

Consequences
• Centralizes control with loosely coupled handlers
• Improves reusability
• Declarative and flexible configuration
• Information sharing is inefficient

Related Patterns
• Front Controller
The controller solves some similar problems, but is better suited to handling core processing.
• Decorator [GoF]
The Intercepting Filter is related to the Decorator, which provides for dynamically pluggable
wrappers.



• Template Method [GoF]
The Template Method is used to implement the Template Filter strategy.
• Interceptor [POSA2]
The Intercepting Filter is related to the Interceptor, which allows services to be added transparently
and triggered automatically.
• Pipes and Filters [POSA1]
The Intercepting Filter is related to Pipes and Filters.

2. Front Controller
Problem
You want a centralized access point for presentation-tier request handling.

Forces
• You want to avoid duplicate control logic.
• You want to apply common logic to multiple requests.
• You want to separate system processing logic from the view.
• You want to centralize controlled access points into your system.

Solution
Use a Front Controller as the initial point of contact for handling all related requests. The Front
Controller centralizes control logic that might otherwise be duplicated, and manages the key request
handling activities.
This pattern should be the entry point for the system; it should delegate work to an Application Controller
and should not become too fat.
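A thin sketch of this entry point, outside the Servlet API (the class name and the string-keyed actions are assumptions for illustration; a real controller would be a servlet mapped to a URL pattern and would delegate to commands):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Front Controller sketch: one access point maps every request to a command,
// applies common logic once, then delegates. The controller itself stays thin.
class FrontControllerDemo {
    private final Map<String, Function<String, String>> commands = new HashMap<>();

    void register(String action, Function<String, String> command) {
        commands.put(action, command);
    }

    String handle(String action, String payload) {
        // Common centralized logic (logging, security checks) would run here
        // exactly once per request, instead of being duplicated in every view.
        Function<String, String> cmd =
                commands.getOrDefault(action, p -> "error: unknown action");
        return cmd.apply(payload);
    }
}
```

Adding a use case means registering one more command; no existing control logic is copied.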
Class Diagram

Sequence Diagram



Strategies
• Servlet Front Strategy
• JSP Front Strategy
• Command and Controller Strategy
• Physical Resource Mapping Strategy
• Logical Resource Mapping Strategy
• Dispatcher in Controller Strategy
• Base Front Strategy
• Filter Controller Strategy

Consequences

• Centralizes control
• Improves manageability
• Improves reusability
• Improves role separation

Related Patterns
• Intercepting Filter
Both Intercepting Filter and Front Controller describe ways to centralize control of certain aspects
of request processing.
• Application Controller
Application Controller encapsulates the action and view management code to which the controller
delegates.



• View Helper
View Helper describes factoring business and processing logic out of the view and into helper
objects and a central point of control and dispatch. Control-flow logic is factored forward into the
controller and formatting-related code moves back into the helpers.
• Dispatcher View and Service to Worker
Dispatcher View and Service to Worker represent different usage patterns. Service to Worker is a
controller-centric architecture, highlighting the Front Controller, while Dispatcher View is a view-
centric architecture.

3. Context Object
Problem
You want to avoid using protocol-specific system information outside of its relevant context.

Forces
• You have components and services that need access to system information.
• You want to decouple application components and services from the protocol specifics of system
information.
• You want to expose only the relevant APIs within a context.

Solution
Use a Context Object to encapsulate state in a protocol-independent way to be shared throughout
your application.
Application components should not have to know HTTP. Instead, they should call getXXX() on a
context object.
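A minimal sketch of such an object (the `RequestContext` name is invented; in practice a ContextFactory would populate it from HttpServletRequest parameters so downstream components never import servlet types):

```java
import java.util.HashMap;
import java.util.Map;

// Context Object sketch: state copied out of a protocol-specific source into
// a neutral holder, so components downstream never see HTTP-specific types.
class RequestContext {
    private final Map<String, String> values = new HashMap<>();

    RequestContext put(String key, String value) {
        values.put(key, value);
        return this;
    }

    /** Protocol-independent accessor used by application components. */
    String get(String key) {
        return values.get(key);
    }
}
```

Because the context is a plain object, components that consume it can be unit tested with a hand-built context, with no web container involved.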
Class Diagram

Sequence Diagram



Strategies
• Request Context Strategies
○ Request Context Map Strategy
○ Request Context POJO Strategy
○ Request Context Validation Strategy
• Configuration Context Strategies
○ JSTL Configuration Strategy
○ Security Context Strategy
• General Context Object Strategies
○ Context Object Factory Strategy
○ Context Object Auto-Population Strategy

Consequences
• Improves reusability and maintainability
• Improves testability
• Reduces constraints on evolution of interfaces
• Reduces performance

Related Patterns
• Intercepting Filter
An Intercepting Filter can use a ContextFactory to create a Context Object during web request
handling.
• Front Controller
A Front Controller can use a ContextFactory to create a Context Object during web request
handling



• Application Controller
An Application Controller can use a ContextFactory to create a Context Object during web request
handling
• Transfer Object
A Transfer Object is used specifically to transfer state across remote tiers to reduce network
communication, while a Context Object is used to hide implementation details, improving reuse and
maintainability.

4. Application Controller
Problem
You want to centralize and modularize action and view management.

Forces
• You want to reuse action and view-management code.
• You want to improve request-handling extensibility, such as adding use case functionality to an
application incrementally.
• You want to improve code modularity and maintainability, making it easier to extend the application
and easier to test discrete parts of your request-handling code independent of a web container.

Solution
Use an Application Controller to centralize retrieval and invocation of request-processing
components, such as commands and views.
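The two-step resolution (action to command, command outcome to view) can be sketched as follows; the class name and the string-based model are assumptions for illustration only:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Application Controller sketch: resolves an action to a command, invokes it,
// then resolves the command's outcome to a view. A Front Controller would
// call handle() after its common pre-processing.
class AppControllerDemo {
    private final Map<String, UnaryOperator<String>> commands = new HashMap<>();
    private final Map<String, String> views = new HashMap<>();

    void mapCommand(String action, UnaryOperator<String> cmd) { commands.put(action, cmd); }
    void mapView(String outcome, String viewName)             { views.put(outcome, viewName); }

    String handle(String action, String input) {
        String outcome = commands.get(action).apply(input); // invoke command
        return views.getOrDefault(outcome, "errorView");    // resolve view
    }
}
```

New use cases are added incrementally by registering another command and outcome-to-view mapping, and the handler can be tested outside a web container.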
Class Diagram

Sequence Diagram



Strategies
• Command Handler Strategy
• View Handler Strategy
• Transform Handler Strategy
• Navigation and Flow Control Strategy
• Message Handling Strategies
• Custom SOAP Message Handling Strategy
• JAX-RPC Message Handling Strategy

Consequences
• Improves modularity
• Improves reusability
• Improves extensibility

Related Patterns
• Front Controller
A Front Controller uses an Application Controller to perform action and view management.
• Service Locator
A Service Locator performs service location and retrieval. A Service Locator is a coarser object,
often uses sophisticated infrastructure for lookup, and doesn’t manage routing. It also doesn’t
address view management.
• Command Processor [POSA1]
A Command Processor manages command invocations, providing invocation scheduling, logging,
and undo/redo functionality.
• Command Pattern [GoF]
A Command encapsulates a request in an object, separating the request from its invocation.



• Composite Pattern [GoF]
A Composite represents objects as part-whole hierarchies, treating individual objects and
compositions of objects uniformly.
• Application Controller [PEAA]
Martin Fowler’s description of Application Controller [PEAA] seems to focus on controlling a user’s
navigation through an application using a state machine, as described in the Navigation and Flow
Control strategy. However, the Application Controller [PEAA] and our documentation of Application
Controller have the same core intent.

5. View Helper
Problem
You want to separate a view from its processing logic.

Forces
• You want to use template-based views, such as JSP.
• You want to avoid embedding program logic in the view.
• You want to separate programming logic from the view to facilitate division of labor between
software developers and web page designers.

Solution
Use Views to encapsulate formatting code and Helpers to encapsulate view-processing logic. A View
delegates its processing responsibilities to its helper classes, implemented as POJOs, custom tags,
or tag files. Helpers serve as adapters between the view and the model, and perform processing
related to formatting logic, such as generating an HTML table, so that no programming logic remains in the
views.
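A helper of this kind might look like the following sketch (the `TableHelper` name is invented; in a JSP-based view the same logic would typically live in a custom tag or tag file):

```java
import java.util.List;

// View Helper sketch: the helper adapts model data into presentation-ready
// markup, so the view template contains no formatting logic of its own.
class TableHelper {
    /** Format a list of row values as a minimal HTML table. */
    String toHtmlTable(List<String> rows) {
        StringBuilder sb = new StringBuilder("<table>");
        for (String row : rows) {
            sb.append("<tr><td>").append(row).append("</td></tr>");
        }
        return sb.append("</table>").toString();
    }
}
```

The page designer works on the template while the developer maintains the helper, which is the division of labor the pattern targets.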
Class Diagram

Sequence Diagram




Strategies
• Template-Based View Strategy
• Controller-Based View Strategy
• JavaBean Helper Strategy
• Custom Tag Helper Strategy
• Tag File Helper Strategy
• Business Delegate as Helper Strategy

Consequences
• Improves application partitioning, reuse, and maintainability
• Improves role separation
• Eases testing
• Helper usage mirrors scriptlets

Related Patterns

• Front Controller
A Front Controller typically delegates to an Application Controller to perform action and view
management.
• Application Controller
An Application Controller manages view preparation and view creation, delegating to views and
helpers.
• View Transform
An alternative approach to view creation is to perform a View Transform.
• Business Delegate
A Business Delegate reduces the coupling between a helper object and a remote business service,
upon which the helper object can invoke.
6. Composite View
Problem



You want to build a view from modular, atomic component parts that are combined to create a composite
whole, while managing the content and the layout independently.

Forces
• You want common subviews, such as headers, footers and tables reused in multiple views, which
may appear in different locations within each page layout.
• You have content in subviews which might frequently change or might be subject to certain
access controls, such as limiting access to users in certain roles.
• You want to avoid directly embedding and duplicating subviews in multiple views which makes
layout changes difficult to manage and maintain.

Solution
Use Composite Views that are composed of multiple atomic subviews. Each subview of the overall
template can be included dynamically in the whole, and the layout of the page can be managed
independently of the content.
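The composition can be sketched in plain Java (names are illustrative; in a JSP application the subviews would be included fragments and the layout a template):

```java
import java.util.ArrayList;
import java.util.List;

// Atomic subview: renders one fragment (header, footer, body...).
interface ViewPart {
    String render();
}

// Composite View sketch: the template holds the subviews and controls layout
// (here, simple line-by-line concatenation), independent of their content.
class CompositeViewDemo implements ViewPart {
    private final List<ViewPart> parts = new ArrayList<>();

    CompositeViewDemo add(ViewPart part) {
        parts.add(part); // subviews can be swapped without changing layout code
        return this;
    }

    @Override
    public String render() {
        StringBuilder page = new StringBuilder();
        for (ViewPart part : parts) {
            page.append(part.render()).append('\n');
        }
        return page.toString();
    }
}
```

Because a composite is itself a `ViewPart`, templates nest, which mirrors the GoF Composite structure the pattern is based on.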
Class Diagram

Sequence Diagram


Strategies



• JavaBean View Management Strategy
• Standard Tag View Management Strategy
• Custom Tag View Management Strategy
• Transformer View Management Strategy
• Early-Binding Resource Strategy
• Late-Binding Resource Strategy

Consequences
• Improves modularity and reuse
• Adds role-based or policy-based control
• Enhances maintainability
• Reduces maintainability
• Reduces performance

Related Patterns
• View Helper
A Composite View can fulfill the role of View in View Helper.
• Composite [GoF]
A Composite View is based on Composite [GoF], which describes part-whole hierarchies where a
composite object is composed of numerous subparts.

7. Service to Worker
Problem
You want to perform core request handling and invoke business logic before control is passed to the view.

Forces
• You want specific business logic executed to service a request in order to retrieve content that will
be used to generate a dynamic response.
• You have view selections which may depend on responses from business service invocations.
• You may have to use a framework or library in the application.

Solution
Use Service to Worker to centralize control and request handling to retrieve a presentation model
before turning control over to the view. The view generates a dynamic response based on the
presentation model.
This pattern is composed of Front Controller + Application Controller + View Helper.
Class Diagram




Sequence Diagram

Strategies

• Servlet Front Strategy
• JSP Front Strategy
• Template-Based View Strategy
• Controller-Based View Strategy
• JavaBean Helper Strategy
• Custom Tag Helper Strategy
• Dispatcher in Controller Strategy

Consequences
• Centralizes control and improves modularity, reusability, and maintainability
• Improves role separation

Related Patterns



• Front Controller, Application Controller, and View Helper
Service to Worker is a controller-centric architecture, highlighting a Front Controller. The Front
Controller delegates to an Application Controller for navigation and dispatch, then to view and
helpers.
• Composite View
The view can be a Composite View.
• Business Delegate
A Business Delegate is used to hide any remote semantics of the business service.
• Dispatcher View
Dispatcher View is a view-centric architecture, where business processing is done after control is
passed to the view.
8. Dispatcher View
Problem
You want a view to handle a request and generate a response, while managing limited amounts of business
processing.

Forces
• You have static views.
• You have views generated from an existing presentation model.
• You have views which are independent of any business service response.
• You have limited business processing.

Solution
Use Dispatcher View with views as the initial access point for a request. Business processing, if
necessary in limited form, is managed by the views.
Class Diagram


Sequence Diagram



Strategies
• Servlet Front Strategy
• JSP Front Strategy
• Template-Based View Strategy
• Controller-Based View Strategy
• JavaBean Helper Strategy
• Custom Tag Helper Strategy
• Dispatcher in Controller Strategy

Consequences
• Leverages frameworks and libraries.
• Introduces potential for poor separation of the view from the model and control logic.
• Separates processing logic from view and improves reusability.

Related Patterns
• Front Controller
In a Dispatcher View approach, a Front Controller can handle the request or the request may be
handled initially by the view.

• Application Controller
An Application Controller will often not be used with Dispatcher View. An Application Controller is
used in those cases where limited view management is required to resolve an incoming request to
the actual view.
• View Helper
Helpers mainly adapt and transform the presentation model for the view, but also help with any
limited business processing that is initiated from the view.
• Composite View
The view can be a Composite View.
• Service to Worker
The Service to Worker approach centralizes control, request handling, and business processing
before control is passed to the view. Dispatcher View defers this behavior, if needed, to the time of
view processing.
1. Business Delegate



Problem
You want to hide clients from the complexity of remote communication with business service
components.
Forces
• You want to access the business-tier components from your presentation-tier
components and clients, such as devices, web services, and rich clients.
• You want to minimize coupling between clients and the business services, thus hiding the
underlying implementation details of the service, such as lookup and access.
• You want to avoid unnecessary invocation of remote services.
• You want to translate network exceptions into application or user exceptions.
• You want to hide the details of service creation, reconfiguration, and invocation retries
from the clients.
Solution
Use a Business Delegate to encapsulate access to a business service. The Business
Delegate hides the implementation details of the business service, such as lookup and
access mechanisms.
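A minimal sketch of the delegate idea in plain Java (the service interface, exception, and class names are illustrative; in J2EE the delegate would obtain the service via a Service Locator and translate RemoteException):

```java
// Business Delegate sketch: hides lookup/remote details and translates
// infrastructure exceptions into an application-level exception.
interface RemoteOrderService {
    double quote(String itemId) throws Exception;   // stands in for a remote EJB interface
}

class OrderServiceException extends RuntimeException {
    OrderServiceException(String msg, Throwable cause) { super(msg, cause); }
}

class OrderDelegate {
    private final RemoteOrderService service;

    OrderDelegate(RemoteOrderService service) {
        // A real delegate would look the service up (e.g. through a Service Locator).
        this.service = service;
    }

    double quote(String itemId) {
        try {
            return service.quote(itemId);
        } catch (Exception e) {
            // Translate network/checked exceptions for the client.
            throw new OrderServiceException("quote failed for " + itemId, e);
        }
    }
}
```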
Class Diagram

Sequence Diagram




Strategies
• Delegate Proxy Strategy
• Delegate Adapter Strategy
Consequences
• Reduces coupling, improves maintainability
• Translates business service exceptions
• Improves availability
• Exposes a simpler, uniform interface to the business tier
• Improves performance
• Introduces an additional layer
• Hides remoteness
Related Patterns
• Service Locator
The Business Delegate typically uses a Service Locator to encapsulate the
implementation details of business service lookup. When the Business Delegate needs to
look up a business service, it delegates the lookup functionality to the Service Locator.
• Session Façade
For most EJB applications, the Business Delegate communicates with a Session Façade
and maintains a one-to-one relationship with that facade. Typically, the developer who
implements a Session Façade also provides corresponding Business Delegate
implementations.
• Proxy [GoF]
A Business Delegate can act as a proxy, providing a stand-in for objects in the business
tier. The Delegate Proxy strategy provides this functionality.
• Adapter [GoF]
A Business Delegate can use the Adapter design pattern to provide integration for
otherwise incompatible systems.
• Broker [POSA1]
A Business Delegate acts as a Broker to decouple the business-tier objects from the
clients in other tiers.
1. Service Locator
Problem
You want to transparently locate business components and services in a uniform manner.
Forces
• You want to use the JNDI API to look up and use business components, such as
enterprise beans and JMS components, and services such as data sources.
• You want to centralize and reuse the implementation of lookup mechanisms for J2EE
application clients.
• You want to encapsulate vendor dependencies for registry implementations, and hide the
dependency and complexity from the clients.
• You want to avoid performance overhead related to initial context creation and service
lookups.
• You want to reestablish a connection to a previously accessed enterprise bean instance,
using its Handle object.
Solution
Use a Service Locator to implement and encapsulate service and component lookup. A
Service Locator hides the implementation details of the lookup mechanism from the client
and encapsulates related dependencies.
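A hedged sketch of a caching locator: in a J2EE application the lookup function would be `new InitialContext()::lookup`; here it is injected so the sketch is self-contained and testable.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Service Locator sketch: encapsulates the lookup mechanism and caches
// references so the expensive lookup happens at most once per name.
class ServiceLocator {
    private final Function<String, Object> lookup;           // e.g. a JNDI lookup
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    ServiceLocator(Function<String, Object> lookup) { this.lookup = lookup; }

    Object getService(String jndiName) {
        // Return the cached reference, performing the lookup only on a miss.
        return cache.computeIfAbsent(jndiName, lookup);
    }
}
```

Real implementations are usually singletons and add typed accessors (EJB homes, JMS destinations, data sources) on top of this core.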
Class Diagram



Sequence Diagram



Strategies
• EJB Service Locator Strategy
• JDBC DataSource Service Locator Strategy
• JMS Service Locator Strategies
○ JMS Queue Service Locator Strategy
○ JMS Topic Service Locator Strategy
• Web Service Locator Strategy
Consequences
• Abstracts complexity
• Provides uniform service access to clients
• Facilitates adding EJB business components
• Improves network performance
• Improves client performance by caching
Related Patterns
• Business Delegate
Business Delegate uses a Service Locator to locate and obtain references to the
business service objects, such as EJB objects, JMS topics, and JMS queues. This
separates the complexity of service location from the Business Delegate, leading to loose
coupling and increased manageability.
• Session Façade
Session Façade uses a Service Locator to locate and obtain home and remote
references to the session beans and entity beans, as well as to locate a data source.
• Transfer Object Assembler
Transfer Object Assembler uses a Service Locator to locate references to session beans
and entity beans that it needs to access data and build a composite transfer object.
• Data Access Object
A Data Access Object uses a Service Locator to look up and obtain a reference to a data
source.

1. Session Façade
Problem
You want to expose business components and services to remote clients.

Forces
• You want to avoid giving clients direct access to business-tier components, to prevent
tight coupling with the clients.
• You want to provide a remote access layer to your Business Objects (374) and other
business-tier components.
• You want to aggregate and expose your Application Services (357) and other services to
remote clients.
• You want to centralize and aggregate all business logic that needs to be exposed to
remote clients.
• You want to hide the complex interactions and interdependencies between business
components and services to improve manageability, centralize logic, increase flexibility,
and improve ability to cope with changes.



Solution
Use a Session Façade to encapsulate business-tier components and expose a coarse-
grained service to remote clients. Clients access a Session Façade instead of accessing
business components directly.
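A minimal sketch of the coarse-grained idea (class names are illustrative; in EJB, `OrderFacade` would be a stateless session bean and the one `placeOrder` call would run in a container-managed transaction):

```java
// Session Façade sketch: one coarse-grained call replaces several
// fine-grained remote calls to the underlying business components.
class InventoryService {
    boolean reserve(String itemId) { return true; }          // pretend reservation
}

class BillingService {
    double charge(String account, double amount) { return amount; }
}

class OrderFacade {
    private final InventoryService inventory = new InventoryService();
    private final BillingService billing = new BillingService();

    String placeOrder(String account, String itemId, double price) {
        // The facade coordinates the interaction; clients never see the parts.
        if (!inventory.reserve(itemId)) return "REJECTED";
        billing.charge(account, price);
        return "PLACED";
    }
}
```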
Class Diagram

Sequence Diagram


Strategies
• Stateless Session Façade Strategy
• Stateful Session Façade Strategy
Consequences



• Introduces a layer that provides services to remote clients
• Exposes a uniform coarse-grained interface
• Reduces coupling between the tiers
• Promotes layering, increases flexibility and maintainability
• Reduces complexity
• Improves performance, reduces fine-grained remote methods
• Centralizes security management
• Centralizes transaction control
• Exposes fewer remote interfaces to clients
Related Patterns
• Business Delegate
The Business Delegate is the client-side abstraction for a Session Façade. The Business
Delegate proxies or adapts the client request to a Session Façade that provides the
requested service.
• Business Object
The Session Façade encapsulates complex interactions of Business Objects that
participate in processing a use case request.
• Application Service
In some applications, Application Services are used to encapsulate complex business
logic and business rules. In these applications, the Session Façade implementations
become simpler because they mostly delegate to Application Services and Business
Objects.
• Data Access Object
The Session Façade might sometimes access a Data Access Object directly to obtain
and store data. This is typical in simpler applications that do not use Business Objects.
The Session Façades encapsulate trivial business logic and use Data Access Objects to
facilitate persistence.
• Service Locator
The Session Façade can use a Service Locator to look up other business components,
such as entity beans and session beans. This reduces the code complexity in the facade
and leverages the benefits of the Service Locator pattern.
• Broker [POSA1]
The Session Façade performs the role of a Broker to decouple the Business Objects
(374) and fine-grained services from their client tier.
• Facade [GoF]

The Session Façade is based on the Facade design pattern.
1. Application Service
Problem
You want to centralize business logic across several business-tier components and services.
Forces
• You want to minimize business logic in service facades.
• You have business logic acting on multiple Business Objects or services.
• You want to provide a coarser-grained service API over existing business-tier
components and services.
• You want to encapsulate use case-specific logic outside of individual Business Objects.



Solution
Use an Application Service to centralize and aggregate behavior to provide a uniform
service layer.
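A sketch of use-case logic spanning several Business Objects (the `Account`/`transfer` names are illustrative): the cross-object rule lives in the Application Service, not in a facade or in either object.

```java
// Application Service sketch: centralizes logic that acts on multiple
// Business Objects, keeping facades thin and the objects cohesive.
class Account {
    private double balance;
    Account(double balance) { this.balance = balance; }
    double getBalance() { return balance; }
    void debit(double amt) {
        if (amt > balance) throw new IllegalStateException("insufficient funds");
        balance -= amt;
    }
    void credit(double amt) { balance += amt; }
}

class TransferService {
    // Use-case-specific logic across two Business Objects.
    void transfer(Account from, Account to, double amount) {
        from.debit(amount);
        to.credit(amount);
    }
}
```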
Class Diagram

Sequence Diagram


Strategies
• Application Service Command Strategy
• GoF Strategy for Application Service Strategy



• Application Service Layer Strategy
Consequences
• Centralizes reusable business and workflow logic
• Improves reusability of business logic
• Avoids duplication of code
• Simplifies facade implementations
• Introduces additional layer in the business tier
Related Patterns
• Session Façade
Application Services provide the background infrastructure for Session Façades, which
become simpler to implement and contain less code because they can delegate the
business processing to Application Services.
• Business Object
In applications that use Business Objects, Application Services encapsulate cross-
Business Objects logic and interact with several Business Objects.
• Data Access Object
In some applications, an Application Service can use a Data Access Object directly to
access data in a data store.
• Service Layer [PEAA]
Application Service is similar to the Service Layer pattern in that both aim to promote a
service layer in your application. Service Layer explains how a set of services can be
used to create a boundary layer for your application.
• Transaction Script [PEAA]
When an Application Service is used without Business Objects (374), it becomes a
service object where you can implement your procedural logic. Transaction Script
describes the use of a procedural approach to implementing business logic in your
application.
1. Business Object
Problem
You have a conceptual domain model with business logic and relationships.
Forces
• You have a conceptual model containing structured, interrelated composite objects.
• You have a conceptual model with sophisticated business logic, validation and business
rules.
• You want to separate the business state and related behavior from the rest of the
application, improving cohesion and reusability.
• You want to centralize business logic and state in an application.
• You want to increase reusability of business logic and avoid duplication of code.
Solution
Use Business Objects to separate business data and logic using an object model.
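A sketch of the POJO Business Object strategy (names are illustrative): state and the business rules that guard it live together in the object.

```java
import java.util.ArrayList;
import java.util.List;

// Business Object sketch: Order owns its state and enforces its own rules.
class LineItem {
    final String sku; final int qty;
    LineItem(String sku, int qty) { this.sku = sku; this.qty = qty; }
}

class Order {
    private final List<LineItem> items = new ArrayList<>();
    private boolean submitted;

    void addItem(String sku, int qty) {
        if (submitted) throw new IllegalStateException("order already submitted");
        if (qty <= 0) throw new IllegalArgumentException("quantity must be positive");
        items.add(new LineItem(sku, qty));
    }

    void submit() {
        if (items.isEmpty()) throw new IllegalStateException("empty order");
        submitted = true;
    }

    int itemCount() { return items.size(); }
    boolean isSubmitted() { return submitted; }
}
```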
Class Diagram



Sequence Diagram



Strategies
• POJO Business Object Strategy
• Composite Entity Business Object Strategy
Consequences
• Promotes object-oriented approach to the business model implementation
• Centralizes business behavior and state, and promotes reuse
• Avoids duplication of and improves maintainability of code



• Separates persistence logic from business logic
• Promotes service-oriented architecture
• POJO implementations can induce, and are susceptible to, stale data
• Adds extra layer of indirection
• Can result in bloated objects
Related Patterns
• Composite Entity
You can implement Business Objects using local entity beans in EJB 2.x. Local entity
beans are recommended over remote entity beans to implement Business Objects.
Composite Entity discusses the details of implementing Business Objects with entity
beans.
• Application Service
Atomic business logic and rules related to a single Business Object are typically
implemented within that Business Object. However, in most applications, you also have
business logic that acts on multiple Business Objects. You might also need to provide
business behavior specific to use cases, client types, or channels on top of what the
Business Objects inherently implement. Using Application Service is an excellent way to
implement business logic that acts upon several Business Objects and thereby provides
a service encapsulation layer for the Business Objects.
• Transfer Object
You can use Transfer Objects to carry data to and from the Business Objects. Some
developers are tempted to use Transfer Objects as the internal state representation for
Business Objects. However, a Business Object wrapping a Transfer Object is not
recommended because it violates the intention of the Transfer Object pattern and tightly
couples the two types of objects. (See Business Object Wraps Transfer Object (Page
390)).
• Data Access Object
You can use Data Access Objects to facilitate Business Objects persistence, in
implementations that use custom JDBC mechanisms.
• Domain Store
You can use Domain Store to provide Business Objects persistence to leverage the
power of custom implementation of transparent persistence mechanisms, similar to
standard JDO implementations.
• Transaction Script and Domain Model [PEAA]
Transaction Script discusses a procedural implementation of business logic and Domain
Model discusses the object-oriented implementation. While the Domain Model and
Business Object pattern are very similar, the term business object is more commonly

used by developers and architects in the field and is more precise. We have seen
extensive use of the term business object in numerous projects that fall in line with the
concepts outlined in this pattern.
1. Composite Entity
Problem
You want to use entity beans to implement your conceptual domain model.
Forces
• You want to avoid the drawbacks of remote entity beans, such as network overhead and
remote inter-entity bean relationships.
• You want to leverage bean-managed persistence (BMP) using custom or legacy
persistence implementations.



• You want to implement parent-child relationships efficiently when implementing Business
Objects as entity beans.
• You want to encapsulate and aggregate existing POJO Business Objects with entity
beans.
• You want to leverage EJB container transaction management and security features.
• You want to encapsulate the physical database design from the clients.
Solution
Use a Composite Entity to implement persistent Business Objects using local entity beans
and POJOs. Composite Entity aggregates a set of related Business Objects into coarse-
grained entity bean implementations.
Class Diagram

Sequence Diagram



Strategies
• Composite Entity Remote Facade Strategy
• Composite Entity BMP Strategies
• Lazy Loading Strategy
• Store Optimization (Dirty Marker) Strategy
• Composite Transfer Object Strategy
Consequences
• Increases maintainability
• Improves network performance
• Reduces database schema dependency

• Increases object granularity
• Facilitates composite transfer object creation
Related Patterns
• Business Object
The Business Object pattern describes in general how domain model entities are
implemented in J2EE applications. Composite Entity is one of the strategies of the
Business Object pattern for implementing the Business Objects using entity beans.
• Transfer Object
The Composite Entity creates a composite Transfer Object and returns it to the client.
The Transfer Object is used to carry data from the Composite Entity and its dependent
objects.
• Session Façade
Composite Entities are generally not directly exposed to the application clients. Session
Façades are used to encapsulate the entity beans, add integration and web service
endpoints, and provide a simpler coarse-grained interface to clients.
• Transfer Object Assembler
When it comes to obtaining a composite transfer object from the Business Object, the
Composite Entity is similar to a Transfer Object Assembler. However, in this case, the
data sources for all the Transfer Objects in the composite are parts of the Composite
Entity itself, whereas for the Transfer Object Assembler, the data sources can be different
entity beans, session beans, Data Access Objects, Application Services, and so on.

1. Transfer Object
Problem
You want to transfer multiple data elements over a tier.
Forces
• You want clients to access components in other tiers to retrieve and update data.
• You want to reduce remote requests across the network.
• You want to avoid network performance degradation caused by chattier applications that
have high network traffic.
Solution
This pattern is also known as value object.
Use a Transfer Object to carry multiple data elements across a tier.
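In its simplest form a Transfer Object is just a serializable bundle of fields that crosses the tier in a single call (the `CustomerTO` name and fields are illustrative):

```java
import java.io.Serializable;

// Transfer Object sketch: immutable data carrier, no business logic.
class CustomerTO implements Serializable {
    private final String id;
    private final String name;
    private final String email;

    CustomerTO(String id, String name, String email) {
        this.id = id; this.name = name; this.email = email;
    }
    String getId() { return id; }
    String getName() { return name; }
    String getEmail() { return email; }
}
```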
Class Diagram

Sequence Diagram



Strategies
• Updatable Transfer Objects Strategy

• Multiple Transfer Objects Strategy
• Entity Inherits Transfer Object Strategy
Consequences
• Reduces network traffic
• Simplifies remote object and remote interface
• Transfers more data in fewer remote calls
• Reduces code duplication
• Introduces stale transfer objects
• Increases complexity due to synchronization and version control
Related Patterns



• Session Façade
The Session Façade frequently uses Transfer Objects as an exchange mechanism with
participating Business Objects. When the facade acts as a proxy to the underlying
business service, the Transfer Object obtained from the Business Objects can be passed
to the client.
• Transfer Object Assembler
The Transfer Object Assembler builds composite Transfer Objects from different data
sources. The data sources are usually session beans or entity beans whose clients ask
for data, and who would then provide their data to the Transfer Object Assembler as
Transfer Objects. These Transfer Objects are considered to be parts of the composite
object assembled by the Transfer Object Assembler.
• Value List Handler
The Value List Handler is another pattern that provides lists of dynamically constructed
Transfer Objects by accessing the persistent store at request time.
• Composite Entity
The Transfer Object pattern addresses transferring data across tiers. This is one aspect
of the design considerations for entity beans that can use the Transfer Object to carry
data to and from the entity beans.
1. Transfer Object Assembler
Problem
You want to obtain an application model that aggregates transfer objects from several business
components.
Forces
• You want to encapsulate business logic in a centralized manner and prevent
implementing it in the client.
• You want to minimize the network calls to remote objects when building a data
representation of the business-tier object model.
• You want to create a complex model to hand over to the client for presentation purposes.
• You want the clients to be independent of the complexity of model implementation, and
you want to reduce coupling between the client and the business components.
Solution
Use a Transfer Object Assembler to build an application model as a composite Transfer
Object. The Transfer Object Assembler aggregates multiple Transfer Objects from various
business components and services, and returns it to the client.
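A plain-Java sketch of the assembly step (all names are illustrative; in J2EE the two sources would be Session Façades, Business Objects, or DAOs):

```java
// Transfer Object Assembler sketch: pulls transfer objects from several
// sources and returns one composite model to the client.
class ProfileTO { final String name; ProfileTO(String name) { this.name = name; } }
class OrdersTO  { final int openOrders; OrdersTO(int n) { this.openOrders = n; } }

class CustomerModelTO {                       // the composite handed to the client
    final ProfileTO profile; final OrdersTO orders;
    CustomerModelTO(ProfileTO p, OrdersTO o) { this.profile = p; this.orders = o; }
}

class CustomerModelAssembler {
    CustomerModelTO assemble(String customerId) {
        ProfileTO profile = new ProfileTO("Ann");   // e.g. from a profile facade
        OrdersTO orders = new OrdersTO(3);          // e.g. from an order facade
        return new CustomerModelTO(profile, orders);
    }
}
```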
Class Diagram



Sequence Diagram


Strategies
• POJO Transfer Object Assembler Strategy
• Session Bean Transfer Object Assembler Strategy



Consequences
• Separates business logic, simplifies client logic
• Reduces coupling between clients and the application model
• Improves network performance
• Improves client performance
• Can introduce stale data
Related Patterns
• Transfer Object
The Transfer Object Assembler uses Transfer Objects to create and transport data to the
client. The created Transfer Objects carry the data representing the application model
from the business tier to the clients requesting the data.
• Business Object
The Transfer Object Assembler uses the required Business Objects to obtain data to
build the required application model.
• Composite Entity
Composite Entity produces composite transfer objects from its own data. On the other
hand, Transfer Object Assembler constructs the application model by obtaining data from
different sources, such as Session Façades, Business Objects, Application Services,
Data Access Objects, and other services.
• Session Façade
When Transfer Object Assembler is implemented as a session bean, you could view it as
a limited special application of the Session Façade. If the client needs to update the
business components that supply the application model data, it accesses a Session
Façade (session bean) that provides that update service.
• Data Access Object
A Transfer Object Assembler can obtain certain data directly from the data store using a
Data Access Object.
• Service Locator
The Transfer Object Assembler uses a Service Locator to locate and use various
business components.

1. Value List Handler


Problem
You have a remote client that wants to iterate over a large results list.

Forces
• You want to avoid the overhead of using EJB finder methods for large searches.
• You want to implement a read-only use-case that does not require a transaction.
• You want to provide the clients with an efficient search and iterate mechanism over a
large results set.
• You want to maintain the search results on the server side.
Solution
Use a Value List Handler to search, cache the results, and allow the client to traverse and
select items from the results.
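The core mechanism can be sketched in plain Java (names illustrative): run the search once, keep the results server-side, and hand out one page at a time.

```java
import java.util.List;

// Value List Handler sketch: caches the result list and serves pages,
// so the client never pulls the whole result set across the network.
class ValueListHandler {
    private final List<String> results;     // would hold transfer objects from a DAO
    private int index = 0;

    ValueListHandler(List<String> searchResults) { this.results = searchResults; }

    List<String> getNextPage(int pageSize) {
        int end = Math.min(index + pageSize, results.size());
        List<String> page = results.subList(index, end);
        index = end;
        return page;
    }

    int size() { return results.size(); }
}
```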
Class Diagram



Sequence Diagram


Strategies
• POJO Handler Strategy
• Value List Handler Session Façade Strategy



• Value List from Data Access Object Strategy
Consequences
• Provides efficient alternative to EJB finders
• Caches search results
• Provides flexible search capabilities
• Improves network performance
• Allows deferring entity bean transactions
• Promotes layering and separation of concerns
• Creating a large list of Transfer Objects can be expensive
Related Patterns
• Iterator [GoF]
This Value List Handler uses the Iterator pattern, described in the GoF book, Design
Patterns: Elements of Reusable Object-Oriented Software.
• Data Access Object
This Value List Handler uses the Data Access Object to perform searches using either
the DAO Transfer Object Collection strategy to obtain a collection of transfer objects or
using the DAO RowSet Wrapper List strategy to obtain a custom List implementation.
• Session Façade
The Value List Handler is often implemented as a specialized version of the Session
Façade responsible for managing the search results and providing a remote interface.
Some applications might have Session Façades that expose other business methods and
also include the functionality of the Value List Handler. However, it might be better to
keep the list handling functionality of the Value List Handler separate from the business
methods of a Session Façade. Thus, if the Value List Handler needs a remote interface,
provide a dedicated session bean implementation that encapsulates and facades the
Value List Handler.
1. Data Access Object
Problem
You want to encapsulate data access and manipulation in a separate layer.
Forces
• You want to implement data access mechanisms to access and manipulate data in a
persistent storage.
• You want to decouple the persistent storage implementation from the rest of your
application.
• You want to provide a uniform data access API for a persistence mechanism to various
types of data sources, such as RDBMS, LDAP, OODB, XML repositories, flat files, and
so on.
• You want to organize data access logic and encapsulate proprietary features to facilitate
maintainability and portability.
Solution
Use a Data Access Object to abstract and encapsulate all access to the persistent store.
The Data Access Object manages the connection with the data source to obtain and store
data. It encapsulates the persistent store technology (RDBMS, LDAP, ...). It isolates the
persistent storage implementation.
Possible Clients of this pattern are Business Object, Session Façade, Application Service, Value
List Handler, Transfer Object Assembler.
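A sketch of the Custom DAO strategy (names illustrative): clients depend only on the interface; the in-memory implementation here stands in for a JDBC-, LDAP-, or file-backed one, which could be swapped without touching client code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Data Access Object sketch: the interface hides the persistent store.
class Customer {
    final String id; final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

interface CustomerDAO {
    void insert(Customer c);
    Optional<Customer> findById(String id);
}

class InMemoryCustomerDAO implements CustomerDAO {
    private final Map<String, Customer> table = new HashMap<>();  // stands in for a table
    public void insert(Customer c) { table.put(c.id, c); }
    public Optional<Customer> findById(String id) { return Optional.ofNullable(table.get(id)); }
}
```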



Class Diagram

Sequence Diagram



Strategies
• Custom Data Access Object Strategy
• Data Access Object Factory Strategies

• Transfer Object Collection Strategy
• Cached RowSet Strategy
• Read Only RowSet Strategy
• RowSet Wrapper List Strategy
Consequences
• Centralizes control with loosely coupled handlers
• Enables transparency
• Provides object-oriented view and encapsulates database schemas
• Enables easier migration
• Reduces code complexity in clients
• Organizes all data access code into a separate layer



• Adds extra layer
• Needs class hierarchy design (Factory Method Strategies)
• Introduces complexity to enable object-oriented design (RowSet Wrapper List Strategy)
Related Patterns
• Transfer Object
Data access objects in their most basic form use transfer objects to transport data to and
from their clients. Transfer Objects are used in other strategies of Data Access Objects.
The RowSet Wrapper List strategy returns a list as a transfer object.
• Factory Method [GoF] and Abstract Factory [GoF]
The Data Access Object Factory strategies use the Factory Method pattern to implement
the concrete factories and its products (DAOs). For added flexibility, the Abstract Factory
pattern is used as described in the Data Access Object Factory strategy.
• Broker [POSA1]
The DAO pattern is related to the Broker pattern, which describes approaches for
decoupling clients and servers in distributed systems. The DAO pattern more specifically
applies this pattern to decoupling the resource tier from clients in another tier, such as the
business or presentation tier.
• Transfer Object Assembler
The Transfer Object Assembler uses the Data Access Object to obtain data to build the
composite transfer object it needs to assemble and send as the model data to the client.
• Value List Handler
The value list handler needs a list of results to act upon. Such a list is often obtained
using a Data Access Object, which can return a set of results. While the value list handler
has an option of obtaining a RowSet from the Data Access Object, it might be better to
obtain and use a RowSet Wrapper List instead.
Real World Product / Implementation
• CodeFutures adopts and implements the DAO Pattern in their Firestorm DAO product.

1. Service Activator
Problem
You want to invoke services asynchronously.
Forces
• You want to invoke business services, POJOs, or EJB components in an asynchronous

manner.
• You want to integrate publish/subscribe and point-to-point messaging to enable
asynchronous processing services.
• You want to perform a business task that is logically composed of several business tasks.
Solution
Use a Service Activator to receive asynchronous requests and invoke one or more
business services.
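A container-free sketch of the idea (names illustrative): in J2EE the activator is typically a message-driven bean on a JMS destination; here a thread draining a blocking queue plays that role.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Service Activator sketch: receives asynchronous requests and invokes
// a business service for each one.
class ServiceActivator implements Runnable {
    private final BlockingQueue<String> queue;               // stands in for a JMS queue
    private final List<String> processed = new CopyOnWriteArrayList<>();

    ServiceActivator(BlockingQueue<String> queue) { this.queue = queue; }

    public void run() {
        try {
            while (true) {
                String request = queue.take();               // blocks until a message arrives
                if (request.equals("STOP")) return;          // poison pill ends the listener
                processed.add(invokeBusinessService(request));
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private String invokeBusinessService(String request) {
        return "handled:" + request;                         // would delegate to a Session Façade
    }

    List<String> getProcessed() { return processed; }
}
```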
Class Diagram



Sequence Diagram


Strategies
• POJO Service Activator Strategy
• MDB Service Activator Strategy
• Service Activator Aggregator Strategy
• Response Strategies



○ Database Response Strategy
○ Email Response Strategy
○ JMS Message Response Strategy
Consequences
• Integrates JMS into enterprise applications
• Provides asynchronous processing for any business-tier component
• Enables standalone JMS listener
Related Patterns
• Session Façade
The Session Façade encapsulates the complexity of the system and provides coarse-
grained access to business objects. A Service Activator can access a Session Façade as
a business service to invoke business processing.
• Application Services
An Application Service can also be a kind of business service that a Service Activator
invokes to process the request.
• Business Delegate
The Service Activator typically uses a Business Delegate to access a Session Façade.
This results in simpler code for the Service Activator and results in Business Delegate
reuse across different tiers.
• Service Locator
The client can use the Service Locator pattern to look up and create JMS-related service
objects. The Service Activator can use the Service Locator pattern to look up and create
enterprise bean components.
• Half-Sync/Half-Async [POSA2]
The Service Activator is related to the Half-Sync/Half-Async pattern. The pattern
describes architectural decoupling by suggesting different layers for synchronous and
asynchronous processing, and an intermediate queuing layer in between.
• Aggregator [EIP]
The Aggregator pattern discusses the problem of converting a request into several
asynchronous tasks and aggregating the results. The Service Activator Aggregator
strategy is based on similar concepts.
1. Domain Store
Problem
You want to separate persistence from your object model.

Forces
• You want to avoid putting persistence details in your Business Objects.
• You do not want to use entity beans.
• Your application might be running in a web container.
• Your object model uses inheritance and complex relationships.
Solution
Use a Domain Store to transparently persist an object model. Unlike J2EE’s container-
managed persistence and bean-managed persistence, which include persistence support
code in the object model, Domain Store's persistence mechanism is separate from the
object model.
Class Diagram



Sequence Diagram


Strategies
• Custom Persistence Strategy
• JDO Strategy
Consequences



• Creating a custom persistence framework is a complex task
• Multi-layer object tree loading and storing requires optimization techniques
• Improves understanding of persistence frameworks
• A full-blown persistence framework might be overkill for a small object model
• Improves testability of your persistent object model
• Separates business object model from persistence logic
Related Patterns
• Unit of Work [PEAA]
Maintains a list of objects affected by a business transaction. Unit of Work closely relates to
PersistenceManager.
• Query Object [PEAA]
An object that represents a database query. Relates to the Query role described in
Domain Store.
• Data Mapper [PEAA]
A layer of Mappers that moves data between objects and database. Relates to
StateManager.
• Table Data Gateway [PEAA]
An object that acts as gateway to a database table. Relates to StoreManager.
• Dependent Mapping [PEAA]
Has one class perform the database mapping for a child class. Relates to parent
dependent object and PersistMap.
• Domain Model [PEAA]
An object model that has behavior and data. Relates to BusinessObject.
• Data Transfer Object [PEAA]
Same as Transfer Object.
• Identity Map [PEAA]
Ensures each object only gets loaded once. Relates to StateManager.
• Lazy Load [PEAA]
An object which contains partial data and knows how to get complete data. Relates to
StateManager and StoreManager interaction for lazy loading.
1. Web Service Broker
Problem
You want to provide access to one or more services using XML and web protocols.

Forces
• You want to reuse and expose existing services to clients.
• You want to monitor and potentially limit the usage of exposed services, based on your
business requirements and system resource usage.
• Your services must be exposed using open standards to enable integration of
heterogeneous applications.
• You want to bridge the gap between business requirements and existing service
capabilities.
Solution
Use a Web Service Broker to expose and broker one or more services using XML and web
protocols.
Class Diagram



Sequence Diagram

Strategies

• Custom XML Messaging Strategy
• Java Binder Strategy
• JAX-RPC Strategy
Consequences
• Introduces a layer between client and service
• Existing remote Session Façades (341) need to be refactored to support local access
• Network performance may be impacted due to web protocols
Related Patterns
• Aggregator [EIP]
• Application Service
Application Service components can be called from Web Service Broker components.



• Session Façade
Session Façade components can be called from Web Service Broker components.
• Mediator [POSA1]
Mediator is a hub for interaction, as is Web Service Broker.
• Message Router [EIP]

7.2 From a list, select the most appropriate pattern for a given scenario.
Patterns are limited to those documented in this book - Gamma, Erich;
Richard Helm, Ralph Johnson, and John Vlissides (1995). Design Patterns:
Elements of Reusable Object-Oriented Software - and are named using the
names given in that book.

Ref. • [DESIGN_PATTERNS]

# OO Pattern — Definition

1. Abstract Factory — To provide an interface for creating families of related or
dependent objects without specifying their concrete classes.
2. Adapter — To convert the interface of a class into another interface clients expect.
3. Bridge — To decouple an abstraction from its implementation so that the two can vary
independently. The class itself can be thought of as the abstraction and what the class
can do as the implementation, so it separates what the class is from what it does.
4. Builder — To separate the construction of a complex object from its representation so
that the same construction process can create different representations.
5. Chain of Responsibility — To avoid coupling the sender of a request to its receiver by
giving more than one object a chance to handle the request. Chain the receiving objects
and pass the request along the chain until an object handles it.
6. Command — To encapsulate an operational request of a Service into an entity
(Command), so that the same Service can be used to trigger different operations.
Optionally, this can be used to support undo, logging, queuing, and other additional
behaviors.
7. Composite — To compose objects into tree structures to represent part-whole
hierarchies. Composite lets clients treat individual objects and compositions of objects
uniformly.
8. Decorator — To attach additional responsibilities to an object dynamically. In effect,
decorators provide a flexible alternative to sub-classing for extending functionality.
9. Façade — To provide a unified interface to a set of interfaces in a subsystem. Façade
defines a higher-level interface that makes the subsystem easier to use.
10. Factory Method — To define an interface for creating an object, but let subclasses
decide which class to instantiate. Factory Method lets a class defer instantiation to
subclasses.
11. Flyweight — A shared object that can be used in multiple contexts simultaneously.
12. Interpreter — Given a language and a representation for its grammar, an interpreter
uses the representation to interpret sentences in the language.
13. Iterator — To provide a way to access the elements of an aggregate object
sequentially without exposing its underlying representation.
14. Mediator — To define an object that encapsulates how a set of objects interact.
15. Memento — Without violating encapsulation, to capture and externalize an object's
internal state so that the object can be restored to this state later.
16. Observer — To define a one-to-many dependency among objects so that when one
object changes state, all its dependents are notified and updated automatically.
17. Prototype — When instances of a class can have one of only a few different
combinations of state, it may be more convenient to install a corresponding number of
prototypes and clone them (less expensive at runtime) rather than instantiating the class
manually, each time with the appropriate state.
18. Proxy — To provide a placeholder for another object to control access to it.
19. Singleton — To ensure a class only has one instance, and provide a global point of
access to it.
20. State — To allow an object to alter its behavior when its internal state changes.
21. Strategy — To define a family of algorithms, encapsulate each one, and make them
interchangeable. Strategy lets the algorithm vary independently from clients that use it.
22. Template Method — To define the skeleton of an algorithm in an operation, deferring
some steps to subclasses. Template Method lets subclasses redefine certain steps of an
algorithm without changing the algorithm's structure.
23. Visitor — To represent an operation to be performed on the elements of an object
structure. Visitor lets you define a new operation without changing the classes of the
elements on which it operates.

Classifications of OO patterns

                          Purpose
 Scope      Creational         Structural   Behavioral
 Class      Factory Method     Adapter      Interpreter
                                            Template Method
 Object     Abstract Factory   Adapter      Chain of Responsibility
            Builder            Bridge       Command
            Prototype          Composite    Iterator
            Singleton          Decorator    Mediator
                               Façade       Memento
                               Flyweight    Observer
                               Proxy        State
                                            Strategy
                                            Visitor
We classify OO design patterns by two criteria in the table above. The first criterion,
called purpose, reflects what a pattern does. Patterns can have creational, structural, or
behavioral purpose. Creational patterns concern the process of object creation.
Structural patterns deal with the composition of classes or objects. Behavioral patterns
characterize the ways in which classes or objects interact and distribute responsibility.

The second criterion, called scope, specifies whether the pattern applies primarily to
classes or to objects. Class patterns deal with relationships between classes and their
subclasses. These relationships are established through inheritance, so they are static—
fixed at compile-time. Object patterns deal with object relationships, which can be



changed at run-time and are more dynamic. Almost all patterns use inheritance to some
extent. So the only patterns labeled "class patterns" are those that focus on class
relationships. Note that most patterns are in the Object scope.

Creational class patterns defer some part of object creation to subclasses, while
Creational object patterns defer it to another object. The Structural class patterns use
inheritance to compose classes, while the Structural object patterns describe ways to
assemble objects. The Behavioral class patterns use inheritance to describe algorithms
and flow of control, whereas the Behavioral object patterns describe how a group of
objects cooperate to perform a task that no single object can carry out alone.

There are other ways to organize the patterns. Some patterns are often used together.
For example, Composite is often used with Iterator or Visitor. Some patterns are
alternatives: Prototype is often an alternative to Abstract Factory. Some patterns result in
similar designs even though the patterns have different intents. For example, the
structure diagrams of Composite and Decorator are similar.

1. Abstract Factory
Provide an interface for creating families of related or dependent objects without
specifying their concrete classes.



The Abstract Factory Pattern provides a way to encapsulate a group of individual
factories that have a common theme. In normal usage, the client software would
create a concrete implementation of the abstract factory and then use the generic
interfaces to create the concrete objects that are part of the theme. The client does
not know (nor care) about which concrete objects it gets from each of these internal
factories since it uses only the generic interfaces of their products. This pattern
separates the details of implementation of a set of objects from its general usage.
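A minimal Java sketch of the idea (the widget names here are illustrative, not from the exam objectives): the client renders a dialog through the abstract factory and product interfaces only, never naming a concrete class.

```java
// Product interfaces: the client only ever sees these.
interface Button { String paint(); }
interface Checkbox { String paint(); }

// The abstract factory creates a family of related products.
interface WidgetFactory {
    Button createButton();
    Checkbox createCheckbox();
}

// One concrete factory per product family.
class MotifFactory implements WidgetFactory {
    public Button createButton()     { return () -> "motif button"; }
    public Checkbox createCheckbox() { return () -> "motif checkbox"; }
}

class PMFactory implements WidgetFactory {
    public Button createButton()     { return () -> "pm button"; }
    public Checkbox createCheckbox() { return () -> "pm checkbox"; }
}

public class AbstractFactoryDemo {
    // The client code works only against the generic interfaces.
    public static String renderDialog(WidgetFactory factory) {
        return factory.createButton().paint() + " / " + factory.createCheckbox().paint();
    }
    public static void main(String[] args) {
        System.out.println(renderDialog(new MotifFactory())); // motif button / motif checkbox
        System.out.println(renderDialog(new PMFactory()));    // pm button / pm checkbox
    }
}
```

Swapping the whole look-and-feel means passing a different factory; `renderDialog` never changes.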
2. Adapter
Convert the interface of a class into another interface clients expect. Adapter lets
classes work together that couldn't otherwise because of incompatible interfaces.
The form of the Adapter shown below uses delegation from the adapter to the
foreign class to reuse the pre-existing behavior:

The form of the Adapter (called the Class Adapter) shown below uses inheritance:



The Adapter Design Pattern 'adapts' one interface for a class into one that a client
expects. An adapter allows classes to work together that normally could not because
of incompatible interfaces by wrapping its own interface around that of an already
existing class. The adapter is also responsible for handling any logic necessary to
transform data into a form that is useful for the consumer.
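A minimal Java sketch of the delegation (object adapter) form described above; the "legacy" class and its methods are hypothetical stand-ins for a pre-existing API:

```java
// Target interface: what the client expects.
interface Logger { void log(String message); }

// Pre-existing ("adaptee") class with an incompatible interface.
class LegacyWriter {
    private final StringBuilder out = new StringBuilder();
    void writeLine(String line) { out.append(line).append('\n'); }
    String contents() { return out.toString(); }
}

// The adapter implements the target interface and delegates to the adaptee,
// also transforming the data into the form the consumer needs.
class LegacyWriterAdapter implements Logger {
    private final LegacyWriter writer;
    LegacyWriterAdapter(LegacyWriter writer) { this.writer = writer; }
    public void log(String message) {
        writer.writeLine("[LOG] " + message);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        LegacyWriter legacy = new LegacyWriter();
        Logger logger = new LegacyWriterAdapter(legacy);
        logger.log("started");
        System.out.print(legacy.contents()); // prints "[LOG] started"
    }
}
```

The class-adapter form would instead extend `LegacyWriter` and implement `Logger`; Java's single inheritance makes the delegation form the more common choice.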
3. Bridge
Decouple an abstraction from its implementation so that the two can vary
independently.
When an abstraction can have one of several possible implementations, the usual
way to accommodate them is to use inheritance. An abstract class defines the
interface to the abstraction, and concrete subclasses implement it in different ways.
But this approach isn't always flexible enough. Inheritance binds an implementation
to the abstraction permanently, which makes it difficult to modify, extend, and reuse
abstractions and implementations independently.
When a class varies often, the features of object-oriented programming become very
useful because changes to a program's code can be made easily with minimal prior
knowledge about the program. The Bridge Pattern is useful when not only the class
itself varies often but also what the class does. The class itself can be thought of as
the abstraction and what the class can do as the implementation.



Separate a varying entity from a varying behavior (AKA separate an "abstraction"
from its "implementation"), so that these issues can vary independently. Sometimes
we say it this way: separate what something is from what it does, where both of
these things vary for different reasons.

4. Builder
Separate the construction of a complex object from its representation so that the
same construction process can create different representations.
The Builder Design Pattern encapsulates the logic of how to put together a
complex object so that the client just requests a configuration and the Builder directs
the logic of building it.
The Builder pattern allows a client object to construct a complex object by specifying
only its type and content. The client is shielded from the details of the object's
construction.
It is a pattern for step-by-step creation of a complex object, so that the same
construction process can create different representations; this step-by-step routine
also gives finer control over the construction process. All the different builders
generally inherit from an abstract builder class that declares the general functions
to be used by the director to let the builder create the product in parts.

Builder has a similar motivation to the Abstract Factory, but whereas in that pattern,
the client uses the Abstract Factory class methods to create its own object, in Builder
the client instructs the builder class on how to create the object and then asks it for
the result. How the class is put together is up to the Builder class. It's a subtle
difference.
The Builder pattern is applicable when the algorithm for creating a complex object
should be independent of the parts that make up the object and how they are
assembled and the construction process must allow different representations for the
object that's constructed.



Use the Builder pattern when:
• The algorithm for creating a complex object should be independent of the
parts that make up the object and how they're assembled.
• The construction process must allow different representations for the object
that's constructed.
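A minimal Java sketch of director and builders (the meal-ordering domain is purely illustrative): the same construction steps, driven by the director, yield two different representations depending on which builder is supplied.

```java
// Abstract builder: declares the part-building steps the director will call.
interface MealBuilder {
    void addMain(String main);
    void addDrink(String drink);
    String getResult();
}

// One concrete builder produces a flat text representation...
class TextMealBuilder implements MealBuilder {
    private final StringBuilder sb = new StringBuilder();
    public void addMain(String main)   { sb.append("main=").append(main).append(';'); }
    public void addDrink(String drink) { sb.append("drink=").append(drink).append(';'); }
    public String getResult() { return sb.toString(); }
}

// ...another produces a JSON-like representation from the same steps.
class JsonMealBuilder implements MealBuilder {
    private String main = "", drink = "";
    public void addMain(String m)  { main = m; }
    public void addDrink(String d) { drink = d; }
    public String getResult() { return "{\"main\":\"" + main + "\",\"drink\":\"" + drink + "\"}"; }
}

// The director: knows the order of construction steps, not the representation.
class Waiter {
    public String construct(MealBuilder builder) {
        builder.addMain("pasta");
        builder.addDrink("water");
        return builder.getResult();
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        System.out.println(new Waiter().construct(new TextMealBuilder())); // main=pasta;drink=water;
        System.out.println(new Waiter().construct(new JsonMealBuilder()));
    }
}
```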



5. Chain of Responsibility
Avoid coupling the sender of a request to its receiver by giving more than one object
a chance to handle the request. Chain the receiving objects and pass the request
along the chain until an object handles it.
The Chain of Responsibility pattern is a design pattern consisting of a source of
command objects and a series of processing objects. Each processing object contains
a set of logic that describes the types of command objects that it can handle, and
how to pass off those that it cannot to the next processing object in the chain. A
mechanism also exists for adding new processing objects to the end of this chain.



Use Chain of Responsibility when:
• More than one object may handle a request, and the handler isn't known a
priori. The handler should be ascertained automatically.
• You want to issue a request to one of several objects without specifying the
receiver explicitly.
• The set of objects that can handle a request should be specified dynamically.
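A minimal Java sketch of such a chain (the support-ticket domain is illustrative): each handler either handles the request or passes it to its successor, and the sender never knows which handler, if any, will take it.

```java
// Each processing object knows whether it can handle a request
// and how to pass it to the next object in the chain.
abstract class Handler {
    private Handler next;
    Handler setNext(Handler next) { this.next = next; return next; }

    String handle(String request) {
        if (canHandle(request)) return describe(request);
        return next != null ? next.handle(request) : "unhandled: " + request;
    }

    abstract boolean canHandle(String request);
    abstract String describe(String request);
}

class BillingHandler extends Handler {
    boolean canHandle(String r) { return r.startsWith("billing"); }
    String describe(String r)   { return "billing team took: " + r; }
}

class TechHandler extends Handler {
    boolean canHandle(String r) { return r.startsWith("tech"); }
    String describe(String r)   { return "tech team took: " + r; }
}

public class ChainDemo {
    public static void main(String[] args) {
        Handler chain = new BillingHandler();
        chain.setNext(new TechHandler());    // new handlers can be appended at runtime
        System.out.println(chain.handle("tech: reboot"));    // tech team took: tech: reboot
        System.out.println(chain.handle("legal: contract")); // unhandled: legal: contract
    }
}
```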

6. Command
Encapsulate a request as an object, thereby letting you parameterize clients with
different requests, queue or log requests, and support undoable operations.
Encapsulate an operational request of a Service into an entity (Command), so that
the same Service can be used to trigger different operations. Optionally, this can be
used to support undo, logging, queuing, and other additional behaviors. This is
typically used to separate non-changing code (framework code) from changing code
(plugin code) when a reusable object framework is created.
Use the Command Design Pattern when you want to:
• Parameterize objects by an action to perform. You can express such
parameterization in a procedural language with a callback function, that is, a
function that's registered somewhere to be called at a later point. Commands
are an object-oriented replacement for callbacks.
• Specify, queue, and execute requests at different times. A Command object
can have a lifetime independent of the original request. If the receiver of a
request can be represented in an address space-independent way, then you
can transfer a command object for the request to a different process and

fulfill the request there.
• Support undo. The Command's execute operation can store state for
reversing its effects in the command itself. The Command interface must
have an added unexecute operation that reverses the effects of a previous
call to execute. Executed commands are stored in a history list. Unlimited-
level undo and redo is achieved by traversing this list backwards and
forwards calling unexecute and execute, respectively.
• Support logging changes so that they can be reapplied in case of a system
crash. By augmenting the Command interface with load and store
operations, you can keep a persistent log of changes. Recovering from a
crash involves reloading logged commands from disk and reexecuting them
with the execute operation.



• Structure a system around high-level operations built on primitive
operations. Such a structure is common in information systems that support
transactions. A transaction encapsulates a set of changes to data. The
Command pattern offers a way to model transactions. Commands have a
common interface, letting you invoke all transactions the same way. The
pattern also makes it easy to extend the system with new transactions.
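A minimal Java sketch of the execute/unexecute mechanism described above (the text-editing receiver is illustrative): commands encapsulate requests, an invoker keeps a history list, and undo replays `unexecute` from that list.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The receiver the commands operate on.
class Document {
    final StringBuilder text = new StringBuilder();
}

interface Command {
    void execute();
    void unexecute(); // reverses the effect of execute, enabling undo
}

class AppendCommand implements Command {
    private final Document doc;
    private final String chunk;
    AppendCommand(Document doc, String chunk) { this.doc = doc; this.chunk = chunk; }
    public void execute()   { doc.text.append(chunk); }
    public void unexecute() { doc.text.setLength(doc.text.length() - chunk.length()); }
}

// The invoker stores executed commands in a history list.
class Invoker {
    private final Deque<Command> history = new ArrayDeque<>();
    void run(Command c) { c.execute(); history.push(c); }
    void undo()         { if (!history.isEmpty()) history.pop().unexecute(); }
}

public class CommandDemo {
    public static void main(String[] args) {
        Document doc = new Document();
        Invoker invoker = new Invoker();
        invoker.run(new AppendCommand(doc, "Hello"));
        invoker.run(new AppendCommand(doc, ", world"));
        invoker.undo();                 // reverses the last command
        System.out.println(doc.text);   // Hello
    }
}
```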

7. Composite
Compose objects into tree structures to represent part-whole hierarchies. Composite

lets clients treat individual objects and compositions of objects uniformly.
Model simple and complex components in such a way as to allow client entities to
consume their behavior in the same way. The Composite Design Pattern captures
hierarchical relationships of varying complexity and structure.



Simple component - a single class, also called a "Leaf"
Complex component - a class that contains pointers to sub-ordinate or "child"
instances, and may delegate some or all of its responsibilities to them. These child
instances may be simple or complex themselves. Also called a "Node".
Use the Composite pattern when:
• You want to represent part-whole hierarchies of objects.
• You want clients to be able to ignore the difference between architecture of
objects and individual objects. Clients will treat all objects in the composite
structure uniformly.
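A minimal Java sketch of leaf and node sharing one component type (the graphics domain is illustrative): the client calls `count()` without caring whether it holds a single `Dot` or a whole `Group`.

```java
import java.util.ArrayList;
import java.util.List;

// Common component interface: clients treat leaves and composites uniformly.
interface Graphic { int count(); }

// Simple component ("leaf").
class Dot implements Graphic {
    public int count() { return 1; }
}

// Complex component ("node"): holds child components and delegates to them.
class Group implements Graphic {
    private final List<Graphic> children = new ArrayList<>();
    Group add(Graphic g) { children.add(g); return this; }
    public int count() {
        int total = 0;
        for (Graphic g : children) total += g.count();
        return total;
    }
}

public class CompositeDemo {
    public static void main(String[] args) {
        Graphic picture = new Group()
                .add(new Dot())
                .add(new Group().add(new Dot()).add(new Dot())); // groups nest arbitrarily
        System.out.println(picture.count()); // 3
    }
}
```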

8. Decorator
Attach additional responsibilities to an object dynamically. Decorators provide a
flexible alternative to sub-classing (i.e. inheritance) for extending functionality.
The Decorator Pattern works by wrapping the new "decorator" object around the
original object, which is typically achieved by passing the original object as a
parameter to the constructor of the decorator, with the decorator implementing the
new functionality. The interface of the original object needs to be maintained by the

decorator.
Decorators are alternatives to subclassing. Subclassing adds behavior at compile
time whereas decorators provide a new behavior at runtime.
This difference becomes most important when there are several independent ways of
extending functionality. In some object-oriented programming languages, classes
cannot be created at runtime, and it is typically not possible to predict what
combinations of extensions will be needed at design time. This would mean that a
new class would have to be made for every possible combination. By contrast,
decorators are objects, created at runtime, and can be combined on a per-use basis.
An example of the decorator pattern is the Java I/O Streams implementation.



Use Decorator:
• To add responsibilities to individual objects dynamically and transparently,
that is, without affecting other objects.
• For responsibilities that can be withdrawn.
• When extension by subclassing is impractical. Sometimes a large number of
independent extensions are possible and would produce an explosion of
subclasses to support every combination. Or a class definition may be hidden
or otherwise unavailable for subclassing.
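A minimal Java sketch of runtime wrapping (the message classes are illustrative): each decorator takes the wrapped object in its constructor, keeps the original interface, and decorators compose per use, much like `new BufferedReader(new FileReader(...))` in `java.io`.

```java
// The original interface, which every decorator must maintain.
interface Message { String content(); }

class PlainMessage implements Message {
    public String content() { return "hello"; }
}

// Base decorator: wraps another Message passed to the constructor.
abstract class MessageDecorator implements Message {
    protected final Message inner;
    MessageDecorator(Message inner) { this.inner = inner; }
}

class Uppercase extends MessageDecorator {
    Uppercase(Message inner) { super(inner); }
    public String content() { return inner.content().toUpperCase(); }
}

class Bracketed extends MessageDecorator {
    Bracketed(Message inner) { super(inner); }
    public String content() { return "[" + inner.content() + "]"; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // Behaviors are combined at runtime, no subclass-per-combination needed.
        Message m = new Bracketed(new Uppercase(new PlainMessage()));
        System.out.println(m.content()); // [HELLO]
    }
}
```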

9. Façade
Provide a unified interface to a set of interfaces in a subsystem. Façade defines a
higher-level interface that makes the subsystem easier to use.
Structuring a system into subsystems helps reduce complexity. A common design

goal is to minimize the communication and dependencies between subsystems. One
way to achieve this goal is to introduce a Façade object that provides a single,
simplified interface to the more general facilities of a subsystem.
Façade allows for the re-use of a valuable sub-system without coupling to the
specifics of its nature.



A Façade is an object that provides a simplified interface to a larger body of code,
such as a class library. A Façade can:
• Make a software library easier to use and understand, since the Façade has
convenient methods for common tasks.
• Make code that uses the library more readable, for the same reason.
• Reduce dependencies of outside code on the inner workings of a library, since
most code uses the Façade, thus allowing more flexibility in developing the
system.
• Wrap a poorly designed collection of APIs with a single well-designed API.
Use the Façade pattern when:
• You want to provide a simple interface to a complex subsystem. Subsystems
often get more complex as they evolve. Most patterns, when applied, result in
more and smaller classes. This makes the subsystem more reusable and

Page
easier to customize, but it also becomes harder to use for clients that don't
need to customize it. A Façade can provide a simple default view of the
subsystem that is good enough for most clients. Only clients needing more
198
customizability will need to look beyond the Façade.
• There are many dependencies between clients and the implementation
classes of an abstraction. Introduce a Façade to decouple the subsystem from
clients and other subsystems, thereby promoting subsystem independence
and portability.
• You want to layer your subsystems. Use a Façade to define an entry point to
each subsystem level. If subsystems are dependent, then you can simplify
the dependencies between them by making them communicate with each
other solely through their Façades.
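A minimal Java sketch of a single simplified entry point (the order subsystem classes are illustrative): the client calls one Façade method instead of coordinating three subsystem classes itself.

```java
// Subsystem classes: individually usable, but clients rarely need the detail.
class Inventory { boolean inStock(String sku) { return sku.startsWith("A"); } }
class Payment   { boolean charge(double amount) { return amount > 0; } }
class Shipping  { String ship(String sku) { return "shipped " + sku; } }

// The Façade: one higher-level operation that hides the subsystem choreography.
class OrderFacade {
    private final Inventory inventory = new Inventory();
    private final Payment payment = new Payment();
    private final Shipping shipping = new Shipping();

    public String placeOrder(String sku, double amount) {
        if (!inventory.inStock(sku)) return "out of stock";
        if (!payment.charge(amount)) return "payment failed";
        return shipping.ship(sku);
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        OrderFacade facade = new OrderFacade();
        System.out.println(facade.placeOrder("A-100", 9.99)); // shipped A-100
        System.out.println(facade.placeOrder("B-200", 9.99)); // out of stock
    }
}
```

Clients that need finer control can still use `Inventory`, `Payment`, and `Shipping` directly; the Façade only adds a convenient default path.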



10. Factory Method
Define an interface for creating an object, but let subclasses decide which class to
instantiate. Factory Method lets a class defer instantiation to subclasses.

Use the Factory Method pattern when:


• A class can't anticipate the class of objects it must create.
• A class wants its subclasses to specify the objects it creates.
• Classes delegate responsibility to one of several helper subclasses, and you
want to localize the knowledge of which helper subclass is the delegate.
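A minimal Java sketch of deferred instantiation (the logistics domain is illustrative): the abstract creator's code uses the product through its interface, while each subclass's factory method decides which concrete class to instantiate.

```java
// The product interface.
interface Transport { String deliver(); }

class Truck implements Transport { public String deliver() { return "by road"; } }
class Ship  implements Transport { public String deliver() { return "by sea"; } }

// The creator: its logic works against Transport, deferring creation to subclasses.
abstract class Logistics {
    protected abstract Transport createTransport(); // the factory method

    public String planDelivery() {
        return "delivering " + createTransport().deliver();
    }
}

class RoadLogistics extends Logistics {
    protected Transport createTransport() { return new Truck(); }
}

class SeaLogistics extends Logistics {
    protected Transport createTransport() { return new Ship(); }
}

public class FactoryMethodDemo {
    public static void main(String[] args) {
        System.out.println(new RoadLogistics().planDelivery()); // delivering by road
        System.out.println(new SeaLogistics().planDelivery());  // delivering by sea
    }
}
```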

11. Flyweight
Use sharing to support large numbers of fine-grained objects efficiently.
A Flyweight is a shared object that can be used in multiple contexts simultaneously.
The Flyweight acts as an independent object in each context - it's indistinguishable
from an instance of the object that's not shared. Flyweights cannot make
assumptions about the context in which they operate. The key concept here is the
distinction between intrinsic and extrinsic state. Intrinsic state is stored in the
flyweight; it consists of information that's independent of the flyweight's context,

thereby making it sharable. Extrinsic state depends on and varies with the
flyweight's context and therefore can't be shared. Client objects are responsible for
passing extrinsic state to the flyweight when it needs it.



A Flyweight is an object that minimizes memory occupation by sharing as much data
as possible with other similar objects; it is a way to use objects in large numbers
when a simple representation would use an unacceptable amount of memory. Often
some parts of the object state cannot be shared and it's common to put them in
external data structures and pass them to the flyweight objects temporarily when
they are used.
The Flyweight pattern's effectiveness depends heavily on how and where it's used.
Apply the Flyweight pattern when ALL of the following are true:
• An application uses a large number of objects.
• Storage costs are high because of the sheer quantity of objects.
• Most object state can be made extrinsic.
• Many groups of objects may be replaced by relatively few shared objects once
extrinsic state is removed.
• The application doesn't depend on object identity. Since flyweight objects
may be shared, identity tests will return true for conceptually distinct objects.
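A minimal Java sketch of the intrinsic/extrinsic split (glyph rendering is the classic illustration): the character is intrinsic state stored in the shared flyweight, while the position is extrinsic state the client passes in at each use.

```java
import java.util.HashMap;
import java.util.Map;

// The flyweight: holds only intrinsic (context-independent, sharable) state.
class Glyph {
    private final char symbol;
    Glyph(char symbol) { this.symbol = symbol; }
    // Extrinsic state (the position) is supplied by the client per use.
    String draw(int position) { return symbol + "@" + position; }
}

// The factory pools flyweights so each distinct character exists once.
class GlyphFactory {
    private final Map<Character, Glyph> pool = new HashMap<>();
    Glyph get(char symbol) { return pool.computeIfAbsent(symbol, Glyph::new); }
    int poolSize() { return pool.size(); }
}

public class FlyweightDemo {
    public static void main(String[] args) {
        GlyphFactory factory = new GlyphFactory();
        String word = "aabba";
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < word.length(); i++) {
            out.append(factory.get(word.charAt(i)).draw(i)).append(' ');
        }
        System.out.println(out);                // a@0 a@1 b@2 b@3 a@4
        System.out.println(factory.poolSize()); // 2 (only 'a' and 'b' objects exist)
    }
}
```

Note the identity caveat from the list above: `factory.get('a')` returns the same object every time, so `==` tests treat conceptually distinct occurrences as equal.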

12. Interpreter
Given a language and its representation for its grammar, an interpreter uses the

representation to interpret sentences in the language.
Use the Interpreter Pattern when there is a language to interpret, and you can
represent statements in the language as abstract syntax trees. The Interpreter
Pattern works best when:
• The grammar is simple. For complex grammars, the class hierarchy for the
grammar becomes large and unmanageable. Tools such as parser generators
are a better alternative in such cases. They can interpret expressions without
building abstract syntax trees, which can save space and possibly time.
• Efficiency is not a critical concern. The most efficient interpreters are usually
not implemented by interpreting parse trees directly but by first translating
them into another form. For example, regular expressions are often
transformed into state machines. But even then, the translator can be
implemented by the Interpreter pattern, so the pattern is still applicable.
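A minimal Java sketch for a deliberately simple grammar (a tiny arithmetic language, chosen for illustration): each node of the abstract syntax tree interprets itself.

```java
// One class per grammar rule; the tree of these objects is the sentence.
interface Expression { int interpret(); }

// Terminal expression: a literal number.
class Literal implements Expression {
    private final int value;
    Literal(int value) { this.value = value; }
    public int interpret() { return value; }
}

// Nonterminal expression: addition of two sub-expressions.
class Add implements Expression {
    private final Expression left, right;
    Add(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() + right.interpret(); }
}

public class InterpreterDemo {
    public static void main(String[] args) {
        // The abstract syntax tree for "1 + (2 + 3)".
        Expression expr = new Add(new Literal(1), new Add(new Literal(2), new Literal(3)));
        System.out.println(expr.interpret()); // 6
    }
}
```

For a grammar much richer than this, the class-per-rule hierarchy grows quickly, which is exactly the caveat above about preferring parser generators.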



13. Iterator
It provides a way to access the elements of an aggregate object sequentially without
exposing its underlying representation.
An aggregate object such as a list should give you a way to access its elements
without exposing its internal structure. Moreover, you might want to traverse the list
in different ways, depending on what you want to accomplish. But you probably don't
want to bloat the List interface with operations for different traversals, even if you
could anticipate the ones you will need. You might also need to have more than one
traversal pending on the same list.
The Iterator pattern lets you do all this. The key idea in this pattern is to take the
responsibility for access and traversal out of the list object and put it into an iterator
object. The Iterator class defines an interface for accessing the list's elements. An
Iterator object is responsible for keeping track of the current element; that is, it

knows which elements have been traversed already.



Use the Iterator pattern:
• To access an aggregate object's contents without exposing its internal
representation.
• To support multiple traversals of aggregate objects.
• To provide a uniform interface for traversing different aggregate structures
(that is, to support polymorphic iteration).
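A minimal Java sketch (the aggregate class is illustrative) using the standard `java.util.Iterator` interface: traversal state lives in the iterator object, and the aggregate's internal array stays hidden.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// The aggregate: exposes traversal only through an Iterator.
class NameList implements Iterable<String> {
    private final String[] names;
    NameList(String... names) { this.names = names; }

    public Iterator<String> iterator() {
        return new Iterator<String>() {
            private int index = 0; // knows which elements have been traversed
            public boolean hasNext() { return index < names.length; }
            public String next() {
                if (!hasNext()) throw new NoSuchElementException();
                return names[index++];
            }
        };
    }
}

public class IteratorDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        // Implementing Iterable lets clients use the for-each loop;
        // multiple independent traversals can be pending at once.
        for (String n : new NameList("ann", "bob")) sb.append(n).append(',');
        System.out.println(sb); // ann,bob,
    }
}
```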

14. Mediator
Define an object that encapsulates how a set of objects interact. Mediator promotes
loose coupling by keeping objects from referring to each other explicitly, and it lets
you vary their interaction independently.
Object-oriented design encourages the distribution of behavior among objects. Such
distribution can result in an object structure with many connections between objects;
in the worst case, every object ends up knowing about every other.
Though partitioning a system into many objects generally enhances reusability,
proliferating interconnections tend to reduce it again. Lots of interconnections make
it less likely that an object can work without the support of others - the system acts
as though it were monolithic. Moreover, it can be difficult to change the system's
behavior in any significant way, since behavior is distributed among many objects.
As a result, you may be forced to define many subclasses to customize the system's
behavior.

You can avoid these problems by encapsulating collective behavior in a separate
Mediator object. A mediator is responsible for controlling and coordinating the
interactions of a group of objects. The Mediator serves as an intermediary that
keeps objects in the group from referring to each other explicitly. The objects only
know the Mediator, thereby reducing the number of interconnections.



Use the Mediator Design Pattern when:
• A set of objects communicate in well-defined but complex ways. The resulting
interdependencies are unstructured and difficult to understand.
• Reusing an object is difficult because it refers to and communicates with
many other objects.
• A behavior that's distributed between several classes should be customizable
without a lot of subclassing.
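A minimal Java sketch (a chat room is a common illustration, not from the source): participants reference only the mediator, which coordinates all their interaction, so adding a participant changes no other participant.

```java
import java.util.ArrayList;
import java.util.List;

// The mediator: controls and coordinates the group's interaction.
class ChatRoom {
    private final List<Participant> members = new ArrayList<>();
    void join(Participant p) { members.add(p); p.room = this; }
    void broadcast(Participant from, String msg) {
        for (Participant p : members) {
            if (p != from) p.receive(from.name + ": " + msg);
        }
    }
}

// Colleagues know only the mediator, never each other.
class Participant {
    final String name;
    ChatRoom room;
    final List<String> inbox = new ArrayList<>();
    Participant(String name) { this.name = name; }
    void send(String msg)    { room.broadcast(this, msg); }
    void receive(String msg) { inbox.add(msg); }
}

public class MediatorDemo {
    public static void main(String[] args) {
        ChatRoom room = new ChatRoom();
        Participant a = new Participant("ann"), b = new Participant("bob");
        room.join(a);
        room.join(b);
        a.send("hi");
        System.out.println(b.inbox); // [ann: hi]
        System.out.println(a.inbox); // []
    }
}
```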

15. Memento
Without violating encapsulation, capture and externalize an object's internal state so
that the object can be restored to this state later.
Sometimes it's necessary to record the internal state of an object. This is required
when implementing checkpoints and undo mechanisms that let users back out of
tentative operations or recover from errors. You must save state information
somewhere so that you can restore objects to their previous states. But objects
normally encapsulate some or all of their state, making it inaccessible to other
objects and impossible to save externally. Exposing this state would violate
encapsulation, which can compromise the application's reliability and extensibility.
We can solve this problem with the Memento pattern. A memento is an object that
stores a snapshot of the internal state of another object - the memento's originator.
The undo mechanism will request a memento from the originator when it needs to
checkpoint the originator's state. The originator initializes the memento with



information that characterizes its current state. Only the originator can store and
retrieve information from the memento - the memento is "opaque" to other objects.

Use the Memento Design Pattern when:


• A snapshot of (some portion of) an object's state must be saved so that it can
be restored to that state later, AND
• A direct interface to obtaining the state would expose implementation details
and break the object's encapsulation.
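A minimal Java sketch (the editor classes are illustrative): the originator hands out opaque mementos, a caretaker stores them, and only the originator can read the snapshot back, so encapsulation is preserved.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The originator: the only class that can read or write a Memento's state.
class Editor {
    private String text = "";
    void type(String s) { text += s; }
    String getText() { return text; }

    Memento save() { return new Memento(text); }   // snapshot of internal state
    void restore(Memento m) { text = m.state; }

    // Opaque to every class but Editor: private state, private constructor.
    static class Memento {
        private final String state;
        private Memento(String state) { this.state = state; }
    }
}

public class MementoDemo {
    public static void main(String[] args) {
        Editor editor = new Editor();
        Deque<Editor.Memento> history = new ArrayDeque<>(); // the caretaker
        editor.type("Hello");
        history.push(editor.save());    // checkpoint before a tentative edit
        editor.type(", world");
        editor.restore(history.pop());  // undo back to the checkpoint
        System.out.println(editor.getText()); // Hello
    }
}
```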

16. Observer
Define a one-to-many dependency among objects so that when one object changes
state, all its dependents are notified and updated automatically.
An event occurs and a number of entities need to receive a message about it. When the
event will occur, or whether it will occur at all, may be unpredictable. Also,
the number of entities, and which entities they are, is unpredictable. Ideally, the
subscribers to this notification should be configurable at runtime.
A common side-effect of partitioning a system into a collection of cooperating classes
is the need to maintain consistency between related objects. You don't want to
achieve consistency by making the classes tightly coupled, because that reduces
their reusability.
The Observer Design Pattern describes how to establish these relationships. The

key objects in this pattern are subject and observer. A subject may have any
number of dependent observers. All observers are notified whenever the subject
undergoes a change in state. In response, each observer will query the subject to
synchronize its state with the subject's state.
This kind of interaction is also known as publish-subscribe. The subject is the
publisher of notifications. It sends out these notifications without having to know
who its observers are. Any number of observers can subscribe to receive
notifications.



Use the Observer Design Pattern in ANY of the following situations:
• When an abstraction has two aspects, one dependent on the other.
Encapsulating these aspects in separate objects lets you vary and reuse them
independently.
• When a change to one object requires changing others, and you don't know
how many objects need to be changed.
• When an object should be able to notify other objects without making
assumptions about who these objects are. In other words, you don't want
these objects tightly coupled.
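A minimal Java sketch of the subject/observer interaction (the value-tracking domain is illustrative): the subject notifies every registered observer on each state change without knowing who they are.

```java
import java.util.ArrayList;
import java.util.List;

interface Observer { void update(int newValue); }

// The subject (publisher): keeps a list of observers and notifies on change.
class Subject {
    private final List<Observer> observers = new ArrayList<>();
    void subscribe(Observer o) { observers.add(o); }
    void setValue(int value) {
        for (Observer o : observers) o.update(value); // push-style notification
    }
}

// One concrete observer: records every notification it receives.
class LoggingObserver implements Observer {
    final List<Integer> seen = new ArrayList<>();
    public void update(int newValue) { seen.add(newValue); }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        LoggingObserver log = new LoggingObserver();
        subject.subscribe(log); // any number of observers can subscribe at runtime
        subject.setValue(1);
        subject.setValue(2);
        System.out.println(log.seen); // [1, 2]
    }
}
```

This is push-style publish-subscribe; in the pull variant, `update` would carry no data and the observer would query the subject for the state it needs.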

17. Prototype
Specify the kinds of objects to create using a prototypical instance, and create new
objects by copying this prototype.
A Prototype Design Pattern is a creational design pattern used in software
development when the type of objects to create is determined by a prototypical
instance, which is cloned to produce new objects. This pattern is used for example
when the inherent cost of creating a new object in the standard way (e.g., using the
'new' keyword) is prohibitively expensive for a given application.
To implement the pattern, declare an abstract base class that specifies a pure virtual
clone() method. Any class that needs a "polymorphic constructor" capability
derives itself from the abstract base class and implements the clone() operation.
The client, instead of writing code that invokes the "new" operator on a hard-wired
class name, calls the clone() method on the prototype, calls a factory method with
a parameter designating the particular concrete derived class desired, or invokes the
clone() method through some mechanism provided by another design pattern.

Use the Prototype Design Pattern when a system should be independent of how
its products are created, composed, and represented; AND
• When the classes to instantiate are specified at run-time, for example, by
dynamic loading; OR
• To avoid building a class hierarchy of factories that parallels the class
hierarchy of products; OR
• When instances of a class can have one of only a few different combinations
of state. It may be more convenient to install a corresponding number of
prototypes and clone them rather than instantiating the class manually, each
time with the appropriate state.
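A small Java sketch of the registry-of-prototypes idea follows; the `Shape`/`Circle`/`ShapeRegistry` names and the copy-constructor style of cloning are assumptions made for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative prototype registry; names are invented for this sketch.
public class PrototypeDemo {

    // The prototype declares the polymorphic copy operation.
    interface Shape {
        Shape copy();
        String describe();
    }

    static class Circle implements Shape {
        int radius;
        Circle(int radius) { this.radius = radius; }
        public Shape copy() { return new Circle(radius); } // clone by copy-constructing
        public String describe() { return "circle:" + radius; }
    }

    // Clients ask the registry for a clone instead of calling 'new' on a concrete class.
    static class ShapeRegistry {
        private final Map<String, Shape> prototypes = new HashMap<>();
        void register(String key, Shape prototype) { prototypes.put(key, prototype); }
        Shape create(String key) { return prototypes.get(key).copy(); }
    }

    public static void main(String[] args) {
        ShapeRegistry registry = new ShapeRegistry();
        registry.register("unit", new Circle(1));
        Shape s1 = registry.create("unit");
        Shape s2 = registry.create("unit");
        // Two distinct objects, both copied from the same prototype.
        System.out.println(s1.describe() + " " + (s1 != s2));
    }
}
```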

18. Proxy
It provides a placeholder for another object to control access to it.
A proxy, in its most general form, is a class functioning as an interface to another
thing. The other thing could be anything: a network connection, a large object in
memory, a file, or some other resource that is expensive or impossible to duplicate.
Proxy Design Pattern is applicable whenever there is a need for a more versatile
or sophisticated reference to an object than a simple pointer. Here are several
common situations in which the Proxy pattern is applicable:

• A remote proxy provides a local representative for an object in a different
address space.
• A virtual proxy creates expensive objects on demand.
• A protection proxy controls access to the original object. Protection proxies
are useful when objects should have different access rights.
• A smart reference is a replacement for a bare pointer that performs
additional actions when an object is accessed (counting the number of
references, loading a persistent object into memory when it's first referenced,
checking that the real object is locked).
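The virtual-proxy case from the list above can be sketched in Java. The `Image`/`RealImage` names and the load counter are illustrative inventions; the point is that the expensive object is created only on first use.

```java
// A virtual proxy that defers creating an expensive object until first use.
public class ProxyDemo {

    interface Image {
        String render();
    }

    // The "expensive" real subject.
    static class RealImage implements Image {
        static int loads = 0;            // counts expensive constructions
        RealImage() { loads++; }         // imagine loading a large file here
        public String render() { return "pixels"; }
    }

    // The proxy holds the real subject lazily and forwards calls to it.
    static class ImageProxy implements Image {
        private RealImage real;          // created on demand
        public String render() {
            if (real == null) {
                real = new RealImage();
            }
            return real.render();
        }
    }

    public static void main(String[] args) {
        Image image = new ImageProxy();  // nothing expensive happens yet
        image.render();
        image.render();
        // The real subject was constructed exactly once, on first render().
        System.out.println(RealImage.loads);
    }
}
```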

19. Singleton
It ensures a class only has one instance, and provides a global point of access to it.
The singleton pattern is a design pattern that is used to restrict instantiation of a
class to one object. This is useful when exactly one object is needed to coordinate
actions across the system.

Use the Singleton Design Pattern when:

• There must be EXACTLY ONE instance of a class, and it must be accessible to
clients from a well-known access point.
• When the sole instance should be extensible by subclassing, and clients
should be able to use an extended instance without modifying their code.
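A common Java realization is the initialization-on-demand holder idiom, sketched below; the `Registry` name is illustrative. The holder class is not initialized until `getInstance()` is first called, and JVM class-initialization rules make that thread-safe without explicit synchronization.

```java
// Singleton sketch using the holder idiom; class names are illustrative.
public class SingletonDemo {

    static class Registry {
        // Initialized on first access of the holder class
        // (thread-safe by the JVM's class-initialization guarantees).
        private static class Holder {
            static final Registry INSTANCE = new Registry();
        }

        private Registry() { }  // private constructor blocks outside instantiation

        static Registry getInstance() { return Holder.INSTANCE; }
    }

    public static void main(String[] args) {
        // Every caller reaches the same well-known instance.
        System.out.println(Registry.getInstance() == Registry.getInstance());
    }
}
```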

20. State
It allows an object to alter its behavior when its internal state changes. The object
will appear to change its class.
The key idea in this pattern is to introduce an abstract class to represent the possible
states of the object. This class declares an interface common to all the classes that
represent different operational states. The concrete subclasses implement state-
specific behavior. Based on the current state, the appropriate concrete class is
selected and used.

Use the State Design Pattern in either of the following cases:


• An object's behavior depends on its state, and it must change its behavior at
run-time depending on that state.
• Operations have large, multipart conditional statements that depend on the
object's state. This state is usually represented by one or more enumerated
constants. Often, several operations will contain this same conditional
structure. The State pattern puts each branch of the conditional in a separate
class. This lets you treat the object's state as an object in its own right that
can vary independently from other objects.
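A minimal Java sketch of swapping state objects follows. The turnstile domain (locked/unlocked, "coin" and "push" events) is an illustrative choice, not an example from the text: the turnstile's behavior changes because the state object it delegates to is replaced.

```java
// State sketch: behavior changes as the internal state object is swapped.
public class StateDemo {

    interface State {
        State next(String event);       // returns the state to transition to
    }

    static class Locked implements State {
        public State next(String event) {
            return "coin".equals(event) ? new Unlocked() : this;
        }
    }

    static class Unlocked implements State {
        public State next(String event) {
            return "push".equals(event) ? new Locked() : this;
        }
    }

    // The context delegates to its current state instead of branching on it.
    static class Turnstile {
        private State state = new Locked();
        void handle(String event) { state = state.next(event); }
        String current() { return state.getClass().getSimpleName(); }
    }

    public static void main(String[] args) {
        Turnstile t = new Turnstile();
        t.handle("push");               // still locked: push is ignored
        t.handle("coin");               // unlocks
        System.out.println(t.current());
    }
}
```

Each conditional branch that would otherwise live in `Turnstile` is now a separate `State` class, as the second bullet above describes.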
21. Strategy
Define a family of algorithms, encapsulate each one, and make them
interchangeable. Strategy lets the algorithm vary independently from clients that use
it.
A single behavior with varying implementation exists, and we want to decouple
consumers of this behavior from any particular implementation. We may also want to
decouple them from the fact that the implementation is varying at all.

Use the Strategy Design Pattern when:
• Many related classes differ only in their behavior. Strategies provide a way to
configure a class with one of many behaviors.
• You need different variants of an algorithm. For example, you might define
algorithms reflecting different space/time trade-offs. Strategies can be used
when these variants are implemented as a class hierarchy of algorithms.
• An algorithm uses data that clients shouldn't know about. Use the Strategy
pattern to avoid exposing complex, algorithm-specific data structures.
• A class defines many behaviors, and these appear as multiple conditional
statements in its operations. Instead of many conditionals, move related
conditional branches into their own Strategy class.
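The "configure a class with one of many behaviors" idea can be sketched in Java as follows; the pricing domain and the 10% discount figure are invented for illustration.

```java
// Strategy sketch: the algorithm varies behind one interface.
public class StrategyDemo {

    interface PricingStrategy {
        int price(int base);
    }

    static class RegularPricing implements PricingStrategy {
        public int price(int base) { return base; }
    }

    static class MemberPricing implements PricingStrategy {
        public int price(int base) { return base * 90 / 100; }  // 10% off (illustrative)
    }

    // The context is configured with one of many interchangeable behaviors.
    static class Checkout {
        private final PricingStrategy strategy;
        Checkout(PricingStrategy strategy) { this.strategy = strategy; }
        int total(int base) { return strategy.price(base); }
    }

    public static void main(String[] args) {
        System.out.println(new Checkout(new RegularPricing()).total(200)); // 200
        System.out.println(new Checkout(new MemberPricing()).total(200));  // 180
    }
}
```

`Checkout` never branches on which pricing applies; the conditional that would otherwise live in its operations has been moved into the strategy classes.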

22. Template Method


Define the skeleton of an algorithm in an operation, deferring some steps to
subclasses. Template Method lets subclasses redefine certain steps of an algorithm
without changing the algorithm's structure.
A Template Method defines the program skeleton of an algorithm. The algorithm
itself is made abstract, and the subclasses override the abstract methods to provide
concrete behavior.
First a class is created that provides the basic steps of an algorithm design. These
steps are implemented using abstract methods. Later on, subclasses override the
abstract methods to implement real actions. Thus the general algorithm is saved in
one place, but the concrete steps may be changed by the subclasses.
The Template Method thus manages the larger picture of task semantics, and more
refined implementation details of selection and sequence of methods. This larger
picture calls abstract and non-abstract methods for the task at hand. The non-
abstract methods are completely controlled by the Template Method. The expressive
power and degrees of freedom occur in abstract methods that may be implemented
in subclasses. Some or all of the abstract methods can be specialized in a subclass;
the abstract method is the smallest unit of granularity, allowing the writer of the
subclass to provide particular behavior with minimal modifications to the larger
semantics. In contrast, the Template Method need not be changed and is not an
abstract operation and thus may guarantee required steps before and after the
abstract operations. Thus the Template Method is invoked and as a consequence the
subordinate non-abstract methods and abstract methods are called in the correct
sequence.

The Template Method pattern should be used:
• To implement the invariant parts of an algorithm once and leave it up to
subclasses to implement the behavior that can vary.
• When common behavior among subclasses should be factored and localized in
a common class to avoid code duplication. This is a good example of
"refactoring to generalize". You first identify the differences in the existing
code and then separate the differences into new operations. Finally, you
replace the differing code with a template method that calls one of these new
operations.
• To control subclasses extensions. You can define a template method that calls
"hook" operations at specific points, thereby permitting extensions only at
those points.
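The invariant-skeleton idea can be sketched in Java; the export/CSV example and all names in it are invented for illustration. The template method is `final`, so subclasses can vary only the designated step.

```java
// Template method sketch: the abstract class fixes the step sequence,
// subclasses fill in the abstract step.
public abstract class TemplateMethodDemo {

    // The template method: invariant structure, not meant to be overridden.
    public final String export() {
        return header() + body() + footer();
    }

    String header() { return "["; }   // non-abstract steps, controlled here
    String footer() { return "]"; }

    protected abstract String body(); // the varying step ("hook" for subclasses)

    static class CsvExport extends TemplateMethodDemo {
        protected String body() { return "a,b,c"; }
    }

    public static void main(String[] args) {
        System.out.println(new CsvExport().export()); // [a,b,c]
    }
}
```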

23. Visitor
It represents an operation to be performed on the elements of an object structure.
Visitor lets you define a new operation without changing the classes of the elements
on which it operates.
The Visitor Design Pattern is a way of separating an algorithm from an object
structure. A practical result of this separation is the ability to add new operations to
existing object structures without modifying those structures.
The idea is to use a structure of element classes, each of which has an accept()
method that takes a visitor object as an argument. Visitor is an interface that has a
visitXXX() method for each element class. The accept() method of an element
class calls back the visitXXX() method for its class. Separate concrete visitor
classes can then be written that perform some particular operations:

• A client that uses the Visitor pattern must create a ConcreteVisitor object and
then traverse the object structure, visiting each element with the visitor.
• When an element is visited, it calls the Visitor operation that corresponds to
its class. The element supplies itself as an argument to this operation to let
the visitor access its state, if necessary.
Use the Visitor Design Pattern when:
• An object structure contains many classes of objects with differing interfaces,
and you want to perform operations on these objects that depend on their
concrete classes.
• Many distinct and unrelated operations need to be performed on objects in an
object structure, and you want to avoid "polluting" their classes with these
operations. Visitor lets you keep related operations together by defining them
in one class. When the object structure is shared by many applications, use
Visitor to put operations in just those applications that need them.
• The classes defining the object structure rarely change, but you often want to
define new operations over the structure. Changing the object structure
classes requires redefining the interface to all visitors, which is potentially
costly. If the object structure classes change often, then it's probably better
to define the operations in those classes.
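The accept()/visitXXX() double dispatch described above can be sketched in Java. The shape classes, the area operation, and the rounded value of pi are all illustrative assumptions.

```java
// Double-dispatch visitor sketch over a two-element structure.
public class VisitorDemo {

    interface Visitor {
        void visitCircle(Circle c);
        void visitSquare(Square s);
    }

    interface Shape {
        void accept(Visitor v);  // each element calls back the visit method for its class
    }

    static class Circle implements Shape {
        int radius = 2;
        public void accept(Visitor v) { v.visitCircle(this); }
    }

    static class Square implements Shape {
        int side = 3;
        public void accept(Visitor v) { v.visitSquare(this); }
    }

    // A new operation (rough total area) added without touching the element classes.
    static class AreaVisitor implements Visitor {
        int total = 0;
        public void visitCircle(Circle c) { total += 3 * c.radius * c.radius; } // pi rounded down
        public void visitSquare(Square s) { total += s.side * s.side; }
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        AreaVisitor v = new AreaVisitor();
        for (Shape s : shapes) {
            s.accept(v);        // element supplies itself to the matching visit method
        }
        System.out.println(v.total);
    }
}
```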

7.3 From a list, select the benefits and drawbacks of a pattern drawn from this book:
Gamma, Erich; Richard Helm, Ralph Johnson, and John Vlissides (1995). Design
Patterns: Elements of Reusable Object-Oriented Software.

Ref. • [DESIGN_PATTERNS]

OO Pattern Definitions:

1. Abstract Factory: To provide an interface for creating families of related or dependent objects without specifying their concrete classes.
2. Adapter: To convert the interface of a class into another interface clients expect.
3. Bridge: To decouple an abstraction from its implementation so that the two can vary independently.
4. Builder: To separate the construction of a complex object from its representation so that the same construction process can create different representations.
5. Chain of Responsibility: To avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.
6. Command: To encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.
7. Composite: To compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.
8. Decorator: To attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality.
9. Façade: To provide a unified interface to a set of interfaces in a subsystem.
10. Factory Method: To define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.
11. Flyweight: To use sharing to support large numbers of fine-grained objects efficiently; a flyweight is a shared object that can be used in multiple contexts simultaneously.
12. Interpreter: Given a language, to define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.
13. Iterator: To provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation.
14. Mediator: To define an object that encapsulates how a set of objects interact.
15. Memento: Without violating encapsulation, to capture and externalize an object's internal state so that the object can be restored to this state later.
16. Observer: To define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.
17. Prototype: To specify the kinds of objects to create using a prototypical instance, and create new objects by copying this prototype.
18. Proxy: To provide a surrogate or placeholder for another object to control access to it.
19. Singleton: To ensure a class only has one instance, and provide a global point of access to it.
20. State: To allow an object to alter its behavior when its internal state changes.
21. Strategy: To define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it.
22. Template Method: To define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure.
23. Visitor: To represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.

1. Abstract Factory
• It isolates concrete classes. The Abstract Factory pattern helps you control
the classes of objects that an application creates. Because a factory
encapsulates the responsibility and the process of creating product objects, it
isolates clients from implementation classes. Clients manipulate instances
through their abstract interfaces. Product class names are isolated in the
implementation of the concrete factory; they do not appear in client code.
• It makes exchanging product families easy. The class of a concrete factory
appears only once in an application - that is, where it's instantiated. This
makes it easy to change the concrete factory an application uses. It can use
different product configurations simply by changing the concrete factory.
Because an abstract factory creates a complete family of products, the whole
product family changes at once.
• It promotes consistency among products. When product objects in a family
are designed to work together, it's important that an application use objects
from only one family at a time. AbstractFactory makes this easy to enforce.
• When we use the Abstract Factory, we gain protection from illegitimate
combinations of service objects. This means we can design the rest of the
system for maximum flexibility, since we know that the Abstract Factory will
eliminate any concerns about the flexibility yielding bugs. Also, the consuming
entity (Client) or entities will be incrementally simpler, since they can deal
with the components at the abstract level.
The drawbacks are:
• Supporting new kinds of products is difficult. Extending abstract factories to
produce new kinds of Products isn't easy. That's because the AbstractFactory
interface fixes the set of products that can be created. Supporting new kinds
of products requires extending the factory interface, which involves changing
the AbstractFactory class and all of its subclasses.
• As with factories in general, the Abstract Factory's responsibility is limited to
the creation of instances, and thus the testable issue is whether or not the
right set of instances is created under a given circumstance. Often, this is
covered by the test of the entities that use the factory, but if it is not, the
test can use type-checking to determine that the proper concrete types are
created under the right set of circumstances.
• The Abstract Factory holds up well if the maintenance aspects are limited to
new sets, or a given set changing an implementation. On the other hand, if
an entirely new abstract concept enters the domain, then the maintenance
issues will be more profound as the Abstract Factory interface, all the
existing factory implementations, and the Client entities will all have to be
changed.
This is not a "fault" of the pattern, but rather points out the degree to which
object-oriented systems are vulnerable to missing/new abstractions.
2. Adapter
• How much adapting does Adapter do? Adapters vary in the amount of work
they do to adapt Adaptee to the Target interface. There is a spectrum of
possible work, from simple interface conversion - for example, changing the
names of operations - to supporting an entirely different set of operations.
The amount of work Adapter does depends on how similar the Target
interface is to Adaptee's.
• Pluggable adapters. A class is more reusable when you minimize the
assumptions other classes must make to use it. By building interface
adaptation into a class, you eliminate the assumption that other classes see
the same interface. Put another way, interface adaptation lets us incorporate
our class into existing systems that might expect different interfaces to the
class.
• Using two-way adapters to provide transparency. A potential problem with
adapters is that they aren't transparent to all clients. An adapted object no
longer conforms to the Adaptee interface, so it can't be used as is wherever
an Adaptee object can. Two-way adapters can provide such transparency.
Specifically, they're useful when two different clients need to view an object
differently.
• To test the Adapter, you can use a Mock or Fake object in place of the
foreign object (which would normally be adapted). The Mock or Fake can
return predictable behavior for the adapter to convert.
• The Adapter is a very low-cost solution, and is therefore quite commonplace.
The cost is the creation of an additional class, but the benefits are:
○ Encapsulated reuse of existing behavior.
○ Polymorphism (through an up-cast) with a foreign class.
○ Promotes the Open-Closed Principle.

○ If the construction of the foreign class was not encapsulated (which
is common), the Adapter can encapsulate it in its constructor.
However, an object factory is preferred.
Class and Object Adapters have different trade-offs.
A Class Adapter:
• adapts Adaptee to Target by committing to a concrete Adapter class. As a
consequence, a Class Adapter won't work when we want to adapt a class and
all its subclasses.
• lets Adapter override some of Adaptee's behavior, since Adapter is a subclass
of Adaptee.
• introduces only one object, and no additional pointer indirection is needed to
get to the adaptee.

An Object Adapter:
• lets a single Adapter work with many Adaptees - that is, the Adaptee itself
and all of its subclasses (if any). The Adapter can also add functionality to all
Adaptees at once.
• makes it harder to override Adaptee behavior. It will require subclassing
Adaptee and making Adapter refer to the subclass rather than the Adaptee
itself.
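An object adapter of the kind just described can be sketched in Java. The `LegacyLogger` adaptee, its `writeLine` method, and the severity code are invented for illustration; the adapter simply wraps the adaptee and translates calls.

```java
// Object adapter sketch: the adapter holds the adaptee and converts the interface.
public class AdapterDemo {

    // Target interface that clients expect.
    interface Logger {
        void log(String message);
    }

    // Foreign (adaptee) class with an incompatible interface.
    static class LegacyLogger {
        final StringBuilder lines = new StringBuilder();
        void writeLine(int severity, String text) {
            lines.append(severity).append(':').append(text).append('\n');
        }
    }

    // The object adapter delegates to the wrapped adaptee.
    static class LoggerAdapter implements Logger {
        private final LegacyLogger adaptee;
        LoggerAdapter(LegacyLogger adaptee) { this.adaptee = adaptee; }
        public void log(String message) { adaptee.writeLine(1, message); }
    }

    public static void main(String[] args) {
        LegacyLogger legacy = new LegacyLogger();
        Logger logger = new LoggerAdapter(legacy);  // client sees only Logger
        logger.log("started");
        System.out.print(legacy.lines);
    }
}
```

Because the adapter holds an object reference rather than subclassing, it would also work with any subclass of `LegacyLogger`, which is the object-adapter trade-off noted above.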

3. Bridge
• Decoupling interface and implementation. An implementation is not bound
permanently to an interface. The implementation of an abstraction can be
configured at run-time. It's even possible for an object to change its
implementation at run-time.
Decoupling Abstraction and Implementor also eliminates compile-time
dependencies on the implementation. Changing an implementation class
doesn't require recompiling the Abstraction class and its clients. This
property is essential when you must ensure binary compatibility between
different versions of a class library.
• Improved extensibility. You can extend the Abstraction and Implementor
hierarchies independently.
• Hiding implementation details from clients. You can shield clients from
implementation details, like the sharing of implementor objects and the
accompanying reference count mechanism (if any).
• The Behavior Classes will probably be testable on their own (unless they are
Adapters and/or Façades, in which case see the testing forces accompanying
those patterns). However, the entity classes are dependent upon behaviors,
and so a Mock or Fake object can be used to control the returns from these
dependencies, and also to check on the action taken upon the behavior by
the entity, if this is deemed an appropriate thing to test.
• The Bridge creates flexibility because the entities and behaviors can each
vary without necessarily affecting the other.
• Both the Entities and Behaviors are open-closed, if we build the bridge in an
object factory, which is recommended.
• If the Entities are highly orthogonal to one another, the Behavior interface
will tend to be broad.
• The interface of the Behavior can require changes over time, which can
cause maintenance problems. Specifically, if new Entities that may be added
to the system in the future are unlikely to be satisfied with the current
Behavior interface, then this interface may bloat, requiring potentially
extensive maintenance.
• The delegation from the Entities to the Behaviors can degrade performance.

4. Builder
Benefits:
• It lets you vary a product's internal representation. The Builder object
provides the director with an abstract interface for constructing the product.
The interface lets the builder hide the representation and internal structure
of the product. It also hides how the product gets assembled. Because the
product is constructed through an abstract interface, all you have to do to
change the product's internal representation is define a new kind of builder.

• It isolates code for construction and representation. The Builder pattern
improves modularity by encapsulating the way a complex object is
constructed and represented. Clients needn't know anything about the
classes that define the product's internal structure; such classes don't appear
in Builder's interface.
• It gives you finer control over the construction process. Unlike creational
patterns that construct products in one shot, the Builder pattern constructs
the product step by step under the director's control. Only when the product
is finished does the director retrieve it from the builder. Hence the Builder
interface reflects the process of constructing the product more than other
creational patterns. This gives you finer control over the construction process
and consequently the internal structure of the resulting product.

5. Chain of Responsibility
• Reduced coupling. The pattern frees an object from knowing which other
object handles a request. An object only has to know that a request will be
handled "appropriately." Both the receiver and the sender have NO explicit
knowledge of each other, and an object in the chain doesn't have to know
about the chain's structure.
As a result, Chain of Responsibility can simplify object interconnections.
Instead of objects maintaining references to all candidate receivers, they
keep a single reference to their successor.
• Added flexibility in assigning responsibilities to objects. Chain of
Responsibility gives you added flexibility in distributing responsibilities
among objects. You can add or change responsibilities for handling a request
by adding to or otherwise changing the chain at run-time. You can combine
this with subclassing to specialize handlers statically.
The drawbacks are:
• Receipt isn't guaranteed. Since a request has no explicit receiver, there's no
guarantee it'll be handled—the request can fall off the end of the chain
without ever being handled. A request can also go unhandled when the chain
is not configured properly.
• The chain may get lengthy, and may introduce performance problems.
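Both points above, the single successor reference and the request falling off the end, show up in a small Java sketch. The approval domain and the amount limits are invented for illustration.

```java
// Chain of Responsibility sketch: each handler keeps one reference to its successor.
public class ChainDemo {

    static abstract class Approver {
        private Approver successor;
        Approver setSuccessor(Approver next) { this.successor = next; return next; }

        // Pass the request along until someone handles it; it may fall off the end.
        String approve(int amount) {
            if (canApprove(amount)) {
                return getClass().getSimpleName();
            }
            return successor == null ? "unhandled" : successor.approve(amount);
        }

        abstract boolean canApprove(int amount);
    }

    static class Manager extends Approver {
        boolean canApprove(int amount) { return amount <= 1000; }
    }

    static class Director extends Approver {
        boolean canApprove(int amount) { return amount <= 10000; }
    }

    public static void main(String[] args) {
        Approver chain = new Manager();
        chain.setSuccessor(new Director());
        System.out.println(chain.approve(500));    // handled by Manager
        System.out.println(chain.approve(5000));   // handled by Director
        System.out.println(chain.approve(50000));  // receipt isn't guaranteed
    }
}
```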

6. Command
The Command pattern has the following consequences:
• Command decouples the object that invokes the operation from the one that
knows how to perform it.
• Commands are first-class objects. They can be manipulated and extended
like any other object.
• You can assemble commands into a composite command. In general,
composite commands are an instance of the Composite pattern.
• It's easy to add new Commands, because you don't have to change existing
classes.
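The decoupling and the support for undoable operations can be sketched in Java. The light-switch receiver and all names are invented for illustration; the invoker manipulates commands as first-class objects and keeps a history for undo.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Command sketch with undo support.
public class CommandDemo {

    interface Command {
        void execute();
        void undo();
    }

    // The receiver that actually knows how to do the work.
    static class Light {
        boolean on = false;
    }

    static class SwitchOn implements Command {
        private final Light light;
        SwitchOn(Light light) { this.light = light; }
        public void execute() { light.on = true; }
        public void undo()    { light.on = false; }
    }

    // The invoker runs commands without knowing how they are performed.
    static class Invoker {
        private final Deque<Command> history = new ArrayDeque<>();
        void run(Command c) { c.execute(); history.push(c); }
        void undoLast()     { history.pop().undo(); }
    }

    public static void main(String[] args) {
        Light light = new Light();
        Invoker invoker = new Invoker();
        invoker.run(new SwitchOn(light));
        System.out.println(light.on);   // true
        invoker.undoLast();
        System.out.println(light.on);   // false
    }
}
```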

7. Composite
• defines class hierarchies consisting of primitive objects and composite
objects. Primitive objects can be composed into more complex objects, which
in turn can be composed, and so on recursively. Wherever client code
expects a primitive object, it can also take a composite object.
• makes the client simple. Clients can treat composite structures and individual
objects uniformly. Clients normally don't know (and shouldn't care) whether
they're dealing with a leaf or a composite component. This simplifies client
code, because it avoids having to write tag-and-case-statement-style
functions over the classes that define the composition.
• makes it easier to add new kinds of components. Newly defined Composite
or Leaf subclasses work automatically with existing structures and client
code. Clients don't have to be changed for new Component classes.
• can make your design overly general. The disadvantage of making it easy to
add new components is that it makes it harder to restrict the components of
a composite. Sometimes you want a composite to have only certain
components. With Composite, you can't rely on the type system to enforce
those constraints for you. You'll have to use run-time checks instead.

8. Decorator
The Decorator Pattern has at least two key benefits and two liabilities:
• More flexibility than static inheritance. The Decorator pattern provides a
more flexible way to add responsibilities to objects than can be had with
static (multiple) inheritance. With decorators, responsibilities can be added
and removed at run-time simply by attaching and detaching them. In
contrast, inheritance requires creating a new class for each additional
responsibility. This gives rise to many classes and increases the complexity
of a system. Furthermore, providing different Decorator classes for a specific
Target Abstraction class lets you mix and match responsibilities.
Decorators also make it easy to add a property twice.
• Avoids feature-laden classes high up in the hierarchy. Decorator offers a
pay-as-you-go approach to adding responsibilities. Instead of trying to
support all foreseeable features in a complex, customizable class, you can
define a simple class and add functionality incrementally with Decorator
objects. Functionality can be composed from simple pieces. As a result, an
application needn't pay for features it doesn't use. It's also easy to define
new kinds of Decorators independently from the classes of objects they
extend, even for unforeseen extensions. Extending a complex class tends to
expose details unrelated to the responsibilities you're adding.
The drawbacks are:
• A decorator and its component aren't identical. A decorator acts as a
transparent enclosure. But from an object identity point of view, a decorated
component is not identical to the component itself. Hence you shouldn't rely
on object identity when you use decorators.
• Lots of little objects. A design that uses Decorator often results in systems
composed of lots of little objects that all look alike. The objects differ only in
the way they are interconnected, not in their class or in the value of their
variables. Although these systems are easy to customize by those who
understand them, they can be hard to learn and debug.
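The run-time attaching and detaching of responsibilities, including adding a property twice, can be sketched in Java. The text-markup domain and class names are invented for illustration.

```java
// Decorator sketch: responsibilities are stacked at run-time by wrapping.
public class DecoratorDemo {

    interface Text {
        String render();
    }

    static class Plain implements Text {
        private final String value;
        Plain(String value) { this.value = value; }
        public String render() { return value; }
    }

    // Base decorator: conforms to Text and wraps another Text.
    static abstract class TextDecorator implements Text {
        protected final Text inner;
        TextDecorator(Text inner) { this.inner = inner; }
    }

    static class Bold extends TextDecorator {
        Bold(Text inner) { super(inner); }
        public String render() { return "<b>" + inner.render() + "</b>"; }
    }

    static class Italic extends TextDecorator {
        Italic(Text inner) { super(inner); }
        public String render() { return "<i>" + inner.render() + "</i>"; }
    }

    public static void main(String[] args) {
        // Responsibilities are added by wrapping, not by subclassing Plain.
        Text t = new Italic(new Bold(new Plain("hi")));
        System.out.println(t.render()); // <i><b>hi</b></i>
    }
}
```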

9. Façade
The Façade pattern offers the following benefits:
• It shields clients from subsystem components, thereby reducing the number
of objects that clients deal with and making the subsystem easier to use.

• It promotes weak coupling between the subsystem and its clients. Often the
components in a subsystem are strongly coupled. Weak coupling lets you
vary the components of the subsystem without affecting its clients. Facades
help layer a system and the dependencies between objects. They can
eliminate complex or circular dependencies. This can be an important
consequence when the client and the subsystem are implemented
independently.
Reducing compilation dependencies is vital in large software systems. You
want to save time by minimizing recompilation when subsystem classes
change. Reducing compilation dependencies with facades can limit the
recompilation needed for a small change in an important subsystem. A
facade can also simplify porting systems to other platforms, because it's less
likely that building one subsystem requires building all others.
• It doesn't prevent applications from using subsystem classes if they need to.
Thus you can choose between ease of use and generality.

10. Factory Method


Factory methods eliminate the need to bind application-specific classes into your
code. The code only deals with the Product interface; therefore it can work with any
user-defined ConcreteProduct classes.
A potential disadvantage of factory methods is that clients might have to subclass
the Creator class just to create a particular ConcreteProduct object. Subclassing is
fine when the client has to subclass the Creator class anyway, but otherwise the
client now must deal with another point of evolution.
Here are two additional consequences of the Factory Method pattern:
• Provides hooks for subclasses. Creating objects inside a class with a factory
method is always more flexible than creating an object directly. Factory
Method gives subclasses a hook for providing an extended version of an
object.
• Connects parallel class hierarchies. In the examples we've considered so far,
the factory method is only called by Creators. But this doesn't have to be the
case; clients can find factory methods useful, especially in the case of
parallel class hierarchies.
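The hook for subclasses can be sketched in Java; the report domain and all names are invented for illustration. The creator's code works only against the `Product` interface, and the subclass decides which concrete class to instantiate.

```java
// Factory method sketch: the creator defers instantiation to subclasses.
public class FactoryMethodDemo {

    interface Product {
        String name();
    }

    static class PdfReport implements Product {
        public String name() { return "pdf"; }
    }

    static abstract class Creator {
        // Creator logic deals only with the Product interface.
        String describe() { return "report:" + createProduct().name(); }

        protected abstract Product createProduct();  // the factory method hook
    }

    static class PdfCreator extends Creator {
        protected Product createProduct() { return new PdfReport(); }
    }

    public static void main(String[] args) {
        System.out.println(new PdfCreator().describe()); // report:pdf
    }
}
```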

11. Flyweight
The drawback of flyweights is that they may introduce run-time costs associated with
transferring, finding, and/or computing extrinsic state, especially if it was formerly
stored as intrinsic state. However, such costs are offset by space savings, which
increase as more flyweights are shared.
Storage savings are a function of several factors:
• the reduction in the total number of instances that comes from sharing.
• the amount of intrinsic state per object.
• whether extrinsic state is computed or stored.
The more flyweights are shared, the greater the storage savings. The savings
increase with the amount of shared state. The greatest savings occur when the
objects use substantial quantities of both intrinsic and extrinsic state, and the
extrinsic state can be computed rather than stored. Then you save on storage in two
ways: Sharing reduces the cost of intrinsic state, and you trade extrinsic state for
computation time.

The Flyweight pattern is often combined with the Composite pattern to represent a
hierarchical structure as a graph with shared leaf nodes. A consequence of sharing is
that flyweight leaf nodes cannot store a pointer to their parent. Rather, the parent
pointer is passed to the flyweight as part of its extrinsic state. This has a major
impact on how the objects in the hierarchy communicate with each other.
12. Interpreter
The Interpreter pattern has the following benefits and liabilities:
• It's easy to change and extend the grammar. Because the pattern uses
classes to represent grammar rules, you can use inheritance to change or
extend the grammar. Existing expressions can be modified incrementally,
and new expressions can be defined as variations on old ones.
• Implementing the grammar is easy, too. Classes defining nodes in the
abstract syntax tree have similar implementations. These classes are easy to
write, and often their generation can be automated with a compiler or parser
generator.
• Adding new ways to interpret expressions. The Interpreter pattern makes it
easier to evaluate an expression in a new way. For example, you can support
pretty printing or type-checking an expression by defining a new operation
on the expression classes. If you keep creating new ways of interpreting an
expression, then consider using the Visitor pattern to avoid changing the
grammar classes.
The drawback is:
• Complex grammars are hard to maintain. The Interpreter pattern defines at
least one class for every rule in the grammar (grammar rules defined using
BNF may require multiple classes). Hence grammars containing many rules
can be hard to manage and maintain. Other design patterns can be applied
to mitigate the problem. But when the grammar is very complex, other
techniques such as parser or compiler generators are more appropriate.
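The one-class-per-rule structure can be sketched for a tiny grammar. This is an invented example, not from the exam material:

```java
// Interpreter: one class per grammar rule for the grammar
//   Expr ::= Num | Expr '+' Expr
interface Expr { int interpret(); }

class Num implements Expr {
    private final int value;
    Num(int value) { this.value = value; }
    public int interpret() { return value; }
}

class Add implements Expr {
    private final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() + right.interpret(); }
}
```

Extending the grammar means adding a class (say, Mul); adding a new way to interpret expressions (pretty printing, type checking) means adding an operation, which is where Visitor starts to pay off.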

13. Iterator
The Iterator pattern has three important consequences:
• It supports variations in the traversal of an aggregate. Complex aggregates
may be traversed in many ways. For example, code generation and semantic
checking involve traversing parse trees. Code generation may traverse the
parse tree inorder or preorder. Iterators make it easy to change the traversal
algorithm: Just replace the iterator instance with a different one. You can
also define Iterator subclasses to support new traversals.

• Iterators simplify the Aggregate interface. Iterator's traversal interface
obviates the need for a similar interface in Aggregate, thereby simplifying
the aggregate's interface.
• More than one traversal can be pending on an aggregate. An iterator keeps
track of its own traversal state. Therefore you can have more than one
traversal in progress at once.
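These consequences can be sketched with an aggregate that offers two interchangeable traversals (NameList is an invented name; the Iterator interface is the standard java.util one):

```java
import java.util.Iterator;
import java.util.List;

// Iterator: swapping the iterator changes the traversal algorithm without
// touching the aggregate; each iterator keeps its own traversal state.
class NameList {
    private final List<String> names;
    NameList(List<String> names) { this.names = names; }

    Iterator<String> forward() { return names.iterator(); }

    Iterator<String> reverse() {
        return new Iterator<String>() {
            private int i = names.size() - 1;   // per-iterator state
            public boolean hasNext() { return i >= 0; }
            public String next() { return names.get(i--); }
        };
    }
}
```

Because each iterator carries its own cursor, a forward and a reverse traversal can be in progress over the same aggregate at once.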

14. Mediator
The Mediator pattern has the following benefits and drawbacks:
• It limits subclassing. A mediator localizes behavior that otherwise would be
distributed among several objects. Changing this behavior requires
subclassing Mediator only; Colleague classes can be reused as is.

• It decouples colleagues. A mediator promotes loose coupling between
colleagues. You can vary and reuse Colleague and Mediator classes
independently.
• It simplifies object protocols. A mediator replaces many-to-many interactions
with one-to-many interactions between the mediator and its colleagues.
One-to-many relationships are easier to understand, maintain, and extend.
• It abstracts how objects cooperate. Making mediation an independent
concept and encapsulating it in an object lets you focus on how objects
interact apart from their individual behavior. That can help clarify how
objects interact in a system.
• It centralizes control. The Mediator pattern trades complexity of interaction
for complexity in the mediator. Because a mediator encapsulates protocols, it
can become more complex than any individual colleague. This can make the
mediator itself a monolith that's hard to maintain.
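The many-to-many to one-to-many replacement can be sketched as follows (ChatRoom/User are invented names for a classic mediator illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Mediator: colleagues talk only to the mediator, which relays messages;
// colleagues never reference each other directly.
class ChatRoom {
    private final List<User> users = new ArrayList<>();
    void register(User u) { users.add(u); }
    void relay(User from, String msg) {
        for (User u : users) if (u != from) u.receive(msg);
    }
}

class User {
    private final ChatRoom room;
    String lastMessage = "";
    User(ChatRoom room) { this.room = room; room.register(this); }
    void send(String msg) { room.relay(this, msg); }     // goes via mediator
    void receive(String msg) { lastMessage = msg; }
}
```

All interaction protocol lives in ChatRoom, which is exactly why the mediator can grow into the monolith mentioned above.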

15. Memento
The Memento Design Pattern has several consequences:
• Preserving encapsulation boundaries. Memento avoids exposing information
that only an originator should manage but that must be stored nevertheless
outside the originator. The pattern shields other objects from potentially
complex Originator internals, thereby preserving encapsulation boundaries.
• It simplifies Originator. In other encapsulation-preserving designs, Originator
keeps the versions of internal state that clients have requested. That puts all
the storage management burden on Originator. Having clients manage the
state they ask for simplifies Originator and keeps clients from having to
notify originators when they're done.
• Using mementos might be expensive. Mementos might incur considerable
overhead if Originator must copy large amounts of information to store in the
memento or if clients create and return mementos to the originator often
enough. Unless encapsulating and restoring Originator state is cheap, the
pattern might not be appropriate.
• Defining narrow and wide interfaces. It may be difficult in some languages to
ensure that only the originator can access the memento's state.
• Hidden costs in caring for mementos. A caretaker is responsible for deleting
the mementos it cares for. However, the caretaker has no idea how much
state is in the memento. Hence an otherwise lightweight caretaker might
incur large storage costs when it stores mementos.
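The narrow/wide interface distinction is easy to see in Java, where a nested class can give the originator privileged access to the memento's state (Editor is an invented name; the test code plays the caretaker role):

```java
// Memento: the originator packages its private state into an opaque token
// that a caretaker stores and later hands back for restoration.
class Editor {
    private String text = "";
    void type(String s) { text += s; }
    String getText() { return text; }

    Memento save() { return new Memento(text); }
    void restore(Memento m) { text = m.state; }

    // Narrow interface: only Editor can construct a Memento or read its
    // state; to the caretaker it is an opaque token.
    static class Memento {
        private final String state;
        private Memento(String state) { this.state = state; }
    }
}
```

The caretaker holds the memento without knowing (or paying attention to) how much state it carries, which is the hidden-cost liability noted above.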

16. Observer


The Observer Design Pattern lets you vary subjects and observers independently.
You can reuse subjects without reusing their observers, and vice versa. It lets you
add observers without modifying the subject or other observers.
Further benefits and liabilities of the Observer pattern include the following:
• Abstract coupling between Subject and Observer. All a subject knows is that
it has a list of observers, each conforming to the simple interface of the
abstract Observer class. The subject doesn't know the concrete class of any
observer. Thus the coupling between subjects and observers is abstract and
minimal.
Because Subject and Observer aren't tightly coupled, they can belong to
different layers of abstraction in a system. A lower-level subject can

communicate and inform a higher-level observer, thereby keeping the
system's layering intact. If Subject and Observer are lumped together, then
the resulting object must either span two layers (and violate the layering), or
it must be forced to live in one layer or the other (which might compromise
the layering abstraction).
• Support for broadcast communication. Unlike an ordinary request, the
notification that a subject sends needn't specify its receiver. The notification
is broadcast automatically to all interested objects that subscribed to it. The
subject doesn't care how many interested objects exist; its only
responsibility is to notify its observers. This gives you the freedom to add
and remove observers at any time. It's up to the observer to handle or
ignore a notification.
• Unexpected updates. Because observers have no knowledge of each other's
presence, they can be blind to the ultimate cost of changing the subject. A
seemingly innocuous operation on the subject may cause a cascade of
updates to observers and their dependent objects. Moreover, dependency
criteria that aren't well-defined or maintained usually lead to spurious
updates, which can be hard to track down.
This problem is aggravated by the fact that the simple update protocol
provides no details on what changed in the subject. Without additional
protocol to help observers discover what changed, they may be forced to
work hard to deduce the changes.
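The abstract coupling and broadcast behavior can be sketched as follows (Subject/Display are invented names; the subject knows only the abstract Observer interface):

```java
import java.util.ArrayList;
import java.util.List;

// Observer: the subject broadcasts to whoever subscribed; it never knows
// the concrete class of any observer.
interface Observer { void update(int value); }

class Subject {
    private final List<Observer> observers = new ArrayList<>();
    private int value;
    void attach(Observer o) { observers.add(o); }
    void setValue(int v) {
        value = v;
        for (Observer o : observers) o.update(value);   // broadcast
    }
}

class Display implements Observer {
    int shown = -1;
    public void update(int value) { shown = value; }
}
```

Note that this simple update protocol carries no detail about what changed, which is the source of the "unexpected updates" liability described above.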

17. Prototype
Prototype has many of the same consequences that Abstract Factory and Builder
have: It hides the concrete product classes from the client, thereby reducing the
number of names clients know about. Moreover, these patterns let a client work with
application-specific classes without modification.
Additional benefits of the Prototype Design Pattern are listed below:
• Adding and removing products at run-time. Prototypes let you incorporate a
new concrete product class into a system simply by registering a prototypical
instance with the client. That's a bit more flexible than other creational
patterns, because a client can install and remove prototypes at run-time.
• Specifying new objects by varying values. Highly dynamic systems let you
define new behavior through object composition - by specifying values for an
object's variables, for example - and not by defining new classes. You
effectively define new kinds of objects by instantiating existing classes and
registering the instances as prototypes of client objects. A client can exhibit
new behavior by delegating responsibility to the prototype.

This kind of design lets users define new "classes" without programming. In
fact, cloning a prototype is similar to instantiating a class. The Prototype
pattern can greatly reduce the number of classes a system needs.
• Specifying new objects by varying structure. Many applications build objects
from parts and subparts. Editors for circuit design, for example, build circuits
out of subcircuits. For convenience, such applications often let you
instantiate complex, user-defined structures, say, to use a specific subcircuit
again and again.
The Prototype pattern supports this as well. We simply add this subcircuit as
a prototype to the palette of available circuit elements. As long as the
composite circuit object implements clone() as a deep copy, circuits with
different structures can be prototypes.

• Reduced subclassing. Factory Method often produces a hierarchy of Creator
classes that parallels the product class hierarchy. The Prototype pattern lets
you clone a prototype instead of asking a factory method to make a new
object. Hence you don't need a Creator class hierarchy at all. This benefit
applies primarily to languages like C++ that don't treat classes as first-class
objects. Languages that do, like Smalltalk and Objective C, derive less
benefit, since you can always use a class object as a creator. Class objects
already act like prototypes in these languages.
• Configuring an application with classes dynamically. Some run-time
environments let you load classes into an application dynamically. The
Prototype pattern is the key to exploiting such facilities in a language like
C++.
An application that wants to create instances of a dynamically loaded class
won't be able to reference its constructor statically. Instead, the run-time
environment creates an instance of each class automatically when it's
loaded, and it registers the instance with a prototype manager. Then the
application can ask the prototype manager for instances of newly loaded
classes, classes that weren't linked with the program originally.
The main liability of the Prototype pattern is that each subclass of Prototype must
implement the clone() operation, which may be difficult. For example, adding
clone() is difficult when the classes under consideration already exist.
Implementing clone() can be difficult when their internals include objects that
don't support copying or have circular references.
• Factory Method: creation through inheritance.
• Prototype: creation through delegation.
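A minimal registry-based sketch of creation through delegation (Shape, Circle and PrototypeRegistry are invented names; clone() is the standard Object.clone mechanism):

```java
import java.util.HashMap;
import java.util.Map;

// Prototype: products are obtained by cloning registered prototypical
// instances instead of calling constructors through a Creator hierarchy.
abstract class Shape implements Cloneable {
    public Shape copy() {
        try { return (Shape) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
    abstract String name();
}

class Circle extends Shape { String name() { return "circle"; } }

class PrototypeRegistry {
    private final Map<String, Shape> prototypes = new HashMap<>();
    void register(String key, Shape p) { prototypes.put(key, p); }  // run-time
    Shape create(String key) { return prototypes.get(key).copy(); }
}
```

New product kinds can be registered or removed at run-time, and no parallel Creator class hierarchy is needed.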

18. Proxy
The Proxy pattern introduces a level of indirection when accessing an object. The
additional indirection has many uses, depending on the kind of proxy:
• A remote proxy can hide the fact that an object resides in a different address
space.
• A virtual proxy can perform optimizations such as creating an object on
demand.
• Both protection proxies and smart references allow additional housekeeping
tasks when an object is accessed.
There's another optimization that the Proxy pattern can hide from the client. It's
called copy-on-write, and it's related to creation on demand. Copying a large and

complicated object can be an expensive operation. If the copy is never modified,
then there's no need to incur this cost. By using a proxy to postpone the copying
process, we ensure that we pay the price of copying the object only if it's modified.
To make copy-on-write work, the subject must be reference counted. Copying the
proxy will do nothing more than increment this reference count. Only when the client
requests an operation that modifies the subject does the proxy actually copy it. In
that case the proxy must also decrement the subject's reference count. When the
reference count goes to zero, the subject gets deleted.
Copy-on-write can reduce the cost of copying heavyweight subjects significantly.
• Proxies promote strong cohesion.
• Proxies simplify the client object and the object being proxied (by hiding
complex issues like remoting and caching, etc.)

• If the instantiation of all classes is encapsulated by policy, inserting a proxy at
a later time is significantly easier.
• Proxies often evolve into Decorators when multiple additional behaviors are
needed. Knowing this, one does not have to introduce the Decorator until it is
needed, avoiding overdesign and analysis paralysis.
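The virtual-proxy case (creation on demand) can be sketched as follows; this is an invented example (Image/RealImage/ImageProxy), not the copy-on-write variant described above:

```java
// Virtual proxy: the expensive subject is created only on first use;
// clients see the same Image interface either way.
interface Image { String render(); }

class RealImage implements Image {
    static int loads = 0;                  // counts expensive constructions
    RealImage() { loads++; }               // stands in for costly disk I/O
    public String render() { return "pixels"; }
}

class ImageProxy implements Image {
    private RealImage real;                // lazily initialized
    public String render() {
        if (real == null) real = new RealImage();   // create on demand
        return real.render();
    }
}
```

Because proxy and subject share an interface, the client cannot tell (and need not care) whether the real object exists yet.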

19. Singleton
The Singleton Design Pattern has several benefits:
• Controlled access to sole instance. Because the Singleton class encapsulates
its sole instance, it can have strict control over how and when clients access
it.
• Reduced name space. The Singleton pattern is an improvement over global
variables. It avoids polluting the name space with global variables that store
sole instances.
• Permits refinement of operations and representation. The Singleton class
may be subclassed, and it's easy to configure an application with an instance
of this extended class. You can configure the application with an instance of
the class you need at run-time.
• Permits a variable number of instances. The pattern makes it easy to change
your mind and allow more than one instance of the Singleton class.
Moreover, you can use the same approach to control the number of
instances that the application uses. Only the operation that grants access to
the Singleton instance needs to change.
• More flexible than class operations. Another way to package a singleton's
functionality is to use class operations (that is, static member functions in
C++ or class methods in Smalltalk). But both of these language techniques
make it hard to change a design to allow more than one instance of a class.
Moreover, static member functions in C++ are never virtual, so subclasses
can't override them polymorphically.
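A common Java realization is the initialization-on-demand holder idiom, which gives lazy, thread-safe access without synchronization (Registry is an invented name):

```java
// Singleton via the lazy holder idiom: the JVM guarantees Holder is
// initialized exactly once, on first call to getInstance().
class Registry {
    private Registry() { }                         // no outside instantiation
    private static class Holder {
        static final Registry INSTANCE = new Registry();
    }
    static Registry getInstance() { return Holder.INSTANCE; }
}
```

To later permit a controlled number of instances, only getInstance() would need to change, which is the flexibility benefit listed above.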

20. State
The State Design Pattern has the following consequences:
• It localizes state-specific behavior and partitions behavior for different states.
The State pattern puts all behavior associated with a particular state into one
object. Because all state-specific code lives in a State subclass, new states
and transitions can be added easily by defining new subclasses.
• It makes state transitions explicit. When an object defines its current state
solely in terms of internal data values, its state transitions have no explicit
representation; they only show up as assignments to some variables.
Introducing separate objects for different states makes the transitions more
explicit. Also, State objects can protect the Context from inconsistent
internal states, because state transitions are atomic from the Context's
perspective - they happen by rebinding one variable (the Context's State
object variable), not several.
• State objects can be shared. If State objects have no instance variables -
that is, the state they represent is encoded entirely in their type - then
contexts can share a State object. When states are shared in this way, they
are essentially flyweights with no intrinsic state, only behavior.
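A minimal sketch of the atomic, one-variable transition (Switch/On/Off are invented names):

```java
// State: the context delegates to its current State object; a transition
// rebinds a single variable, so it is atomic from the context's view.
interface State { State next(); String name(); }

class Off implements State {
    public State next() { return new On(); }
    public String name() { return "off"; }
}

class On implements State {
    public State next() { return new Off(); }
    public String name() { return "on"; }
}

class Switch {                                 // the Context
    private State state = new Off();
    void toggle() { state = state.next(); }    // one rebind = one transition
    String status() { return state.name(); }
}
```

Since On and Off hold no instance data, their instances could be shared across contexts, which is the flyweight-like sharing noted above.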

21. Strategy
The Strategy Design Pattern has the following benefits and drawbacks:
• Families of related algorithms. Hierarchies of Strategy classes define a family
of algorithms or behaviors for contexts to reuse. Inheritance can help factor
out common functionality of the algorithms.
• An alternative to subclassing. Inheritance offers another way to support a
variety of algorithms or behaviors. You can subclass a Context class directly
to give it different behaviors. But this hardwires the behavior into Context. It
mixes the algorithm implementation with Context's, making Context harder
to understand, maintain, and extend. And you can't vary the algorithm
dynamically. You wind up with many related classes whose only difference is
the algorithm or behavior they employ. Encapsulating the algorithm in
separate Strategy classes lets you vary the algorithm independently of its
context, making it easier to switch, understand, and extend.
• Strategies eliminate conditional statements. The Strategy pattern offers an
alternative to conditional statements for selecting desired behavior. When
different behaviors are lumped into one class, it's hard to avoid using
conditional statements to select the right behavior. Encapsulating the
behavior in separate Strategy classes eliminates these conditional
statements.
• A choice of implementations. Strategies can provide different
implementations of the same behavior. The client can choose among
strategies with different time and space trade-offs.
• Clients must be aware of different Strategies. The pattern has a potential
drawback in that a client must understand how Strategies differ before it can
select the appropriate one. Clients might be exposed to implementation
issues. Therefore you should use the Strategy pattern only when the
variation in behavior is relevant to clients.
The drawbacks are:
• Communication overhead between Strategy and Context. The Strategy
interface is shared by all ConcreteStrategy classes whether the algorithms
they implement are trivial or complex. Hence it's likely that some
ConcreteStrategies won't use all the information passed to them through this
interface; simple ConcreteStrategies may use none of it! That means there
will be times when the context creates and initializes parameters that never
get used. If this is an issue, then you'll need tighter coupling between
Strategy and Context.
• Increased number of objects. Strategies increase the number of objects in an

application. Sometimes you can reduce this overhead by implementing
strategies as stateless objects that contexts can share. Any residual state is
maintained by the context, which passes it in each request to the Strategy
object. Shared strategies should not maintain state across invocations. The
Flyweight pattern describes this approach in more detail.
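The run-time swap of algorithms can be sketched as follows (Discount/Checkout are invented names for illustration):

```java
// Strategy: interchangeable pricing algorithms behind one interface;
// the context (Checkout) can switch algorithms at run time.
interface Discount { double apply(double price); }

class NoDiscount implements Discount {
    public double apply(double price) { return price; }
}

class PercentOff implements Discount {
    private final double percent;
    PercentOff(double percent) { this.percent = percent; }
    public double apply(double price) { return price * (1 - percent / 100); }
}

class Checkout {                               // the Context
    private Discount discount;
    Checkout(Discount discount) { this.discount = discount; }
    void setDiscount(Discount d) { discount = d; }   // swap algorithm
    double total(double price) { return discount.apply(price); }
}
```

Subclassing Checkout for each pricing rule would hardwire the algorithm into the context; with Strategy the variation lives in its own stateless, shareable classes.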

22. Template Method
Template methods are a fundamental technique for code reuse. They are particularly
important in class libraries, because they are the means for factoring out common
behavior in library classes.
Template methods lead to an inverted control structure that's sometimes referred to
as "the Hollywood principle," that is, "Don't call us, we'll call you". This refers to how
a parent class calls the operations of a subclass and not the other way around.

It's important for template methods to specify which operations are hooks (may be
overridden) and which are abstract operations (must be overridden). To reuse an
abstract class effectively, subclass writers must understand which operations are
designed for overriding.
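The hook vs. abstract-operation distinction can be sketched as follows (Report/SalesReport are invented names):

```java
// Template Method: the abstract parent fixes the algorithm skeleton and
// calls down into subclass steps ("don't call us, we'll call you").
abstract class Report {
    final String generate() {                  // the template method
        return header() + body() + footer();   // fixed ordering
    }
    String header() { return "[start]"; }      // hook: may be overridden
    abstract String body();                    // must be overridden
    String footer() { return "[end]"; }        // hook
}

class SalesReport extends Report {
    String body() { return "sales"; }
}
```

Marking generate() final keeps the skeleton fixed while subclasses vary only the designated steps.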

23. Visitor
Some of the benefits and liabilities of the Visitor pattern are as follows:
• Visitor makes adding new operations easy. Visitors make it easy to add
operations that depend on the components of complex objects. You can
define a new operation over an object structure simply by adding a new
visitor. In contrast, if you spread functionality over many classes, then you
must change each class to define a new operation.
• A visitor gathers related operations and separates unrelated ones. Related
behavior isn't spread over the classes defining the object structure; it's
localized in a visitor. Unrelated sets of behavior are partitioned in their own
visitor subclasses. That simplifies both the classes defining the elements and
the algorithms defined in the visitors. Any algorithm-specific data structures
can be hidden in the visitor.
• Visiting across class hierarchies. An iterator can visit the objects in a
structure as it traverses them by calling their operations. But an iterator
can't work across object structures with different types of elements.
Visitor does not have this restriction. It can visit objects that don't have a
common parent class. You can add any type of object to a Visitor interface.
• Accumulating state. Visitors can accumulate state as they visit each element
in the object structure. Without a visitor, this state would be passed as extra
arguments to the operations that perform the traversal, or they might
appear as global variables.
• Breaking encapsulation. Visitor's approach assumes that the
ConcreteElement interface is powerful enough to let visitors do their job. As
a result, the pattern often forces you to provide public operations that access
an element's internal state, which may compromise its encapsulation.
The drawback is:
• Adding new ConcreteElement classes is hard. The Visitor pattern makes it
hard to add new subclasses of Element. Each new ConcreteElement gives
rise to a new abstract operation on Visitor and a corresponding
implementation in every ConcreteVisitor class. Sometimes a default
implementation can be provided in Visitor that can be inherited by most of
the ConcreteVisitors, but this is the exception rather than the rule.

So the key consideration in applying the Visitor pattern is whether you are
most likely to change the algorithm applied over an object structure or the
classes of objects that make up the structure. The Visitor class hierarchy can
be difficult to maintain when new ConcreteElement classes are added
frequently. In such cases, it's probably easier just to define operations on the
classes that make up the structure. If the Element class hierarchy is stable,
but you are continually adding operations or changing algorithms, then the
Visitor pattern will help you manage the changes.
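The double-dispatch and state-accumulation points can be sketched as follows (Line/Rect/PerimeterVisitor are invented names):

```java
// Visitor: new operations are added as visitor classes; elements only
// "accept" and dispatch to the visit overload for their concrete type.
interface Visitor { void visit(Line l); void visit(Rect r); }
interface Element { void accept(Visitor v); }

class Line implements Element {
    double length = 4;
    public void accept(Visitor v) { v.visit(this); }
}

class Rect implements Element {
    double w = 2, h = 3;
    public void accept(Visitor v) { v.visit(this); }
}

class PerimeterVisitor implements Visitor {
    double total = 0;                          // visitors may accumulate state
    public void visit(Line l) { total += l.length; }
    public void visit(Rect r) { total += 2 * (r.w + r.h); }
}
```

Adding an AreaVisitor would require no change to Line or Rect; adding a new element type, by contrast, would force a new visit overload on every visitor, which is exactly the drawback described above.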

7.4 From a list, select the benefits and drawbacks of a specified Core J2EE
pattern drawn from this book – Alur, Crupi and Malks (2003). Core
J2EE Patterns: Best Practices and Design Strategies, 2nd Edition.

Ref. • [CORE_J2EE_PATTERNS]

# - J2EE Pattern - Definition

Presentation Tier

1. Intercepting Filter - To intercept and manipulate a request and a response
before and after they are processed.

2. Front Controller - To centralize presentation-tier request handling.

3. Context Object - To avoid using protocol-specific system information outside
of its relevant context.

4. Application Controller - To centralize retrieval and invocation of
request-processing components, such as commands and views.

5. View Helper - To separate programming logic from the view to facilitate
division of labor between software developers and web page designers.

6. Composite View - To build a view from modular, atomic component parts that
are combined to create a composite whole, while managing the content and the
layout independently.

7. Service to Worker - To provide dynamic handling of requests and responses
before they are passed to the view tier. (Service to Worker = Front Controller
+ Application Controller + View Helper)

8. Dispatcher View - For a view to handle a request and generate a response,
while managing limited amounts of business processing.

Business Tier

9. Business Delegate - To hide the details of service creation,
reconfiguration, and invocation retries from the clients.

10. Service Locator - To transparently locate business components and services
in a uniform manner.

11. Session Façade - To control client access to business objects.

12. Application Service - To encapsulate use case-specific logic outside of
individual Business Objects.

13. Business Object - To have a conceptual model containing structured,
interrelated composite objects with sophisticated business logic, validation
and business rules.

14. Composite Entity - To encapsulate the physical database design from the
clients.

15. Transfer Object - To transfer multiple data elements over a tier.

16. Transfer Object Assembler - To aggregate transfer objects from several
business components.

17. Value List Handler - To provide the clients with an efficient search and
iterate mechanism over a large results set.

Integration Tier

18. Data Access Object - To encapsulate data access and manipulation in a
separate object.

19. Service Activator - To invoke services asynchronously.

20. Domain Store - To separate persistence from your object model.

21. Web Service Broker - To provide access to one or more services using XML
and web protocols.

1. Intercepting Filter
• Centralizes control with loosely coupled handlers
Filters provide a central place for handling processing across multiple
requests, as does a controller. Filters are better suited to massaging
requests and responses for ultimate handling by a target resource, such as a
controller. Additionally, a controller often ties together the management of
numerous unrelated common services, such as authentication, logging,
encryption, and so forth. Filtering allows for much more loosely coupled
handlers, which can be combined in various permutations.

• Improves reusability
Filters promote cleaner application partitioning and encourage reuse. You can
transparently add or remove these pluggable interceptors from existing code,
and due to their standard interface, they work in any permutations and are
reusable for varying presentations.
• Declarative and flexible configuration
Numerous services are combined in varying permutations without a single
recompile of the core code base.
• Information sharing is inefficient
Sharing information between filters can be inefficient, since by definition
each filter is loosely coupled. If large amounts of information must be shared
between filters, then this approach might prove to be costly.

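The chained, recombinable handlers can be sketched in plain Java, outside any servlet container. This is an invented sketch (Filter/FilterChain/AuditFilter names are illustrative), not the javax.servlet API:

```java
import java.util.List;

// Intercepting Filter: pluggable filters wrap request handling and can be
// recombined in any permutation without touching the target resource.
interface Filter { String process(String request, FilterChain chain); }

class FilterChain {
    private final List<Filter> filters;
    private final int index;
    FilterChain(List<Filter> filters) { this(filters, 0); }
    private FilterChain(List<Filter> filters, int index) {
        this.filters = filters;
        this.index = index;
    }
    String next(String request) {
        if (index < filters.size())
            return filters.get(index).process(request, new FilterChain(filters, index + 1));
        return "handled:" + request;           // the target resource
    }
}

class AuditFilter implements Filter {
    public String process(String request, FilterChain chain) {
        return chain.next(request.trim());     // pre-process, then pass along
    }
}
```

Adding or removing a filter is just a change to the list handed to the chain, which is what makes the configuration declarative.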

2. Front Controller
• Centralizes control
A controller provides a central place to handle control logic that is common
across multiple requests. A controller is the initial access point of the request
handling mechanism and delegates to an Application Controller to perform
the underlying business processing and view generation functionality.
• Improves manageability
Centralizing control makes it easier to monitor control flow that also provides
a choke point for illicit attempts to access the application. In addition,
auditing a single entrance into the application requires fewer resources than
distributing checks across all pages.
• Improves reusability
Promotes cleaner application partitioning and encourages reuse, as common
code moves into a controller or is managed/delegated to by a controller.
• Improves role separation
A controller promotes cleaner separation of team roles, since one role
(software developer) can more easily maintain programming logic while
another (web production) maintains markup for view generation.
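The central dispatch idea can be sketched in plain Java (FrontController/Command are invented names; in a real application this would sit behind a servlet entry point):

```java
import java.util.HashMap;
import java.util.Map;

// Front Controller: a single entry point maps each incoming action name to
// a command, centralizing control, auditing, and error handling.
interface Command { String execute(); }

class FrontController {
    private final Map<String, Command> commands = new HashMap<>();
    void register(String action, Command c) { commands.put(action, c); }
    String handle(String action) {             // the single choke point
        Command c = commands.get(action);
        return c == null ? "error:unknown-action" : c.execute();
    }
}
```

Because every request funnels through handle(), illicit access attempts and auditing can be checked in one place rather than on every page.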
3. Context Object
• Improves reusability and maintainability
Application components and subsystems are more generic and can be reused
for various types of clients, since the application interfaces are not polluted
with protocol-specific data types.
• Improves testability
Using Context Objects helps remove dependencies on protocol-specific code
that might tie a runtime environment to a container, such as a web server or
an application server. Testing is easier when such dependencies are limited
or removed, since automated testing tools, such as JUnit can work directly
with Context Objects.
• Reduces constraints on evolution of interfaces
Interfaces that accept a Context Object, instead of the numerous objects
that the Context Object encapsulates, are less tied to these specific details
that might constrain later changes. This is important when developing

frameworks, but is also valuable in general.
• Reduces performance
There is a modest performance hit, because state is transferred from one
object to another. This reduction in performance is usually far outweighed by
the benefits of improved reusability and maintainability of the application
subcomponents.
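The decoupling idea can be sketched as follows. This is an invented example (RequestContext): a raw parameter map stands in for a protocol-specific object such as an HttpServletRequest, and the rest of the application depends only on the neutral wrapper:

```java
import java.util.Map;

// Context Object: protocol-specific request state is copied into a
// protocol-neutral object, so application code (and its tests) never
// touch container types directly.
class RequestContext {
    private final Map<String, String> params;
    RequestContext(Map<String, String> params) { this.params = params; }
    String userId() { return params.getOrDefault("uid", "anonymous"); }
}
```

A JUnit test can build a RequestContext from a plain Map, which is the testability benefit described above: no web container is needed.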
4. Application Controller
• Improves modularity
Separating common action and view management code into its own set of
classes makes the application more modular. This modularity might also ease
testing, since aspects of the Application Controller functionality will not be
tied to a web container.
• Improves reusability

You can reuse the common, modular components.
• Improves extensibility
Functionality can be added to the request handling mechanism in a
predictable way, and independent of protocol-specific or network access
code. Declarative flow control reduces coupling between code and
navigation/flow control rules, allowing these rules to be modified without
recompiling or modifying code.
5. View Helper
• Improves application partitioning, reuse, and maintainability
Separating HTML from processing logic, such as control logic, business logic,
data access logic, and formatting logic, results in improved application
partitioning.
In JSP, for example, try to minimize the amount of Java programming logic
that is embedded within the page, and try to minimize the amount of HTML
markup that is embedded within programming code. Failing to minimize
either of these scenarios results in a cumbersome and unwieldy situation,
especially in larger projects.
Programming logic that is extracted from JSP and encapsulated within
helpers is reusable, reducing the duplication of embedded view code and
easing maintenance.
• Improves role separation
Using helpers to separate processing logic from views also reduces the
potential dependencies that individuals fulfilling different roles might have on
the same resources. For example, if processing logic is embedded within a
view, then a software developer is tasked with maintaining code that is
embedded within HTML markup. Then, a web production team member
would need to modify page layout and design components that are mingled
with Java code. Neither individual fulfilling these roles is likely to be familiar
with the implementation specifics of the other individual's work, raising the
likelihood of accidental modifications introducing bugs into the system.
• Eases testing
As processing logic is extracted into separate helper components, testing
individual pieces of code becomes much easier. Testing a piece of code that
is embedded within a JSP is much more difficult than testing code that is
encapsulated within a separate class.
• Helper usage mirrors scriptlets
One important reason for extracting processing logic from a page is to

reduce the implementation details that are embedded directly within the
page. It is important to keep in mind, though, that it is not a panacea to
simply use JavaBeans or custom tags within your JSP. The use of certain
generic helpers only replaces the embedded Java code with a references to
helpers that, in effect, produce the same problem of exposing the
implementation details, as opposed to the intent of the code.
An example is the use of a conditional helper, such as a custom tag that
models the conditional logic of an 'if' statement. Heavy usage of this sort of
helper tag may simply mirror the scriptlet code that it is intended to replace.
As a result, the resulting fragment continues to look like programming logic
embedded within the page. Using helpers as scriptlets is a bad practice,
although it is often done in an attempt to apply View Helper.

6. Composite View

• Improves modularity and reuse
The pattern promotes modular design. It is possible to reuse atomic portions
of a template, such as a table of stock quotes, in numerous views, and to
decorate these reused portions with different information. This pattern
permits the table to be moved into its own module and simply included
where necessary. This type of dynamic layout and composition reduces
duplication, fosters reuse, and improves maintainability.
• Adds role-based or policy-based control
A Composite View might conditionally include view template fragments based
on runtime decisions, such as user role or security policy.
• Enhances maintainability
Managing changes to portions of a template is much more efficient when the
template is not hardcoded directly into the view markup. When the template
content is kept separate from the view, you can modify modular portions of
template content independently of the template layout. Additionally, these
changes are available to the client immediately, depending on the
implementation strategy. You can more easily manage modifications to page
layout as well, since changes are centralized.
• Reduces maintainability
Aggregating atomic pieces of the display to create a single view introduces
the potential for display errors, since subviews are page fragments. This is a
limitation that can become a maintainability issue.
When you use this pattern, be aware that subviews must not be complete
views. You must account for tag usage quite strictly in order to create valid
Composite Views, and this can become a maintainability issue.
• Reduces performance
Generating a display that includes numerous subviews might slow
performance. Runtime inclusion of subviews will result in a delay each time
the page is served to the client. In environments with specific response-time
requirements, such performance slowdowns, though typically
extremely minimal, might not be acceptable. An alternative is to move the
subview inclusion to translation time, though this limits the subview to
changing only when the page is retranslated.
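The runtime composition and role-based inclusion described above can be sketched as follows (a simplified stand-in for a template mechanism such as Tiles; all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;

// Hypothetical composite view: assembles named subview fragments at
// runtime; a fragment may require a role, mirroring role-based inclusion.
class CompositeView {
    // pairs of (required role, fragment producer); "" means public
    private final List<Map.Entry<String, Supplier<String>>> fragments =
            new ArrayList<>();

    CompositeView add(String requiredRole, Supplier<String> fragment) {
        fragments.add(Map.entry(requiredRole, fragment));
        return this;
    }

    // Render only the fragments the current user's roles allow.
    String render(Set<String> roles) {
        StringBuilder page = new StringBuilder();
        for (Map.Entry<String, Supplier<String>> f : fragments) {
            if (f.getKey().isEmpty() || roles.contains(f.getKey())) {
                page.append(f.getValue().get()).append('\n');
            }
        }
        return page.toString();
    }
}
```

Because each fragment is a separate supplier, an atomic portion such as a stock-quote table can be reused across views and decorated differently in each.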

7. Service to Worker
• Centralizes control and improves modularity, reusability, and maintainability
Centralizing control and request-handling logic improves the system's
modularity and reusability. Common request processing code can be reused,
reducing the sort of duplication that occurs if processing logic is embedded
within views. Less duplication means improved maintainability, since changes
are made in a single location.
• Improves role separation
Centralizing control and request-handling logic separates it from view
creation code and allows for a cleaner separation of team roles. Software
developers can focus on maintaining programming logic while page authors
can focus on the view.
8. Dispatcher View
• Leverages frameworks and libraries
Frameworks and libraries realize and support specific patterns. The
Dispatcher View approach is supported in standard and custom libraries that
provide view adapters and transformers and, for limited use, data access
tags. An example of one standard library is JSTL.
• Introduces potential for poor separation of the view from the model and
control logic
Since business processing is managed by the view, the Dispatcher View
approach is inappropriate for handling requests that rely upon heavy
business processing or data access. Embedding processing logic of any form
within a view should be minimized. The overriding goal is to separate control
and business logic from the view and to localize disparate logic.
• Separates processing logic from view and improves reusability
View Helpers adapt and convert the presentation model for the view.
Processing logic that might otherwise be embedded within the view is
extracted into reusable helpers, exposing less of the code's implementation
details and more of its intent.

9. Business Delegate
• Reduces coupling, improves maintainability
The Business Delegate reduces coupling between the presentation tier and
the business tier by hiding all business-tier implementation details. Managing
changes is easier because they are centralized in the Business Delegate.
• Translates business service exceptions
The Business Delegate translates network or infrastructure-related
exceptions into business exceptions, shielding clients from the knowledge of
the underlying implementation specifics.
• Improves availability
When a Business Delegate encounters a business service failure, the
delegate can implement automatic recovery features without exposing the
problem to the client. If the recovery succeeds, the client doesn't need to
know about the failure. If the recovery attempt fails, then the Business
Delegate needs to inform the client of the failure. Additionally, the Business
Delegate methods can be synchronized, if necessary.
• Exposes a simpler, uniform interface to the business tier
The Business Delegate is implemented as a simple Java object, making it
easier for application developers to use business-tier components without
dealing with the complexities of the business-service implementations.
• Improves performance
The Business Delegate can cache information on behalf of the presentation-
tier components to improve performance for common service requests.
• Introduces an additional layer
The Business Delegate adds a layer that might be seen as increasing
complexity and decreasing flexibility. However, the benefits of the pattern
outweigh such drawbacks.
• Hides remoteness
Location transparency is a benefit of this pattern, but it can lead to problems
if you don't keep in mind where the Business Delegate resides. A Business
Delegate is a client-side proxy to a remote service. Even though a Business
Delegate is implemented as a local POJO, when you call a method on a
Business Delegate, the Business Delegate typically has to make a call across
the network to the underlying business service to fulfill this request.
Therefore, try to keep calls to the Business Delegate to a minimum to
prevent excess network traffic.
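The exception-translation and coupling-reduction points above can be sketched as follows (the service contract, delegate, and exception names are all hypothetical, not a real API):

```java
import java.rmi.RemoteException;

// Hypothetical Business Delegate sketch: all names are illustrative.
interface OrderService {                  // remote business service contract
    String placeOrder(String item) throws RemoteException;
}

class OrderServiceException extends Exception {
    OrderServiceException(String message, Throwable cause) {
        super(message, cause);
    }
}

class OrderDelegate {
    private final OrderService service;   // obtained via lookup in practice

    OrderDelegate(OrderService service) {
        this.service = service;
    }

    // Infrastructure exceptions are translated into a business exception,
    // so presentation-tier code never deals with RemoteException directly.
    String placeOrder(String item) throws OrderServiceException {
        try {
            return service.placeOrder(item);
        } catch (RemoteException e) {
            throw new OrderServiceException("Order service unavailable", e);
        }
    }
}
```

A retry loop inside the catch block is where the automatic-recovery behavior mentioned above would go.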

10. Service Locator
• Abstracts complexity
The Service Locator encapsulates the complexity of the service lookup and
creation process and keeps it hidden from the client.
• Provides uniform service access to clients
The Service Locator provides a useful and precise interface that all clients
can use. The interface ensures that all types of clients in the application
uniformly access business objects, in terms of lookup and creation. This
uniformity reduces development and maintenance overhead.
• Facilitates adding EJB business components
Because clients of enterprise beans are not aware of the EJB Home objects,
you can add new EJB Home objects for enterprise beans developed and
deployed at a later time without impacting the clients. JMS clients are not
directly aware of the JMS connection factories, so you can add new
connection factories without impacting the clients.
• Improves network performance
The clients are not involved in lookup and object creation. Because the
Service Locator performs this work, it can aggregate the network calls
required to look up and create business objects.
• Improves client performance by caching
The Service Locator can cache the initial context objects and references to
the factory objects (EJBHome, JMS connection factories). Also, when
accessing web services, the Service Locator can cache WSDL definitions and
endpoints.
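The caching behavior described above can be sketched as follows. A real implementation would wrap JNDI (`InitialContext.lookup`) and cache EJBHome objects or JMS connection factories; here the lookup function is pluggable (a hypothetical stand-in) so the caching can be shown without a container:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical Service Locator sketch: the pluggable lookup function
// stands in for JNDI; everything else mirrors the pattern's intent.
class ServiceLocator {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Function<String, Object> lookup;

    ServiceLocator(Function<String, Object> lookup) {
        this.lookup = lookup;
    }

    // Each name is resolved at most once; later calls hit the cache,
    // which is how the pattern reduces repeated network lookups.
    Object getService(String jndiName) {
        return cache.computeIfAbsent(jndiName, lookup);
    }
}
```

Clients always go through `getService`, so lookup and creation stay uniform regardless of the underlying factory type.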

11. Session Façade


• Introduces a layer that provides services to remote clients
Session Façades introduce a layer between clients and the business tier to
provide coarse-grained remote services. For some applications, this might be
unnecessary overhead, especially if the business tier is implemented without
using EJB components. However, Session Façades have almost become a
necessity in J2EE applications because they provide remote services and
leverage the benefits of an EJB container, such as transactions, security, and
lifecycle management.
• Exposes a uniform coarse-grained interface
A Session Façade encapsulates the complexity of the underlying business
component interactions and presents the client with a simpler coarse-grained
service-layer interface to the system that is easy to understand and use. In
addition, by providing a Business Delegate for each Session Façade, you can
make it easier for client-side developers to leverage the power of Session
Façades.
• Reduces coupling between the tiers
Using a Session Façade decouples the business components from the clients,
and reduces tight coupling and dependency between the presentation and
business tiers. You can additionally implement Application Services to
encapsulate the complex business logic that acts on several Business
Objects. Instead of implementing the business logic, the Session Façades can
delegate the business logic to Application Services to implement.
• Promotes layering, increases flexibility and maintainability
When using Session Façades with Application Services, you increase the
flexibility of the system by layering and centralizing interactions. This
provides a greater ability to cope with changes due to reduced coupling.
Although changes to the business logic might require changes in the
Application Services or even the Session Façades, the layering makes such
changes more manageable.
• Reduces complexity
Using Application Services, you reduce the complexity of Session Façades.
Using Business Delegate for accessing Session Façades reduces the
complexity of client code. This helps make the system more maintainable
and flexible.
• Improves performance, reduces fine-grained remote methods
The Session Façade can also improve performance because it reduces the
number of remote network invocations by aggregating various fine-grained
interactions into a coarse-grained method. Furthermore, the Session Façades
are typically located in the same process space as the participating business
components, enabling faster communication between the two.
• Centralizes security management
You can manage security policies for the application at the Session Façade
level, since this is the tier presented to the clients. Because of the Session
Façade's coarse-grained access, it is easier and more manageable to define
security policies at this level rather than implementing security policies for
each participating fine-grained business component.
• Centralizes transaction control
The Session Façade represents the coarse-grained remote access point to
business-tier services, so centralizing and applying transaction management
at the Session Façade layer is easier. The Session Façade offers a central
place for managing and defining transaction control in a coarse-grained
fashion. This is simpler than managing transactions in finer-grained business
components or at the client side.
• Exposes fewer remote interfaces to clients
The Session Façade presents a coarse-grained access mechanism to the
business components, which greatly reduces the number of business
components exposed to the client. This reduces the scope for application
performance degradation because the number of interactions between the
clients and the Session Façade is lower than the number of direct
interactions between the client and the individual business components.
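The coarse-grained aggregation described above can be sketched as follows (all component and method names are illustrative; in a container the façade would be a stateless session bean running the use case in one transaction behind one security check):

```java
// Hypothetical Session Façade sketch with stand-in business components.
class InventoryBO {
    boolean reserve(String item) { return true; }            // stand-in logic
}
class PaymentBO {
    String charge(String account, double amount) { return "txn-1"; }
}
class ShippingBO {
    String schedule(String item) { return "ship-1"; }
}

class OrderFacade {
    private final InventoryBO inventory = new InventoryBO();
    private final PaymentBO payment = new PaymentBO();
    private final ShippingBO shipping = new ShippingBO();

    // One coarse-grained remote method replaces three fine-grained
    // interactions the client would otherwise perform over the network.
    String placeOrder(String item, String account, double amount) {
        if (!inventory.reserve(item)) {
            return "OUT_OF_STOCK";
        }
        payment.charge(account, amount);
        return shipping.schedule(item);
    }
}
```

The client sees only `OrderFacade`, so security and transaction policies can be declared once at this boundary.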

12. Application Service
• Centralizes reusable business and workflow logic
Application Services create a layer of services encapsulating the Business
Objects layer. This creates a centralized layer that encapsulates common
business logic acting upon multiple Business Objects.
• Improves reusability of business logic
Application Services create a set of reusable components that can be reused
across various use case implementations. Application Services encapsulate
inter-Business Object operations.
• Avoids duplication of code
By creating a centralized reusable layer of business logic, the Application
Services avoid duplication of code in the clients, such as facades and helpers,
and in other Application Services.
• Simplifies facade implementations
Business logic is moved away from the service facades, whether they are
implemented as Session Façade or POJO facades. The facades become
simpler because they are only responsible for aggregating Application
Services interaction and delegating to one or more Application Services to
fulfill the requested service.
• Introduces additional layer in the business tier
The Application Service creates an additional layer in the business tier, which
you might consider an unnecessary overhead for some applications.
However, the additional layer provides for a powerful abstraction in the
application to encapsulate reusable common business logic.

13. Business Object


• Promotes object-oriented approach to the business model implementation
Business Objects create a logical layer of responsibility that reflects object
model implementation of the business model. For OO multi-tier applications,
this is a natural approach to implementing the business tier using objects.
• Centralizes business behavior and state, and promotes reuse
Business objects provide a centralized and modular approach to multi-tier
architecture by abstracting and implementing the business logic, rules and
behavior in a separate set of components. Such centralization provides for
and promotes reuse of the abstractions in the business tier across use cases
and different kinds of clients.
• Avoids duplication of and improves maintainability of code
Due to centralization of business state and behavior, clients avoid embedding
business logic and thereby avoid duplication of code. Using Business Objects
improves the maintainability of the system as a whole because they promote
reusability and centralization of code.
• Separates persistence logic from business logic
The persistence mechanism can be hidden and separated from the Business
Objects. You can use various persistence strategies such as JDO, custom
JDBC, object-relational mapping tools, or entity beans to facilitate
persistence of Business Objects.
• Promotes service-oriented architecture
Business objects act as a centralized object model to all clients in an
application. You can build various services on top of Business Objects, which
can also use other services such as persistence, business rules, integration,
and so forth. This facilitates separation of concerns in a multi-tiered
application and facilitates service-oriented architecture.
• POJO implementations can induce, and are susceptible to, stale data
When you implement Business Objects as POJOs in a distributed multi-tier
application, a Business Object might end up instantiated in multiple VMs or
containers. The application is responsible for ensuring that these multiple
instances maintain consistency and integrity of the business data. This might
require synchronization of state among the instances, and between the
instances and the data store, to guarantee the integrity of the business data
and avoid stale data. On the other hand, when you implement the Business
Objects as entity beans, the container handles the creation, synchronization,
and other lifecycle management of all instances so you don't need to address
this issue of data integrity.
The pattern has the following drawbacks:
• Adds extra layer of indirection
In some applications, such strict separation of concerns might be considered
a formality rather than a necessity. This is especially true for applications
with a trivial business model and business logic, or if the data model is a
sufficient representation of the business model and letting presentation
components access the data in the resource tier directly using Data Access
Objects is simpler. However, many designers might start out with the
assumption that the data model is sufficient and then later realize that such
an assumption was premature due to insufficient analysis. Fixing this
problem later in the development stage can be expensive.
• Can result in bloated (i.e. excessively large) objects
Certain use cases may only require the intrinsic behavior encapsulated within
a Business Object. A Business Object tends to get bloated as more and more
use case-specific behavior is implemented in it. To avoid bloating the
Business Objects, implement any extrinsic business behavior specific to a
particular use case or client and business behavior that acts on multiple
Business Objects in the form of an Application Service, rather than including
it in the Business Object.
14. Composite Entity
• Increases maintainability
When parents and dependent Business Objects are implemented using
Composite Entity with POJO dependent objects, you can reduce the number
of fine-grained entity beans. When using EJB 2.x, you might want to
implement the dependent objects as local entity beans and leverage other
features, such as CMR and CMP. This improves the maintainability of the
application.
• Improves network performance
Aggregation of the parent and dependent Business Objects into fewer
coarse-grained entity beans improves overall performance for EJB 1.1. This
reduces network overhead because it eliminates inter-entity bean
communication. For EJB 2.x, implementing Business Objects as Composite
Entity using local entity beans has the same benefit because all entity bean
communications are local to the client. However, note that co-location is still
less efficient than working with POJO Business Objects due to container
services for lifecycle, security, and transaction management for entity beans.

• Reduces database schema dependency
Composite Entity provides an object view of the data in the database. The
database schema is hidden from the clients, since the mapping of the entity
bean to the schema is internal to the Composite Entity. Changes to the
database schema might require changes to the Composite Entity beans.
However, the clients are not affected since the Composite Entity beans do
not expose the schema to the external world.
• Increases object granularity
With a Composite Entity, the client typically looks up the parent entity bean
instead of locating numerous fine-grained dependent entity beans. The
parent entity bean acts as a Facade [GoF] to the dependent objects and
hides the complexity of dependent objects by exposing a simpler interface.
Composite Entity avoids fine-grained method invocations on the dependent
objects, decreasing the network overhead.

• Facilitates composite transfer object creation
The Composite Entity can create a composite Transfer Object that contains
all the data from the entity bean and its dependent objects, and returns the
Transfer Object to the client in a single method call. This reduces the number
of remote calls between the clients and the Composite Entity.
15. Transfer Object
• Reduces network traffic
A Transfer Object carries a set of data values from a remote object to the
client in one remote method call, thereby reducing the number of remote
calls. The reduced chattiness of the application results in better network
performance.
• Simplifies remote object and remote interface
The remote object provides coarse-grained getData() and setData()
methods to get and set the transfer object carrying a set of values. This
eliminates the fine-grained get and set methods in the remote object.
• Transfers more data in fewer remote calls
Instead of multiple client calls over the network to the remote object to get
attribute values, you can provide a single method call that returns
aggregated data. When considering this pattern, you must consider the
trade-off of fewer network calls versus transmitting more data per call.
• Reduces code duplication
You can use the Entity Inherits Transfer Object strategy to reduce or
eliminate the duplication of code between the entity and its transfer object.
The pattern has the following drawbacks:
• Introduces stale transfer objects
Using Transfer Objects might introduce stale data in different parts of the
application. However, this is a common side effect whenever you disconnect
data from its remote source, because the remote objects typically do not
keep track of all the clients that obtained the data, to propagate changes.
• Increases complexity due to synchronization and version control
When using the Updatable Transfer Objects strategy, you must design for
concurrent access. This means that the design might get more complex due
to the synchronization and version control mechanisms.
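The coarse-grained getData() accessor described above can be sketched as follows (a serializable value holder carrying several attributes in one call; all names are illustrative):

```java
import java.io.Serializable;

// Hypothetical Transfer Object sketch: one serializable value holder
// replaces several fine-grained remote getter calls.
class CustomerTO implements Serializable {
    final String id;
    final String name;
    final String email;

    CustomerTO(String id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }
}

class CustomerBean {            // stands in for the remote business object
    private final String id = "c-1";
    private final String name = "Ada";
    private final String email = "ada@example.com";

    // Coarse-grained accessor: replaces getId()/getName()/getEmail()
    // as three separate remote calls with a single one.
    CustomerTO getData() {
        return new CustomerTO(id, name, email);
    }
}
```

Once the client holds the CustomerTO, reads are local; this is also exactly why the copy can become stale, as noted in the drawbacks.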

16. Transfer Object Assembler


• Separates business logic, simplifies client logic
When the client includes logic to manage the interactions with distributed
components, clearly separating business logic from the client tier becomes
difficult. The Transfer Object Assembler contains the business logic to
maintain the object relationships and to construct the composite transfer
object representing the model. The client doesn't need to know how to
construct the model or know about the different components that provide
data to assemble the model.
• Reduces coupling between clients and the application model
The Transfer Object Assembler hides the complexity of the construction of
model data from the clients and reduces coupling between clients and the
model. With loose coupling, if the model changes, then the Transfer Object
Assembler requires a corresponding change and insulates the clients from
this change.
• Improves network performance
The Transfer Object Assembler reduces the number of remote calls required
to obtain an application model from the business tier, since typically it
constructs the application model in a single method invocation. However, the
composite Transfer Object might contain a large amount of data. This means
that, though using the Transfer Object Assembler reduces the number of
network calls, the amount of data transported in a single call increases.
Consider this trade-off when you use this pattern.
• Improves client performance
The server-side Transfer Object Assembler constructs the model as a
composite Transfer Object without using any client resources. The client does
not spend any resources in assembling the model.
The drawback of this pattern is:
• Can introduce stale data
The Transfer Object Assembler constructs an application model as a
composite Transfer Object on demand, as a snapshot of the current state of
the business model. Once the client obtains the composite Transfer Object, it
is local to the client and is not network aware. Subsequent changes made to
the business components are not propagated to the application model.
Therefore, the application model can become stale after it is obtained.
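The single-call assembly of a composite Transfer Object can be sketched as follows (all names are illustrative; the hard-coded values stand in for calls to session beans or DAOs):

```java
// Hypothetical Transfer Object Assembler sketch: the assembler gathers
// data from several business components into one composite object.
class ProfileTO {
    final String name;
    ProfileTO(String name) { this.name = name; }
}
class AccountTO {
    final double balance;
    AccountTO(double balance) { this.balance = balance; }
}

class CustomerViewTO {                 // the composite transfer object
    final ProfileTO profile;
    final AccountTO account;
    CustomerViewTO(ProfileTO profile, AccountTO account) {
        this.profile = profile;
        this.account = account;
    }
}

class CustomerViewAssembler {
    // customerId would drive the component lookups; in this sketch the
    // values are hard-coded stand-ins for calls to session beans or DAOs.
    CustomerViewTO assemble(String customerId) {
        ProfileTO profile = new ProfileTO("Ada");
        AccountTO account = new AccountTO(42.0);
        return new CustomerViewTO(profile, account);  // snapshot: can go stale
    }
}
```

The client calls `assemble` once and never learns which components supplied the data.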
17. Value List Handler
• Provides efficient alternative to EJB finders
Value List Handler provides an alternative way to perform searches, and a
way to avoid using EJB finders, which are inefficient for large searches.
• Caches search results
The result set needs to be cached when a client must display the subset of
the results of a large result set. The result set might be a collection of
transfer objects that can be iterated over when using the DAO Transfer
Object Collection strategy, or the results might be a special List
implementation that encapsulates a JDBC RowSet when you use the DAO
RowSet Wrapper List strategy.
• Provides flexible search capabilities
You can implement a Value List Handler to be flexible by providing ad-hoc
search facilities, constructing runtime search arguments using template
methods, and so on. In other words, a Value List Handler developer can
implement intelligent searching and caching algorithms without being limited
by the EJB finder methods.
• Improves network performance
Network performance improves because only a requested subset of the
results, rather than the entire result set, is sent to the client on demand. If
the client/user displays the first few results and then abandons the query,
the network bandwidth is not wasted, since the data is cached on the server
side and never sent to the client.
• Allows deferring entity bean transactions
Caching results on the server side and minimizing finder overhead might
improve transaction management. For example, a query to display a list of
books uses a Value List Handler to obtain the list without using the Book
entity bean's finder methods. At a later point, when the user wants to modify
a book in detail, the client invokes a Session Façade that locates the required
Book entity bean instance with appropriate transaction semantics as needed
for this use-case.
• Promotes layering and separation of concerns
The Value List Handler encapsulates list management behavior in the
business tier and appropriately uses Data Access Object in the integration
tier. This promotes layering in the application, and keeps business logic in
the business-tier components and data access logic in Data Access Objects.
The drawback is:
• Creating a large list of Transfer Objects can be expensive
When the Data Access Object executes a query and creates a collection of
Transfer Objects, it can consume significant resources if the query returns
a large number of matching records. Instead of creating all the Transfer
Object instances, limit the number of rows retrieved by the DAO in the
query by specifying the maximum number of results the DAO fetches
the query by specifying the maximum number of results the DAO fetches
from the database. You might also want to use the DAO Cached RowSet and
RowSet Wrapper List strategy.
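The server-side caching and page-at-a-time iteration described above can be sketched as follows (a minimal stand-in; a real handler would hold transfer objects produced by a DAO):

```java
import java.util.List;

// Hypothetical Value List Handler sketch: the full result list stays
// cached on the server side and the client pulls one page at a time.
class ValueListHandler<T> {
    private final List<T> results;   // cached search results
    private int index = 0;

    ValueListHandler(List<T> results) {
        this.results = results;
    }

    // Returns the next sub-list of at most 'size' elements. If the user
    // abandons the search early, the remaining rows are never sent.
    List<T> nextPage(int size) {
        int to = Math.min(index + size, results.size());
        List<T> page = results.subList(index, to);
        index = to;
        return page;
    }

    int size() {
        return results.size();
    }
}
```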
18. Data Access Object
• Centralizes control with loosely coupled handlers
Filters provide a central place for handling processing across multiple
requests, as does a controller. Filters are better suited to massaging
requests and responses for ultimate handling by a target resource, such as a
controller. Additionally, a controller often ties together the management of
numerous unrelated common services, such as authentication, logging,
encryption, and so on. Filtering allows for much more loosely coupled
handlers, which can be combined in various permutations.
• Enables transparency
Clients can leverage the encapsulation of data sources within the Data
Access Objects to gain transparency to the location and implementation of
the persistent storage mechanisms.
• Provides object-oriented view and encapsulates database schemas
The clients use transfer objects or data cursor objects (RowSet Wrapper List
strategy) to exchange data with the Data Access Objects. Instead of
depending on low-level details of database schema implementations, such as
ResultSets and RowSets, where the clients must be aware of table
structures, column names, and so on, the clients handle data in an object-
oriented manner using the transfer objects and data cursors.
• Enables easier migration
A layer of DAOs makes it easier for an application to migrate to a different
database implementation. The clients have no knowledge of the underlying
data store implementation. Thus, the migration involves changes only to the
DAO layer.

• Reduces code complexity in clients
Since the DAOs encapsulate all the code necessary to interact with the
persistent storage, the clients can use the simpler API exposed by the data
access layer. This reduces the complexity of the data access client code and
improves the maintainability and development productivity.
• Organizes all data access code into a separate layer
Data access objects organize the implementation of the data access code in a
separate layer. Such a layer isolates the rest of the application from the
persistent store and external data sources. Because all data access
operations are now delegated to the DAOs, the separate data access layer
isolates the rest of the application from the data access implementation. This
centralization makes the application easier to maintain and manage.
The pattern has the following drawbacks:
• Adds extra layer
The DAOs create an additional layer of objects between the data client and
the data source that needs to be designed and implemented, to leverage the
benefits of this pattern. While this layer might seem to be extra development
and run-time overhead, it is typically necessary in order to decouple the data
access implementation from the other parts of the application.
• Needs class hierarchy design
When you use a factory strategy, the hierarchy of concrete factories and the
hierarchy of concrete products (DAOs) produced by the factories need to be
designed and implemented. Consider this additional effort if you think you'll
need this extra flexibility, because it increases the complexity of the design.
Use the DAO Factory Method Strategy if that meets your needs, and use
DAO Abstract Factory Strategy only if absolutely required.
• Introduces complexity to enable object-oriented design
While the RowSet Wrapper List strategy encapsulates the data access layer
dependencies and JDBC APIs, and exposes an object-oriented view of the
results data, it introduces considerable complexity in your implementation.
You need to decide whether its benefits outweigh the drawbacks of using the
JDBC RowSet API in the Cached RowSet strategy, or the performance
drawback of using the Transfer Object Collection strategy.
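The transparency and migration benefits described above can be sketched as follows (all names are illustrative; the in-memory map is a stand-in for JDBC code, so a hypothetical JdbcBookDAO could replace it without touching client code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical DAO sketch: clients program against the interface and a
// transfer object, never against the storage mechanism.
class Book {                           // transfer object
    final String isbn;
    final String title;
    Book(String isbn, String title) {
        this.isbn = isbn;
        this.title = title;
    }
}

interface BookDAO {
    void insert(Book book);
    Optional<Book> findByIsbn(String isbn);
}

class InMemoryBookDAO implements BookDAO {
    private final Map<String, Book> table = new HashMap<>();

    public void insert(Book book) {
        table.put(book.isbn, book);
    }

    public Optional<Book> findByIsbn(String isbn) {
        return Optional.ofNullable(table.get(isbn));
    }
}
```

Migrating to a different database means writing a new BookDAO implementation; every client keeps compiling and running unchanged.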

19. Service Activator


• Integrates JMS into enterprise applications
The Service Activator enables you to leverage the power of JMS in POJO
enterprise applications using the POJO Service Activator strategy, and in EJB
enterprise applications using the MDB Service Activator strategy. Regardless
of what platform you are running your application on, as long as you have a
JMS runtime implementation, you can implement and use Service Activator
to provide asynchronous processing capabilities in your application.
• Provides asynchronous processing for any business-tier component
Using the Service Activator pattern lets you provide asynchronous invocation
on all types of enterprise beans, including stateless session beans, stateful
session beans, and entity beans. The Service Activator acts as an
intermediary between the client and the business service, to enable
asynchronous invocation of any component that provides the business
service implementation.
• Enables standalone JMS listener
The POJO Service Activator can be run as a standalone listener without using
any container support. However, in a mission-critical application, Service
Activator needs to be monitored to ensure availability. The additional
management and maintenance of this process can add to application support
overhead. An MDB Service Activator might be a better alternative because it
will be managed and monitored by the application server.
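The POJO Service Activator strategy can be sketched as follows. A BlockingQueue stands in for the JMS destination (a simplifying assumption, not the JMS API); in an EJB container a message-driven bean's onMessage() would play the listener role:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical POJO Service Activator sketch: a listener thread receives
// messages and invokes the business service asynchronously.
class ServiceActivator {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> results = new LinkedBlockingQueue<>();

    ServiceActivator() {
        Thread listener = new Thread(() -> {
            try {
                while (true) {
                    String message = queue.take();        // blocks like onMessage
                    results.add("processed:" + message);  // invoke business service
                }
            } catch (InterruptedException e) {
                // shut down the listener
            }
        });
        listener.setDaemon(true);
        listener.start();
    }

    // The client returns immediately; processing happens asynchronously.
    void send(String message) {
        queue.add(message);
    }

    String awaitResult() throws InterruptedException {
        return results.poll(5, TimeUnit.SECONDS);
    }
}
```

The client's `send` call returns at once, which is exactly the asynchronous-invocation behavior the pattern provides for any business-tier component.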
20. Domain Store
• Creating a custom persistence framework is a complex task
Implementing Domain Store and all the features required for transparent
persistence is not a simple task due to the nature of the problem and due to
complex interactions between several participants and this pattern
framework. So, consider implementing your own transparent persistence
framework after exhausting all other options.
• Multi-layer object tree loading and storing requires optimization techniques
Business Object hierarchy and interrelations can be quite complex. When
persisting a Business Object and its dependents, you might only want to
persist the portions of the hierarchy that have been modified. Similarly,
when loading a Business Object hierarchy, you might want to provide
different levels of lazy loading schemes to load the most used part of the
hierarchy first and lazy load other parts when accessed.
• Improves understanding of persistence frameworks
If you are using a third party persistence framework, understanding Domain
Store will greatly improve your understanding of that framework. You can
compare and contrast how that framework implements transparent
persistence with what has been described in Domain Store.
• Improves testability of your persistent object model
Domain Store lets you separate the persistence logic from the persistent
business objects. This greatly improves testability of your application as you
can test your object model without actually enabling and performing
persistence. Since persistence is transparent, you can always enable it once
you finish testing your Business Object model and business logic.
• Separates business object model from persistence logic
Since Domain Store enables transparent persistence, your Business Objects
do not need to contain any persistence related code. This frees the developer
from dealing with the intricacies of implementing persistence logic for your
persistent Business Objects.
The drawback is:
• A full-blown persistence framework might be overkill for a small object model
For a simple object model with basic persistence needs, a persistence
framework using Domain Store may be overkill. In such cases, a basic
framework using Data Access Object might be entirely adequate.

21. Web Service Broker


• Existing remote Session Façades need to be refactored to support local access
The drawbacks are:
• Introduces a layer between client and service
• Network performance may be impacted due to web protocols


8. Security (30 pages)

1. Explain the client-side security model for the Java SE environment,
including the Web Start and applet deployment modes.

2. Given an architectural system specification, select appropriate locations
for implementation of specified security features, and select suitable
technologies for implementation of those features.

3. Identify and classify potential threats to a system and describe how a
given architecture will address the threats.

4. Describe the commonly used declarative and programmatic methods
used to secure applications built on the Java EE platform, for example
use of deployment descriptors and JAAS.

8.1 Explain the client-side security model for the Java SE environment,
including the Web Start and applet deployment modes.

Ref. • [CORE_SECURITY_PATTERNS] Chapter 3.
     • Java Web Start and Security
     • Java Web Start Security

In the first release of the Sun Java Platform, the Java Development Kit 1.0.x (JDK) introduced
the notion of a sandbox-based security model. This primarily supports downloading and
running Java applets securely and avoids any potential risks to the user's resources. With the
JDK 1.0 sandbox security model, all Java applications (excluding Java applets) executed locally
can have full access to the resources available to the JVM. Application code downloaded from
remote resources, such as Java applets, will have access ONLY to the restricted resources
provided within its sandbox. This sandbox security protects the Java applet user from potential
risks because the downloaded applet cannot access or alter the user's resources beyond the
sandbox.

The release of JDK 1.1.x introduced the notion of signed applets, which allowed downloading
and executing applets as trusted code after verifying the applet signer's information. To
facilitate signed applets, JDK 1.1.x added support for cryptographic algorithms that provide
digital signature capabilities. With this support, a Java applet class could be signed with digital
signatures in the Java archive format (JAR file). The JDK runtime will use the trusted public
keys to verify the signers of the downloaded applet and then treat it as a trusted local
application, granting access to its resources:

The release of Java 2 SE introduced a number of significant enhancements to JDK 1.1 and
added such features as security extensions providing cryptographic services, digital certificate
management, PKI management, and related tools. Some of the major changes in the Java 2
security architecture are as follows:
• Policy-driven restricted access control to JVM resources.
• Rules-based class loading and verification of byte code.
• System for signing code and assigning levels of capability.
• Policy-driven access to Java applets downloaded by a Web browser.
In the Java 2 SE security architecture, all code, regardless of whether it is run locally or
downloaded remotely, can be subjected to a security policy configured by a JVM user or
administrator. All code is configured to use a particular domain (equivalent to a sandbox) and
a security policy that dictates whether the code can be run on a particular domain or not.
• Protection Domains
All local Java applications run unrestricted as trusted applications by default, but
they can also be configured with access-control policies similar to what is defined in
applets and remote applications. This is done by configuring a ProtectionDomain,
which allows grouping of classes and instances and then associating them with a set
of permissions between the resources. Protection domains are generally categorized
as two domains: "system domain" and "application domain". All protected external
resources, such as the file systems, networks, and so forth, are accessible only via
system domains. The resources that are part of the single execution thread are
considered an application domain. So in reality, an application that requires access to
an external resource may have an application domain as well as a system domain.
While executing code, the Java runtime maintains a mapping from code to protection
domain as well as to its permissions.

• Permissions
Permissions determine whether access to a resource of the JVM is granted or denied.
They give specified resources or classes running in that instance of the JVM the
ability to permit or deny certain runtime operations. An applet or an application
using a security manager can obtain access to a system resource only if it has
permission. The Java Security API defines a hierarchy for Permission classes that can
be used to configure a security policy. At the root, java.security.Permission is
the abstract class, which represents access to a target resource; it can also include a
set of operations to construct access on a particular resource. The Permission class
contains several subclasses that represent access to different types of resources. The
subclasses belong to their own packages that represent the APIs for the particular
resource.
Some of the commonly used Permission classes are as follows:
○ For wildcard permissions: java.security.AllPermission
○ For named permissions: java.security.BasicPermission
○ For file system: java.io.FilePermission
○ For network: java.net.SocketPermission
○ For properties: java.lang.PropertyPermission
○ For runtime resources: java.lang.RuntimePermission

○ For authentication: java.security.NetPermission
○ For graphical resources: java.awt.AWTPermission
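
Permission checks compose through the implies() relationship: a granted
permission covers a requested one when its target and actions subsume it. The
following minimal, self-contained sketch (class name and file paths are
illustrative) shows the check the access controller evaluates at runtime:

```java
import java.io.FilePermission;
import java.security.Permission;

public class PermissionDemo {
    public static void main(String[] args) {
        // A policy-style grant covering a whole directory, for read and write
        Permission granted = new FilePermission("/tmp/*", "read,write");
        // The permission an actual file read would require
        Permission requested = new FilePermission("/tmp/data.txt", "read");
        // implies() is directional: the broader grant covers the narrower request
        System.out.println(granted.implies(requested));   // prints "true"
        System.out.println(requested.implies(granted));   // prints "false"
    }
}
```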

• Policy
The Java 2 security policy defines the protection domains for all running Java code
with access privileges and a set of permissions such as read and write access or
making a connection to a host. The policy for a Java application is represented by a
Policy object, which provides a way to declare permissions for granting access to its
required resources. In general, all JVMs have security mechanisms built in that allow
you to define permissions through a Java security policy file. A JVM makes use of a
policy-driven access-control mechanism by dynamically mapping a static set of
permissions defined in one or more policy configuration files. These entries are often
referred to as grant entries. A user or an administrator externally configures the
policy file for a J2SE runtime environment using an ASCII text file or a serialized
binary file representing a Policy class.
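
A policy file is a list of such grant entries. A sketch of one entry (the
codeBase path, file paths, and host are illustrative):

```text
// Illustrative policy file (e.g. app.policy)
grant codeBase "file:/opt/myapp/classes/-" {
    permission java.io.FilePermission "/tmp/*", "read,write";
    permission java.net.SocketPermission "example.com:443", "connect";
};
```

Such a file is typically passed to the JVM together with the security manager,
for example: java -Djava.security.manager -Djava.security.policy=app.policy MainClass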

• SecurityManager
Each Java application can have its own security manager that acts as its primary
security guard against malicious attacks. The security manager enforces the required
security policy of an application by performing runtime checks and authorizing
access, thereby protecting resources from malicious operations. Under the hood, it
uses the Java security policy file to decide which set of permissions are granted to
the classes. However, when untrusted classes and third-party applications use
the JVM, the Java security manager applies the security policy associated with the
JVM to identify malicious operations. In many cases, where the threat model does
not include malicious code being run in the JVM, the Java security manager is
unnecessary.
In cases where the SecurityManager detects a security policy violation, the JVM
will throw an AccessControlException or a SecurityException.
If you wish to have your applications use a SecurityManager and security policy,
start up the JVM with the -Djava.security.manager option; you can also
specify a security policy file with the -Djava.security.policy option as a
JVM argument. If you enable the Java Security Manager in your
application but do not specify a security policy file, then the Java Security Manager
uses the default security policies defined in the java.policy file in the
$JAVA_HOME/jre/lib/security directory.
• AccessController

The access controller mechanism performs a dynamic inspection and decides
whether the access to a particular resource can be allowed or denied. From a
programmer's standpoint, the Java access controller encapsulates the location, code
source, and permissions to perform the particular operation. In a typical process,
when a program executes an operation, it calls through the security manager, which
delegates the request to the access controller, and then finally it gets access or
denial to the resources.
• Bytecode verifier
The Java bytecode verifier is an integral part of the JVM that plays the important role
of verifying the code prior to execution. It ensures that the code was produced
consistent with specifications by a trustworthy compiler, confirms the format of the
class file, and proves that the series of Java byte codes are legal. With bytecode
verification, the code is proved to be internally consistent following many of the rules
and constraints defined by the Java language compiler. The bytecode verifier may
also detect inconsistencies related to certain cases of array bound-checking and
object-casting through runtime enforcement.
• Keystore and Keytool
The Java 2 platform provides a password-protected database facility for storing
trusted certificate entries and key entries. The keytool allows the users to create,
manage, and administer their own public/private key pairs and associated
certificates that are intended for use in authentication services and in representing
digital signatures.

Java Applet Security


A Java applet downloaded from the Web runs in either a Java-enabled Web browser or a Java
appletviewer, which is provided in the J2SE bundle. From a security standpoint, Java
applets downloaded from the Internet or from any remote sources are restricted from reading
and writing files and making network connections on client host systems. They are also
restricted from starting other programs, loading libraries, or making native calls on the client
host system. In general, applets downloaded from a network or remote sources are
considered untrusted. An applet can be considered trusted, based on the following factors:
• Applets installed on a local filesystem or executed on a localhost.
• Signed applets provide a way to verify that the applet is downloaded from a reliable
source and can be trusted to run with the permissions granted in the policy file.
In a Web browser, a Java plug-in provides a common framework and enables secure
deployment of applets in the browser using the JRE. While downloading an applet, the Java
plug-in enables the browser to install all the class files and then render the applet. A security
manager (SecurityManager implementation) will be AUTOMATICALLY installed during startup
whenever an applet starts running in a Java-enabled Web browser. No downloaded applets are
allowed to access resources in the client host unless they are explicitly granted permission
using an entry in a Java security policy file.
Signed Applets
The Java 2 platform introduced the notion of signed applets. Signing an applet ensures that an
applet's origin and its integrity are guaranteed by a certificate authority (CA) and that it can
be trusted to run with the permissions granted in the policy file. The J2SE bundle provides a
set of security tools that allows the end users and administrators to sign applets and
applications, and also to define local security policy. This is done by attaching a digital
signature to the applet that indicates who developed the applet and by specifying a local
security policy in a policy file mentioning the required access to local system resources.
The Java 2 platform requires an executable applet class to be packaged into a JAR file before it
is signed. The JAR file is signed using the private key of the applet creator. The signature is
verified using its public key by the client user of the JAR file. The public key certificate is sent

along with the JAR file to any client recipients who will use the applet. The client who receives
the certificate uses it to authenticate the signature on the JAR file. To sign the applet, we need
to obtain a certificate that is capable of code signing. For all production purposes, you must
always obtain a certificate from a CA such as VeriSign, Thawte, or some other CA.
What is Java Web Start?

Java Web Start is a technology for deploying applications -- it gives you the
power to launch full-featured applications with a single click from your Web browser. You
can download and launch applications, such as a program for drawing or sketching
chemical structures, without going through complicated installation procedures. With
Java Web Start, you launch applications simply by clicking on a Web page link. If the
application is not present on your computer, Java Web Start automatically downloads all
necessary files. It then caches the files on your computer so the application is always
ready to be relaunched anytime you want -- either from an icon on your desktop or from
the browser link. And no matter which method you use to launch the application, the
most current version of the application is always presented to you.

Java Web Start (JWS) is a full-fledged Java application that allows Java client
applications to be deployed, launched, and updated from a Web server. It provides a
mechanism for application distribution through a Web server and facilitates Java rich-
client access to applications over a network. The underlying technology of JWS is the
Java Network Launch protocol (JNLP), which provides a standard way for packaging
and provisioning the Java programs (as JAR files) and then launching Java programs
over a network. The JNLP-packaged applications are typically started from a Web
browser that launches the client-side JWS software, which downloads, caches, and then
executes the application locally. Once the application is downloaded, it does not need to
be downloaded again unless newer updates are made available in the server. These
updates are done automatically in an incremental fashion during the client application
startup. Applications launched using JWS are typically cached on the user's machine
and can also be run offline. Since the release of J2SE 1.4, JWS has been an integral
part of the J2SE bundle, and it does not require a separate download [JWS].

Java Web Start Security Basics

Applications launched with Java Web Start are, by default, run in a restricted
environment, known as a sandbox. In this sandbox, Java Web Start:

• Protects users against malicious code that could affect local files
• Protects enterprises against code that could attempt to access or destroy data on
networks
Unsigned JAR files launched by Java Web Start remain in this sandbox, meaning they
cannot access local files or the network.

Signing JAR Files for Java Web Start Deployment

Java Web Start supports signed JAR files so that your application can work outside of
the sandbox described above and access local files and the network.

Java Web Start verifies that the contents of the JAR file have not changed since it was
signed. If verification of a digital signature fails, Java Web Start does not run the
application.
When the user first runs an application as a signed JAR file, Java Web Start opens a
dialog box displaying the application's origin based on the signer's certificate. The user
can then make an informed decision regarding running the application.
For more information, see the Signing and Verifying JAR Files section.
Security and JNLP Files

For a signed JAR file to have access to the local file system and network, you must
specify security settings in the JNLP file. The security element contains security
settings for the application.

The following example provides the application with complete access to the client system
if all its JAR files are signed:
<security>
<all-permissions/>
</security>

Dynamic Downloading of HTTPS Certificates

Java Web Start dynamically imports certificates as browsers typically do. To do this,
Java Web Start sets its own https handler, using the java.protocol.handler.pkgs
system properties, to initialize defaults for the SSLSocketFactory and
HostnameVerifier. It sets the defaults with the methods
HttpsURLConnection.setDefaultSSLSocketFactory and
HttpsURLConnection.setDefaultHostnameVerifier.

If your application uses these two methods, ensure that they are invoked after Java
Web Start initializes the https handler; otherwise your custom handler will be replaced
by the Java Web Start default handler.
You can ensure that your own customized SSLSocketFactory and HostnameVerifier
are used by doing one of the following:
• Install your own https handler, to replace the Java Web Start https handler. For
more information, see the document A New Era for Java Protocol Handlers.
• In your application, invoke
HttpsURLConnection.setDefaultSSLSocketFactory or
HttpsURLConnection.setDefaultHostnameVerifier only after the first https
URL object is created, which executes the Java Web Start https handler
initialization code first.
JWS Security Model

Typical to a stand-alone Java application, JWS applications run outside a Web browser using the
sandbox features of the underlying Java platform. JWS also allows defining security attributes for

client-side Java applications and their access to local resources, such as file system access, making
network connections, and so on. These security attributes are specified using XML tags in the JNLP
descriptor file. The JNLP descriptor defines the application access privileges to the local and network
resources. In addition, JWS allows the use of digital signatures for signing JAR files in order to verify
the application origin and its integrity so that it can be trusted before it is downloaded to a client
machine. The certificate used to sign the JAR files is verified using the trusted certificates in the client
keystore. This helps users avoid starting malicious applications and inadvertent downloads without
knowing the originating source of the application.
When downloading signed JARs, JWS displays a dialog box that mentions the source of the
application and the signer's information before the application is executed. This allows users to make
decisions regarding whether to grant additional privileges to the application or not. When downloading
unsigned applications (unsigned JARs) that require access to local resources, JWS throws a "Security
Advisory" dialog box notifying the user that an application requires access to the local resources and
prompts the user with a question "Do you want to allow this action?" JWS will allow the user to grant
the client application access to the local resources by clicking the "Yes" button in the Security Advisory
dialog box.

Signing a JWS application is quite similar to the steps involved in signing an applet, as we saw in the
previous section. To sign a JWS application for production, you must obtain a certificate from a
certificate authority such as VeriSign and Thawte. For testing purposes, you may choose to use the
key management tools provided with the J2SE bundle.
JNLP Settings for Security

To deploy a JWS application, in addition to JAR files, adding a .jnlp file is required. The JNLP file is an
XML-based document that describes the application classes (JAR files), their location in a Web server,
JRE version, and how to launch in the client environment. The client user downloads the JNLP file
from the server, which automatically launches the JWS application on the client side. The JNLP file
uses XML elements to describe a JWS application. The root element is tagged as <jnlp>, which
contains the four core sub-elements: information, security, resources, and application-desc.
To enforce security, the <security> element is used to specify the required permissions. The security
element provides two permission options: <all-permissions/> to provide an application with full access
to the client's local computing resources, and <j2ee-application-client-permissions/> to provide a
selected set of permissions that includes socket permissions, clipboard access permission, printing
permission, and so forth. Example 3-19 is a JNLP file that shows putting all the elements including a
<security> element setting with all permissions.
Example 3-19. JNLP file showing <security> elements

<?xml version="1.0" encoding="UTF-8"?>


<jnlp spec="1.0+" codebase="file:///c:/testarea/jnlp/">
<information>
<title>My Signed Jar</title>
<vendor>Core Security Patterns</vendor>
<homepage href="http://www.sec-patterns.com/signed" />
<description>Java Web start example</description>
</information>
<offline-allowed/>
<security>
<all-permissions/>
</security>
<resources>
<j2se version="1.2+" />
<jar href="SignedClientApp.jar"/>
</resources>
<application-desc main-class="SignedClientApp" />
</jnlp>

8.2 Given an architectural system specification, select appropriate locations
for implementation of specified security features, and select suitable
technologies for implementation of those features.

Ref. • [CORE_SECURITY_PATTERNS] Chapter 4.

The Java platform facilitates an extensible security architectural model via standards-based
security API technologies that provide platform independence and allow interoperability among
vendor implementations. These API technologies add a variety of security features to the core
Java platform by integrating technologies to support cryptography, certificate management,
authentication and authorization, secure communication, and other custom security
mechanisms.

The architecture in the diagram above shows the location of 6 security features discussed in
the following pages.
1) Java Cryptography Architecture (JCA)
JCA provides basic cryptographic services and algorithms, which include support for
digital signatures and message digests.
In J2SE, the JCA provides the Java platform with cryptographic services and
algorithms to secure messages. JCA defines a notion of provider implementation and
a generic API framework for accessing cryptographic services and implementing
related functionalities. JCA is also designed to provide algorithm and implementation
independence via a standardized API framework and provider implementation.

JCA provides support for various cryptographic algorithms by defining the types and
functionalities of cryptographic services. The cryptographic services include support
for message digests and digital signatures. JCA also ensures interoperability among
the provider implementations using a standardized set of APIs, which implements
those required cryptographic algorithms and services. For example, using the same
algorithms, a key generated by one provider can be usable by another provider;
likewise, a digital signature generated by one provider can be verified using another
provider.
As part of the J2SE bundle, the JCA framework includes a default provider
implementation named SUN, which provides the following features:
○ Implementation of Digital Signature Algorithm (DSA) and Message Digest
Algorithms (MD5 and SHA1)
○ DSA key pair generator for generating public and private keys based on DSA
○ DSA algorithm parameter generator and manager
○ DSA key factory to provide conversions between public and private keys
○ Certificate path builder and validator for X.509 certificates

○ Certificate factory for X.509 certificates and revocation lists
○ Keystore implementation named JKS, which allows managing a repository of
keys and certificates

Supported cryptographic operations:


○ Computing a Message Digest Object
Message digest is a one-way secure hash function. Its computed values are
referred to as message digests or hash values and act as fingerprints of
messages. The message digest values are computationally impossible to
reverse and thus protect the original message from being derived.
As a cryptographic technique, message digests are applied for preserving the
secrecy of messages, files, and objects. In conjunction with digital signatures,
message digests are used to support integrity, authentication, and non-repudiation
of messages during transmission or storage. Message digest
functions are publicly available and use no keys. In J2SE, the JCA provider
supports two message digest algorithms: Message Digest 5 (MD5) and
secure hash algorithm (SHA-1). MD5 produces a 128-bit (16-byte) hash and
SHA-1 produces a 160-bit message digest value.
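
These digest lengths can be observed directly through the JCA MessageDigest
engine class; a minimal sketch (the input string is arbitrary):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestDemo {
    public static void main(String[] args) throws Exception {
        byte[] message = "hello".getBytes(StandardCharsets.UTF_8);
        // SHA-1 yields a 160-bit (20-byte) digest; MD5 a 128-bit (16-byte) one
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        System.out.println(sha1.digest(message).length);  // prints 20
        System.out.println(md5.digest(message).length);   // prints 16
    }
}
```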
○ Key Pair Generation
Generating key pairs and securely distributing them is one of the major
challenges in implementing cryptographic security. JCA provides the ability to
generate key pairs using digital signature algorithms such as DSA, RSA, and
Diffie-Hellman. JCA also supports using random number algorithms to add a
high degree of randomness, which makes it computationally difficult to
predict and determine the generated values.
○ Digital Signature Generation
A digital signature is computed using public-key cryptographic techniques.
The sender signs a message using a private key and the receiver verifies
the signature using the public key. This allows the receiver to verify the
source or signer of the message and guarantee its integrity and authenticity.
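
Key pair generation and signing come together in the KeyPairGenerator and
Signature engine classes. A minimal sketch using the default SUN provider's
DSA implementation (class name and message are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Generate a 1024-bit DSA key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(1024);
        KeyPair kp = kpg.generateKeyPair();

        byte[] message = "important message".getBytes(StandardCharsets.UTF_8);

        // Sender: sign with the private key
        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(kp.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Receiver: verify with the matching public key
        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(message);
        System.out.println(verifier.verify(signature));   // prints "true"
    }
}
```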
2) Java Cryptographic Extension (JCE)
JCE augments JCA functionalities with added cryptographic services that are
subjected to U.S. export control regulations and includes support for encryption and
decryption operations, secret key generation and agreement, and message
authentication code (MAC) algorithms.
JCE was originally developed as an extension package to include APIs and
implementations for cryptographic services that were subject to U.S. export control
regulations. JCE provides a provider implementation and related set of API packages
to provide support for encryption and decryption, secret key generation, and
agreement and message authentication code (MAC) algorithms. The encryption and
decryption support includes symmetric, asymmetric, block, and stream ciphers. JCE
also provides support for secure streams and sealed objects.
JCE facilitates the Java platform with cryptographic services and algorithms by
providing implementations and interfaces for the following:
○ Cryptographic ciphers used for encryption and decryption
○ Password-based encryption
○ Secret key generation used for symmetric algorithms
○ Creation of sealed objects that are serialized and encrypted
○ Key agreement for encrypted communication among multiple parties

Page
○ MAC algorithms to validate information transmitted between parties 252
○ Support for PKCS#11 (RSA Cryptographic Token Interface Standard), which
allows devices to store cryptographic information and perform cryptographic
services. This feature is available in J2SE 5.0 and later versions.

Because JCE's design is based on the architectural principles of JCA, like JCA it allows
for integration of Cryptographic Service Providers, which implements the JCE-defined
cryptographic services from a vendor. JCE also facilitates a pluggable framework
architecture that allows qualified JCE providers to be plugged in. As part of the J2SE
bundle, the JCE framework provides a default provider implementation named
SunJCE, which provides the following cryptographic services and algorithms:
○ Implementation of Ciphers and Encryption algorithms such as DES (FIPS PUB
46-1), Triple DES, and Blowfish
○ Modes include Electronic Code Book (ECB), Cipher Block Chaining (CBC),
Cipher Feedback (CFB), Output Feedback (OFB), and Propagating Cipher
Block Chaining (PCBC)
○ Implementation of MAC algorithms such as HMAC-MD5 and HMAC-SHA1
algorithms
○ Key generators for DES, Triple DES, Blowfish, HMAC-MD5, and HMAC-SHA1
algorithms
○ Implementation of the MD5 with DES-CBC password-based encryption (PBE)
algorithm
○ Implementation of key agreement protocols based on Diffie-Hellman
○ Implementation of Padding scheme as per PKCS#5
○ Algorithm parameter managers for Diffie-Hellman, DES, Triple DES, Blowfish,
and PBE
○ Support for Advanced Encryption Standard (AES)
○ A keystore implementation named JCEKS

Commonly applied JCE cryptographic operations:


○ Encryption and Decryption

Encryption is a cryptographic technique for scrambling a message or files or
programs by changing each character string, byte, or bit to another using a
mathematical algorithm. A message that is not encrypted is referred to as
plaintext or cleartext, and an encrypted message is called ciphertext.
Decryption is the reverse process of encryption, which converts the
ciphertext back into plaintext. This process generally requires a
cryptographic key or code.
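
With JCE, an encrypt/decrypt round trip looks like the following sketch (AES
in ECB mode purely for brevity; real applications should prefer a mode with an
IV, such as CBC, and the class name and plaintext are illustrative):

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CipherDemo {
    public static void main(String[] args) throws Exception {
        // Generate a 128-bit AES secret key
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Encrypt: plaintext -> ciphertext
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes(StandardCharsets.UTF_8));

        // Decrypt: ciphertext -> plaintext, using the same secret key
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8)); // prints "attack at dawn"
    }
}
```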
○ Using Block Ciphers
A block cipher is a symmetric-key encryption algorithm that encrypts and
decrypts a fixed-length block of data (usually 64 bits long) into a block of
ciphertext of the same length. To implement block ciphers, the data to be
encrypted must be a multiple of the block size. To fill the remainder of the
last block and reach the required block size, block ciphers make use of
padding.
○ Using Stream Ciphers
Stream ciphers are composed of I/O streams and ciphers.
○ Sealed Object
JCE introduced the notion of creating sealed objects. A sealed object
encrypts a serializable object using a cipher. Sealed objects provide
confidentiality and help prevent unauthorized viewing of the contents of the
object by restricting de-serialization.
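
A sketch of sealing and unsealing a serializable object (the class name and
the sealed String are illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SealedObject;
import javax.crypto.SecretKey;

public class SealedDemo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Seal: the String is serialized, then encrypted with the cipher
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        SealedObject sealed = new SealedObject("confidential data", enc);

        // Unseal: decrypt, then deserialize; fails without the right key
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, key);
        String recovered = (String) sealed.getObject(dec);
        System.out.println(recovered);   // prints "confidential data"
    }
}
```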
○ Password-Based Encryption (PBE)
Password-Based Encryption (PBE) is a technique that derives an encryption
key from a password, which helps in combating dictionary attacks by hackers
and other related vulnerabilities. To use PBE, we have to use a salt (a very
large random number also referred to as seed) and an iteration count.
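
A sketch of PBE using the SunJCE PBEWithMD5AndDES algorithm (the password,
salt, and iteration count are illustrative; real code should use a random salt
and a much higher iteration count):

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.PBEParameterSpec;

public class PbeDemo {
    public static void main(String[] args) throws Exception {
        // Derive a secret key from a password
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES");
        SecretKey key = factory.generateSecret(new PBEKeySpec("s3cret".toCharArray()));

        // The salt and iteration count make dictionary attacks more expensive
        byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8};   // must be 8 bytes for this algorithm
        PBEParameterSpec spec = new PBEParameterSpec(salt, 1000);

        Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
        cipher.init(Cipher.ENCRYPT_MODE, key, spec);
        byte[] ciphertext = cipher.doFinal("some data".getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key, spec);
        System.out.println(new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8)); // prints "some data"
    }
}
```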
○ Advanced Encryption Standard (AES)
AES is a new cryptographic algorithm that can be used to protect electronic
data. More specifically, AES is a symmetric-key block cipher that can use
keys of 128, 192, and 256 bits, and encrypts and decrypts data in blocks of
128 bits (16 bytes).
○ Computing Message Authentication Code (MAC) objects
Message Authentication Code (MAC) is generally used for checking the
integrity and validity of the information based on a secret key. MAC uses a
secret key to generate the hash code for a specific sequence of bytes.
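
The Mac engine class wraps this up; a minimal sketch (class name and message
are illustrative):

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class MacDemo {
    public static void main(String[] args) throws Exception {
        // Both parties must share this secret key for the tag to verify
        SecretKey key = KeyGenerator.getInstance("HmacSHA1").generateKey();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(key);
        byte[] tag = mac.doFinal("message to protect".getBytes(StandardCharsets.UTF_8));
        // HMAC-SHA1 produces a 160-bit (20-byte) authentication code
        System.out.println(tag.length);   // prints 20
    }
}
```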
○ Using Key Agreement Protocols
A key agreement protocol is a process that allows carrying out an encrypted

communication between two or more parties by securely exchanging a secret
key over a network. The Diffie-Hellman (DH) key agreement protocol allows
two users to exchange a secret key over an insecure medium without any
prior secrets. JCE provides support for the Diffie-Hellman key agreement
protocol.
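The DH exchange described above can be sketched with JCE's KeyAgreement class (the helper names are our own; in practice the public keys would travel over the network):

```java
import javax.crypto.KeyAgreement;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.util.Arrays;

public class DhDemo {
    // Each party combines its own private key with the peer's public key;
    // both computations yield the same shared secret
    static byte[] sharedSecret(KeyPair own, PublicKey peer) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("DH");
        ka.init(own.getPrivate());
        ka.doPhase(peer, true);
        return ka.generateSecret();
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
        kpg.initialize(2048); // the JDK supplies well-known default parameters
        KeyPair alice = kpg.generateKeyPair();
        KeyPair bob = kpg.generateKeyPair();
        // Both sides derive the same secret without ever transmitting it
        System.out.println(Arrays.equals(
                sharedSecret(alice, bob.getPublic()),
                sharedSecret(bob, alice.getPublic())));
    }
}
```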
3) Java Certification Path API (CertPath)
CertPath provides the functionality of checking, verifying, and validating the
authenticity of certificate chains.
CertPath provides a full-fledged API framework for application developers who wish
to integrate the functionality of checking, verifying, and validating digital certificates
into their applications.
Digital certificates play the role of establishing trust and credentials when conducting
business or other transactions. Issued by a Certification Authority (CA), a digital
certificate defines a binding data structure containing the holder name, a serial
number, expiration dates, a public key, and the digital signature of the CA so that a
recipient can verify the authenticity of the certificate. CAs usually obtain their
certificates from their own higher-level authority. Typically, a chain of certificates
(referred to as a certification chain) starts with the certificate holder's
certificate, is followed by zero or more certificates of intermediate CAs, and ends
with the root certificate of some top-level CA. So the process of reading, verifying, and
validating certificate chains becomes important in PKI certificate-enabled applications
and systems.
Java CertPath API Programming Model:
○ Create a Certificate Chain Using CertPath
○ Validate a Certificate Chain Using CertPath
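A sketch of these two steps (names are illustrative; a real application would load the chain and the trust store from actual certificate files rather than build them in code):

```java
import java.security.KeyStore;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.X509Certificate;
import java.util.List;

public class CertPathDemo {
    // Build a CertPath from an already-loaded chain of X.509 certificates
    static CertPath buildPath(List<X509Certificate> chain) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        return cf.generateCertPath(chain);
    }

    // Validate the chain against trust anchors held in a keystore;
    // validate() throws if the chain does not lead to a trusted root
    static void validate(CertPath path, KeyStore trustStore) throws Exception {
        PKIXParameters params = new PKIXParameters(trustStore);
        params.setRevocationEnabled(false); // sketch: skip CRL/OCSP checking
        CertPathValidator.getInstance("PKIX").validate(path, params);
    }
}
```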

4) Java Secure Socket Extension (JSSE)


JSSE facilitates secure communication by protecting the integrity and confidentiality
of data exchanged using SSL/TLS protocols.
Protecting the integrity and confidentiality of data exchanged in network
communications is one of the key security challenges of network security. During
communication, the potential vulnerability is that the data exchanged can be
accessed or modified by someone with a malicious intent or who is not an intended
client recipient. Secure Socket Layer (SSL) and Transport Layer Security (TLS) are
application-independent protocols developed by IETF that provide critical security
features for end-to-end application communication by protecting the privacy and
integrity of exchanged data. They establish authenticity, trust, and reliability
between the communicating partners. SSL/TLS operates on top of the TCP/IP stack,
which secures communication through features like data encryption, server
authentication, message integrity, and optional client authentication. For data
encryption, SSL uses both public-key and secret-key cryptography. It uses secret-
key cryptography to bulk-encrypt the data exchanged between two applications.
JSSE enables end-to-end communication security for client/server-based network
communications by providing a standardized API framework and mechanisms for
client-server communications. JSSE provides support for SSL and TLS protocols and
includes functionalities related to data encryption, message integrity, and peer
authentication.

With JSSE, it is possible to develop client and server applications that use secure
transport protocols, which include:
○ Secure HTTP (HTTP over SSL)
○ Secure Telnet (Telnet over SSL)
○ Secure SMTP (SMTP over SSL)
○ IPSEC (Secure IP)
○ Secure RMI or RMI/IIOP (RMI over SSL)
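A minimal JSSE sketch: obtaining a TLS-capable socket factory from an SSLContext (default key managers, trust managers, and random source are assumed):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

public class JsseDemo {
    // Sockets created by this factory perform the SSL/TLS handshake,
    // encryption, and integrity protection transparently
    static SSLSocketFactory tlsSocketFactory() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null); // default key/trust managers and RNG
        return ctx.getSocketFactory();
    }
}
```

A client would then call `tlsSocketFactory().createSocket(host, 443)` and use the returned socket exactly like a plain socket.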

5) Java Authentication and Authorization Service (JAAS)
JAAS provides the mechanisms to verify the identity of a user or a device to
determine its accuracy and trustworthiness and then provide access rights and
privileges depending on the requesting identity. It facilitates the adoption of
pluggable authentication mechanisms and user-based authorization.
Authentication is the process of verifying the identity of a user or a device to
determine its accuracy and trustworthiness. Authorization provides access rights and
privileges depending on the requesting identity's granted permissions to access a
resource or execute a required functionality.
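A minimal JAAS sketch. In a real application a LoginContext performs the pluggable authentication and returns the populated Subject; here a Subject is constructed directly for illustration only:

```java
import javax.security.auth.Subject;
import java.security.Principal;
import java.security.PrivilegedAction;

public class JaasDemo {
    // After login, a Subject holds the authenticated identity's principals;
    // Subject.doAs runs code under that identity's access-control context
    static String runAs(Subject subject, String taskName) {
        return Subject.doAs(subject,
                (PrivilegedAction<String>) () -> taskName + " executed");
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.getPrincipals().add((Principal) () -> "alice");
        System.out.println(runAs(subject, "report"));
    }
}
```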
6) Java Generic Security Services (JGSS)
JGSS provides functionalities to develop applications using a unified API to support a
variety of authentication mechanisms such as Kerberos based authentication and
also facilitates single sign-on.
The Generic Security Services API (GSS-API) is a standardized API developed by the
Internet Engineering Task Force (IETF) to provide a generic authentication and
secure messaging interface that supports a variety of pluggable security
mechanisms. The GSS-API is also designed to insulate its users from the underlying
security mechanisms by allowing the development of application authentication using
a generic interface.
Sun introduced the Java GSS-API (JGSS) as an optional security package for J2SE
1.4 that provides the Java bindings for the GSS-API. This allows development of
applications that enable uniform access to security services over a variety of
underlying authentication mechanisms, including Kerberos.

Explanations of Cryptographic Algorithms


• One-Way Hash Function Algorithms
One-way hash functions are algorithms that take as input a message (any string of
bytes, such as a text string, a Word document, a JPG file) and generate as output a
fixed-size number referred to as the "hash value" or "message digest." The size of
the hash value depends on the algorithm used, but it is usually between 128 and 256
bits.
The purpose of a one-way hash function is to create a short digest that can be used
to verify the integrity of a message. In communication protocols such as TCP/IP,
message integrity is often verified using a checksum or CRC (cyclic-redundancy
check). The sender of the message calculates the checksum of the message and
sends it along with the message, and the receiver recalculates the checksum and
compares it to the checksum that was sent. If they do not match, the receiver
assumes the message was corrupted during transit and requests that the sender
resend the message. These methods are fine when the expected cause of the
corruption is due to electronic glitches or some other natural phenomena, but if the
expected cause is an intelligent adversary with malicious intent, something stronger
is needed. That is where cryptographically strong one-way hash functions come in.
A cryptographically strong one-way hash function is designed in such a way that it is
computationally infeasible to find two messages that compute to the same hash
value. With a checksum, a modestly intelligent adversary can fairly easily alter the
message so that the checksum calculates to the same value as the original
message's checksum. Doing the same with a CRC is not much more difficult. But a
cryptographically strong one-way hash function makes this task all but impossible.
Two examples of cryptographically strong one-way hash algorithms are MD5 and
SHA-1. MD5 was created by Ron Rivest (of RSA fame) in 1992 [RFC1321] and
produces a 128-bit hash value. SHA-1 was created by the National Institute of
Standards and Technology (NIST) in 1995 [FIPS180-1] and produces a 160-bit hash
value. SHA-1 is slower to compute than MD5 but is considered stronger because it
creates a larger hash value.
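Java's MessageDigest class implements these algorithms; a minimal sketch:

```java
import java.security.MessageDigest;

public class DigestDemo {
    // Compute a fixed-size digest of an arbitrary message:
    // 16 bytes (128 bits) for MD5, 20 bytes (160 bits) for SHA-1
    static byte[] digest(String algorithm, byte[] message) throws Exception {
        return MessageDigest.getInstance(algorithm).digest(message);
    }
}
```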
• Symmetric Ciphers
Symmetric ciphers are mechanisms that transform text in order to conceal its
meaning. Symmetric ciphers provide two functions: message encryption and
message decryption. They are referred to as symmetric because both the sender and
the receiver must share the SAME key to encrypt and then decrypt the data. The
encryption function takes as input a message and a key value. It then generates as
output a seemingly random sequence of bytes roughly the same length as the input
message. The decryption function is just as important as the encryption function.
The decryption function takes as input the same seemingly random sequence of
bytes output by the first function and the same key value, and generates as output
the original message. The term "symmetric" refers to the fact that the same key
value used to encrypt the message must be used to successfully decrypt it.
The purpose of a symmetric cipher is to provide message confidentiality. For
example, if Alice needs to send Bob a confidential document, she could use e-mail;
however, e-mail messages have about the same privacy as a postcard. To prevent
the message from being disclosed to parties unknown, Alice can encrypt the
message using a symmetric cipher and an appropriate key value and e-mail that.
Anyone looking at the message en route to Bob will see the aforementioned
seemingly random sequence of bytes instead of the confidential document. When
Bob receives the encrypted message, he feeds it and the same key value used by
Alice into the decrypt function of the same symmetric cipher used by Alice, which will
produce the original message (the confidential document).

Some examples of symmetric ciphers include DES, IDEA, AES (Rijndael), Twofish,
Blowfish and RC2.
• Asymmetric Ciphers
Asymmetric ciphers provide the same two functions as symmetric ciphers: message
encryption and message decryption. There are two major differences, however. First,
the key value used in message decryption is different than the key value used for
message encryption. Second, asymmetric ciphers are thousands of times slower than
symmetric key ciphers. But asymmetric ciphers offer a phenomenal advantage in
secure communications over symmetric ciphers.
The major advantage of the asymmetric cipher is that it uses TWO key values
instead of one: one for message encryption and one for message decryption. The
two keys are created during the same process and are known as a key pair. The one
for message encryption is known as the public key; the one for message decryption
is known as the private key. Messages encrypted with the public key can only be
decrypted with its associated private key. The private key is kept secret by the
owner and shared with no one. The public key, on the other hand, may be given out
over an unsecured communication channel or published in a directory.
Using the earlier example of Alice needing to send Bob a confidential document via
e-mail, we can show how the exchange works with an asymmetric cipher. First, Bob
e-mails Alice his public key. Alice then encrypts the document with Bob's public key,
and sends the encrypted message via e-mail to Bob. Because any message
encrypted with Bob's public key can only be decrypted with Bob's private key, the
message is secure from prying eyes, even if those prying eyes know Bob's public
key. When Bob receives the encrypted message, he decrypts it using his private key
and recovers the original document.

If Bob needs to send some edits on the document back to Alice, he can do so by
having Alice send him her public key; he then encrypts the edited document using
Alice's public key and e-mails the secured document back to Alice. Again, the
message is secure from eavesdroppers, because only Alice's private key can decrypt
the message, and only Alice has her private key.
Note the very important difference between using an asymmetric cipher and a
symmetric cipher: No separate, secure channel is needed for Alice and Bob to
exchange a key value to be used to secure the message. This solves the major
problem of key management with symmetric ciphers: getting the key value
communicated to the other party. With asymmetric ciphers, the key value used to
send someone a message is published for all to see. This also solves another
symmetric key management headache: having to exchange a key value with each
party with whom one wishes to communicate. Anyone who wants to send a secure
message to Alice uses Alice's public key.
Some examples of asymmetric ciphers are RSA, Elgamal, and ECC (elliptic-curve
cryptography).
Recall that one of the differences between asymmetric and symmetric ciphers is that
asymmetric ciphers are much slower, up to thousands of times slower. This
issue is resolved in practice by using the asymmetric cipher to communicate an
ephemeral symmetric key value and then using a symmetric cipher and the
ephemeral key to encrypt the actual message.
ephemeral (meaning to last for a brief time) because it is only used once, for that
exchange. It is not persisted or reused, the way traditional symmetric key
mechanisms require. Going back to the earlier example of Alice e-mailing a
confidential document to Bob, Alice would first create an ephemeral key value to
encrypt the document with a symmetric cipher. Then she would create another
message, encrypting the ephemeral key value with Bob's public key, and then send
both messages to Bob. Upon receipt, Bob would first decrypt the ephemeral key
value with his private key and then decrypt the secured document with the
ephemeral key value (using the symmetric cipher) to recover the original document.
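The hybrid scheme above can be sketched with JCE's wrap/unwrap modes (a sketch only: the default ECB cipher modes are used for brevity and are not recommended in practice):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.KeyPair;

public class HybridDemo {
    // Sender (Alice): encrypt the message with an ephemeral AES key, then
    // wrap (encrypt) that key with the recipient's RSA public key
    static byte[][] sealFor(KeyPair recipient, byte[] message) throws Exception {
        SecretKey ephemeral = KeyGenerator.getInstance("AES").generateKey();
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, ephemeral);
        byte[] ciphertext = aes.doFinal(message);

        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, recipient.getPublic());
        byte[] wrappedKey = rsa.wrap(ephemeral);
        return new byte[][] { wrappedKey, ciphertext };
    }

    // Recipient (Bob): unwrap the ephemeral key with the RSA private key,
    // then decrypt the message with it
    static byte[] open(KeyPair recipient, byte[][] sealed) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.UNWRAP_MODE, recipient.getPrivate());
        SecretKey ephemeral =
                (SecretKey) rsa.unwrap(sealed[0], "AES", Cipher.SECRET_KEY);

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.DECRYPT_MODE, ephemeral);
        return aes.doFinal(sealed[1]);
    }
}
```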

• Digital Signature
Digital signatures are used to guarantee the integrity and origin of a message
sent to a recipient by binding it to the identity of the message sender. This is
done by signing the message with a digital signature, which is a unique by-product
of asymmetric ciphers. Although the public key of an asymmetric cipher generally performs
ciphers. Although the public key of an asymmetric cipher generally performs
message encryption and the private key generally performs message decryption, the
reverse is also possible. The private key can be used to encrypt a message, which
would require the public key to decrypt it. So, Alice could encrypt a message using
her private key, and that message could be decrypted by anyone with access to
Alice's public key. Obviously, this behavior does not secure the message; by
definition, anyone has access to Alice's public key (it could be posted in a directory)
so anyone can decrypt it. However, Alice's private key, by definition, is known to no
one but Alice; therefore, a message that is decrypted with Alice's public key could
not have come from anyone but Alice. This is the idea behind digital signatures.
The solution is to perform a one-way hash function on the message, and encrypt the
hash value with the private key. For example, Alice wants to confirm a contract with
Bob. Alice can sign the contract's dotted line with "I agree," then perform an MD5
hash on the document, encrypt the MD5 hash value with her private key, and send
the document with the encrypted hash value (the digital signature) to Bob.


Bob can verify that Alice has agreed to the documents by checking the digital
signature; he also performs an MD5 hash on the document, and then he decrypts the
digital signature with Alice's public key. If the MD5 hash value computed from the
document contents equals the decrypted digital signature, then Bob has verified that
it was Alice who digitally signed the document.

Moreover, Alice cannot say that she never signed the document; she cannot refute
the signature, because only she holds the private key that could have produced the
digital signature. This ensures non-repudiation.
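Java's Signature class performs exactly this hash-then-encrypt sequence; a sketch using SHA-256 with RSA (rather than MD5, which is no longer considered secure for signatures):

```java
import java.security.KeyPair;
import java.security.Signature;

public class SignatureDemo {
    // Sign: hash the message with SHA-256, then encrypt the hash with
    // the signer's private key
    static byte[] sign(KeyPair signer, byte[] message) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(signer.getPrivate());
        sig.update(message);
        return sig.sign();
    }

    // Verify: recompute the hash and check it against the signature
    // decrypted with the signer's public key
    static boolean verify(KeyPair signer, byte[] message, byte[] signature)
            throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(signer.getPublic());
        sig.update(message);
        return sig.verify(signature);
    }
}
```

Any change to the signed message makes verification fail, which is how the recipient detects tampering.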
• Digital Certificates
A digital certificate is a document that uniquely identifies information about a party.
It contains a party's public key plus other identification information that is digitally
signed and issued by a trusted third party, also referred to as a Certificate Authority
(CA). A digital certificate is also known as an X.509 certificate and is commonly used
to solve problems associated with key management.
As explained earlier, the advent of asymmetric ciphers has greatly reduced the
problem of key management. Instead of requiring that each party exchange a
different key value with every other party with whom they wish to communicate over
separate, secure communication channels, one simply exchanges public keys with
the other parties or posts public keys in a directory.
However, another problem arises: How is one sure that the public key really belongs
to Alice?
For example, assume Charlie is a third party that both Alice and Bob trust. Alice
sends Charlie her public key, plus other identifying information such as her name,
address, and Web site URL. Charlie verifies Alice's public key, perhaps by calling her
on the phone and having her recite her public key fingerprint. Then Charlie creates a
document that includes Alice's public key and identification, and digitally signs it
using his private key, and sends it back to Alice. This signed document is the digital
certificate of Alice's public key and identification, vouched (i.e. confirmed) for by
Charlie.
Now, when Bob goes to Alice's Web site and wants to securely send his credit card
number, Alice sends Bob her digital certificate. Bob verifies Charlie's signature on the
certificate using Charlie's public key (assume Bob has already verified Charlie's
public key), and if the signature is good, Bob can be assured that, according to
Charlie, the public key within the certificate is associated with the identification
within the certificate, namely Alice's name, address, and Web site URL. Bob can
encrypt his credit card number using the public key with confidence that only Alice
can decrypt it.

8.3 Identify and classify potential threats to a system and describe how a
given architecture will address the threats.

Ref. • [CORE_SECURITY_PATTERNS] Chapter 1.

• Input Validation Failures


Validating the input parameters before accepting the request and resuming the
process is critical to application security. It is also a good practice to validate all
inputs from both trusted and untrusted sources. This practice will help in avoiding
application-level failures and attacks from both intentional hackers and unintentional
abusers. Input validation is a mechanism for validating data such as data type
(string, integer), format, length, range, null-value handling, verifying for character-
set, locale, patterns, context, legal values and validity, and so on. For example, if a
form-based Web application fails to encode square brackets ("[" and "]"), a remote
user can create a specially crafted URL that will cause the target user's browser to
execute some arbitrary scripting code when the URL is loaded. This can cause a
malicious code injection attack, depending on the impact of the scripting code
executed. If an application relies on client-side data validation, any flaw may be
exploited by a hacker. It is always a good practice to re-verify and validate input,
even after client-side validation. From a security perspective, it is very important
that all input data are validated prior to application processing.
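A server-side whitelist check can be as simple as a regular expression (the username policy below is a hypothetical example, not a prescribed rule):

```java
import java.util.regex.Pattern;

public class InputValidator {
    // Whitelist validation: accept only the characters and length we
    // expect, and reject everything else (including null)
    private static final Pattern USERNAME =
            Pattern.compile("^[A-Za-z0-9_]{3,16}$");

    static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}
```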
• Output Sanitation
Re-displaying or echoing the data values entered by users is a potential security
threat because it provides a hacker with a means to match the given input and its
output. This provides a way to insert malicious data inputs. With Web pages, if the
page generated by a user's request is not properly sanitized (i.e. verified and
cleaned) before it is displayed, a hacker may be able to identify a weakness in the
generated output. Then the hacker can design malicious HTML tags to create pop-up
banners; at the worst, hackers may be able to change the content originally
displayed by the site. To prevent these issues from arising, the generated output
must be verified for all known values. Any unknown values not intended for display
must be eliminated. All comments and identifiers in the output response must also
be removed.
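A minimal sketch of output sanitation: HTML-encoding user-supplied values before echoing them back, so the browser displays them as text rather than interpreting them as markup (this covers only the most significant characters):

```java
public class OutputSanitizer {
    // Encode characters that are significant in HTML so user-supplied
    // data cannot inject tags or attributes into the generated page
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```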

• Buffer Overflow
When an application or process tries to store more data in a data storage or memory
buffer than its fixed length or its capacity can handle, the extra information is likely
to go somewhere in adjacent buffers. This event causes corruption or overwriting in
the buffer that holds the valid data and can abruptly end the process, causing the
application to crash. To design this kind of attack, a hacker passes malicious input by
tampering or manipulating the input parameters to force an application buffer
overflow. Such an act usually leads to denial-of-service attacks. Buffer overflow
attacks are typically carried out using application weaknesses related to input
validation, output sanitization, and data injection flaws.
• Data Injection Flaw
Security intruders can piggyback user data or inject malicious code together with
user data while exploiting a weakness in the user data input environment. Data
injection flaws are often found in browsers with pop-up windows (window injection
vulnerability) or in SQL statements when external input is transmitted directly into
SQL (SQL injection vulnerability). In a window injection flaw scenario, security
intruders can "hijack" a named Web browser window after a user opens both a
malicious Web site and a trusted Web site in separate browser windows. This
assumes that the trusted Web site opens up a pop-up window and that the malicious
Web site is aware of the name of the pop-up window. To avoid data injection flaws, it
is important to enforce thorough input validation; that is, all input values, query
strings, form fields, cookies, and client-side scripts must be validated against known
and valid values only, and everything else must be rejected.
• Cross-Site Scripting (XSS)
With XSS, a Web application can gather information by using a hyperlink or script
that contains malicious content. An attacker typically uses this mechanism to inject
malicious code into a target Web server or to deliver to users a malicious link that
redirects them to another Web server. The attackers frequently use JavaScript,
VBScript, ActiveX, HTML, or Flash in a vulnerable Web application to gather data
from the current user. Based on the user interaction with the target Web server, the
script may hijack the user's account information, change user privileges, steal cookie
or session information, poison the user-specific content, and so on. Thus, it is
important to diagnose and test the application for XSS risks and vulnerabilities.
• Improper Error Handling
Most applications are susceptible to security issues related to error handling when
they display detailed internal error messages about application conditions such as
out of memory, null pointer exceptions, system call failure, database access failure,
network timeout, and so on. This information usually reveals internal details of
implementation, failure conditions, and the runtime environment. Hackers can make
use of this information to locate a weak point in the application and design an attack.

This information helps hackers crash applications or cause them to throw error
messages by sending invalid data that forces the applications to access non-existent
databases or resources. Adopting proper error handling mechanisms will display
error messages as user-specific messages based on user input; no internal details
related to the application environment or its components will be revealed. All user-
specific error messages are mapped to underlying application-specific error
conditions and stored as log files for auditing. In the event of an attack, the log files
provide diagnostic information for verifying the errors and for further auditing.
• Insecure Data Transit or Storage
Confidentiality of data in transit or storage is very important, because most security
is compromised when data is represented in plain text. Adopting cryptographic
mechanisms and data encryption techniques helps ensure the integrity and
confidentiality of data in transit or storage.
• Weak Session Identifiers
Issuing or using session identifiers before authentication or over unencrypted
communication channels allows hackers to steal session information and then hijack
the associated user sessions for unauthorized business transactions. Representing
the session identifiers as cleartext helps the hacker to spoof the user identity
information using the session attributes. This weakness intensifies if the service
provider or Web applications do not validate the identity information obtained from
the session identifier of the service requester or if they do not set an expiry time for
the session. To prevent these issues, the application should issue encrypted session
identifiers after initiating a secure communication channel using SSL that ensures
confidentiality and integrity of the session information.
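A sketch of generating a strong session identifier (the 256-bit length is an illustrative choice; the identifier must still be issued only after authentication and transmitted only over SSL/TLS):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionIds {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Cryptographic randomness makes the identifier unguessable, unlike
    // identifiers derived from timestamps or sequential counters
    static String newSessionId() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```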
• Weak Security Tokens
Weak security tokens refer to the use of password security tokens that allow hackers
to guess passwords by using a dictionary or token decrypting tools and to
impersonate the user. Some Web applications may also echo back their passwords
as Base64 values that are susceptible to an attack and are easily reproducible. If the
HTML scripts or Web applications echo the password or security token, hackers may
intercept them and then impersonate the user for unauthorized access. A weak
security token is a common security problem in authentication and application
session management. To address these vulnerabilities, adopting strong
authentication or multifactor authentication mechanisms using digital certificates,
biometrics, or smart cards is usually considered. Thus, it is important to protect the
password files and also ensure that the passwords being used on accounts cannot
easily be guessed or cracked by hackers.
• Weak Password Exploits
Passwords are the weakest mechanisms for user authentication because they can be
easily guessed or compromised by a hacker who is watching the keystrokes or using
password-cracking tools to obtain data from password files. When a password is
stolen, it is very difficult to identify the culprit (i.e. guilty party) while an application
is being abused or attacked. Thus, it is important to protect password files by using
encrypted files and to ensure that the stored passwords cannot be retrieved, easily
guessed, or cracked by hackers. Adoption of strong authentication or multifactor
authentication mechanisms using digital certificates, biometrics, or smart cards is
strongly recommended. Weak password exploits are one of the most common
security issues in network-enabled applications.
• Weak Encryption
Encryption allows the scrambling of data from plaintext to ciphertext by means of
cryptographic algorithms. Attackers with access to large amounts of processing
power can compromise weaker algorithms. Key lengths exceeding 56 bits are
considered strong encryption, but in most cases using 128 bits or more is
recommended.

• Session Theft
Also referred to as session hijacking, session theft occurs when attackers create a
new session or reuse an existing session. Session theft hijacks a client-to-server or
server-to-server session and bypasses the authentication. Hackers do not need to
intercept or inject data into the communication between hosts. Web applications that
use a single SessionID for multiple client-server sessions are also susceptible to
session theft, which can occur at the Web application session level, the host
session level, or the TCP protocol level. In a TCP communication, session hijacking is
done via IP spoofing techniques, where an attacker uses source-routed IP packets to
insert commands into an active TCP communication between the two communicating
systems and disguises himself as one of the authenticated users. In Web-based
applications, session hijacking is done via forging or guessing SessionIDs and
stealing SessionID cookies. Preventing session hijacking is one of the first steps in
hardening Web application security, because session information usually carries
sensitive data such as credit card numbers, PINs, passwords, and so on. To prevent
session theft, always invalidating a session after a logout, adopting PKI (public
key infrastructure) solutions for encrypting session information, and adopting a
secure communication channel (such as SSL/TLS) are often considered best
practices.

• Issues of Configuration Data


A variety of configuration-related issues in the application or its server infrastructure
impact the security of business applications, particularly in the Web Tier and the
Business Tier. The most common examples are misconfigured SSL certificates and
encryption settings, use of default certificates, default accounts with default
passwords, and misconfigured Web server plug-ins. To prevent issues, it is
important to test and verify the environment for configuration-related weaknesses.
• Broken Authentication
Broken authentication is caused by improper configuration of authentication
mechanisms and flawed credential management that compromise application
authentication through password change, forgotten password, account update,
certificate issues, and so on. Attackers compromise vulnerable applications
by manipulating credentials such as user passwords, keys, session cookies, or
security tokens and then impersonating a user. To prevent broken authentication,
the application must verify its authentication mechanisms as well as authenticate the
requesting party of the user's credentials prior to granting access to the application.
• Broken Access Control
Access control determines an authenticated user's rights and privileges for access to
an application or data. Any access control failure leads to loss of confidential
information and unauthorized disclosure of protected resources such as application
data, functions, files, folders, databases, and so on. Access control problems are
directly related to the failure to enforce application-specific security policies and the
lack of policy enforcement in application design. To prevent access control failures, it
is important to verify the application-specific access control lists (i.e. permissions)
for all known risks and to run a penetration test to identify potential failures.
• Policy Failures
Security policy provides rules and conditions under which actions should be taken in
response to defined events. In general, businesses and organizations adopt security
policies to enforce access control in IT applications, firewalls, anti-spam processing,
message routing, service provisioning, and so on. If there are insufficient or missing
rules, invalid conditions or prerequisites, or conflicting rules, the security policy
processing will not be able to enforce the defined security rules. Applications can
thus be vulnerable due to policy failures. With such failures, hackers can discover
and exploit any resource loophole. Policy failure is a security issue for application
design and policy management.
• Audit and Logging Failures
Auditing and logging mechanisms facilitate non-repudiation services that provide
irrefutable evidence about all application events. They help to record all key
application events. Any audit or logging failure can cripple the ability of an
application to diagnose suspicious activity and foil malicious attacks. Applications
also cannot trace exceptions and specific bugs if audit and logging failure is present.
Monitoring the auditing and logging processes of high-availability applications is
vital. Log files must be secured by restricted access.
• Denial of Service (DoS) and Distributed DoS (DDoS)
DoS and DDoS are the worst form of network-level attacks. They can affect
applications in many ways, including excessive consumption of nonrenewable
resources such as network bandwidth, memory, CPU utilization, storage, and so on.
They can also cause destruction of host configuration information, resulting in
application failures and OS crashes. A traditional DoS attack is launched by a single
machine against another machine; a DDoS attack is distributed and coordinated
across several machines. Hackers initiate DoS or DDoS attacks by exploiting
application weaknesses and flaws related to resource management, authentication,
error handling and application configuration. Web-based applications are highly
susceptible to DoS and DDoS attacks, and in some cases it is impossible to identify
whether the incoming service request is an attack or ordinary traffic. It is extremely
difficult to adopt preventive measures for DoS and DDoS. Possible countermeasures
include implementing router filtering to drop connections from untrusted hosts and
networks, and configuring fault-tolerant and redundant server resources. In addition,
the Web/application server must be configured to perform host-name verification,
identifying fake requests and denying them further processing. At the application
level, the Web server may adopt
security patterns such as Secure Pipe, Intercepting Web Agent, and Intercepting
Validator.
• Man-in-the-Middle (MITM)
A MITM attack is a security attack in which the hacker is able to read or modify
business transactions or messages between two parties without either party knowing
about it. Attackers may execute man-in-the-middle attacks by spoofing the business
transactions, stealing user credentials, or exploiting a flaw in the underlying public
key infrastructure or Web browser. Man-in-the-middle is a security issue in
application design and application infrastructure.
Web-tier components and Web services communication can be safeguarded by
implementing transport-layer security using the SSL/TLS or IPSec protocols. At the
application level, the components can make use of the Secure Pipe pattern.
• Multiple Sign-On Issues
Multiple sign-on is a common issue in an enterprise application integration
environment. It requires a user to log on multiple times because the integrated
application does not share a common sign-on mechanism within the environment.
This makes an application vulnerable due to the required multiple sign-on actions.
When a user switches applications within a server, hackers can compromise security
by using credentials from previous sign-on sessions. In addition, users are required
to explicitly sign off from every application session within the server. This can result
in an increase in human error, loss of productivity, and frequent failure to access all
the applications to which they have access rights.
Adopting Single Sign-On (SSO) mechanisms solves these problems by eliminating
the need for users to remember usernames and passwords other than their initial
application login. SSO also increases productivity, because users no longer need to

physically enter repetitive usernames and passwords or other forms of authentication
credentials.
• Deployment Problems
Many security exposure issues and vulnerabilities occur by chance because of
application deployment problems. These include inconsistencies within and conflicts
between application configuration data and the deployment infrastructure (hosts,
network environment, and so on). Human error in policy implementation also
contributes to these problems. In some cases, deployment problems are due to
application design flaws and related issues. To prevent these problems, it is
important to review and test all infrastructure security policies and to make sure
application-level security policies reflect the infrastructure security policies, and vice
versa. Where there are conflicts, the two policies will need to be reconciled. Some
trade-offs in constraints and restrictions related to OS administration, services,
protocols, and so on may need to be made.



• Coding Problems
Coding practices greatly influence application security. Coding issues cause flaws
and erroneous conditions in program logic and application flow, as well as problems
related to input validation, race conditions, exception handling, runtime failures, and
so on. To ensure better coding practices are followed, it is recommended to adopt a
code review methodology followed by source code scanning (a systematic scan for
malicious or vulnerable source code) so that all potential risks and vulnerabilities can
be identified and corrected.

8.4 Describe the commonly used declarative and programmatic methods used to
secure applications built on the Java EE platform, for example use of deployment
descriptors and JAAS.

Ref. • [CORE_SECURITY_PATTERNS] Chapter 4.

The J2EE container-based security services primarily address the security requirements of the
application tiers and components. They provide authentication and authorization mechanisms
by which callers and service providers prove each other's identities, and then they provide
access control over the resources to which an identified user or system has access.
A J2EE container supports two kinds of security mechanisms. Declarative security allows
enforcement of security using a declarative syntax applied during the application's
deployment. Programmatic security allows expressing and enforcing security decisions at the
application's invoked methods and its associated parameters.

Declarative Security
In a declarative security model, the application security is expressed using rules and
permissions in a declarative syntax specific to the J2EE application environment. The security
rules and permissions will be defined in a deployment descriptor document packaged along
with the application component. The application deployer is responsible for assigning the
required rules and permissions granted to the application in the deployment descriptor. Figure
below shows the deployment descriptors meant for different J2EE components:
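As an illustration, a Web component can declare such rules entirely in its web.xml deployment descriptor. The fragment below is a sketch; the role name, URL pattern, and authentication method are assumptions for the example, not taken from the source:

```xml
<!-- Illustrative web.xml fragment: only callers in the "admin" role may
     reach /admin/*, and the container must protect those requests with a
     secure transport such as SSL/TLS. -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Admin area</web-resource-name>
        <url-pattern>/admin/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

<login-config>
    <auth-method>BASIC</auth-method>
</login-config>

<security-role>
    <role-name>admin</role-name>
</security-role>
```

No code changes are needed to alter these rules; the deployer edits the descriptor and redeploys.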



Declarative security can be supplemented by programmatic security in the application code
that uses J2EE APIs to determine user identity and role membership and thereby enforce
enhanced security. In cases where an application chooses not to use a J2EE container, a
configurable implementation of security similar to Container Managed Security can still be
designed by using JAAS-based authentication providers and JAAS APIs for programmatic
security.

Programmatic Security
In a programmatic security model, security decisions are made within the invoked
business methods to determine whether the caller has been granted the privilege to
access a resource or must be denied. This determination can be based on the
parameters of the call, the component's internal state, the time of the call, or the
data being processed.
For example, an application component can perform fine-grained access control with the
identity of its caller by using EJBContext.getCallerPrincipal (EJB component) or
HttpServletRequest.getUserPrincipal (Web component) and by using
EJBContext.isCallerInRole (EJB component) and
HttpServletRequest.isUserInRole (Web component). This allows determining whether
the identity of the caller has the privileged role to execute a method for accessing a protected
resource.

Using programmatic security helps when declarative security alone is not sufficient to
express the security requirements of the application component and when access
control decisions need to use complex and dynamic rules and policies.
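As a sketch of such a fine-grained, parameter-dependent decision, the code below uses a stand-in CallerContext interface mirroring the container APIs named above (HttpServletRequest in a Web component, EJBContext in an EJB). The role names and the amount threshold are assumptions invented for the example:

```java
import java.security.Principal;

// Illustrative only: CallerContext stands in for the container-provided
// security context (HttpServletRequest.getUserPrincipal()/isUserInRole()
// in a Web component, EJBContext.getCallerPrincipal()/isCallerInRole()
// in an EJB).
public class ProgrammaticSecuritySketch {

    public interface CallerContext {
        Principal getUserPrincipal();
        boolean isUserInRole(String role);
    }

    // A dynamic rule that plain declarative security cannot express:
    // managers may approve any amount, clerks only small ones.
    public static boolean mayApprove(CallerContext ctx, double amount) {
        if (ctx.isUserInRole("manager")) {
            return true;
        }
        return ctx.isUserInRole("clerk") && amount < 1000.0;
    }

    public static void main(String[] args) {
        CallerContext clerk = new CallerContext() {
            public Principal getUserPrincipal() { return () -> "alice"; }
            public boolean isUserInRole(String role) { return "clerk".equals(role); }
        };
        System.out.println("clerk approves 500: " + mayApprove(clerk, 500.0));
        System.out.println("clerk approves 5000: " + mayApprove(clerk, 5000.0));
    }
}
```

In a real component the same check would read `request.isUserInRole("clerk")` or `ejbContext.isCallerInRole("clerk")`, with the role mapped in the deployment descriptor.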
Java Authentication and Authorization Service (JAAS)
Authentication is the process of verifying the identity of a user or a device to determine its
accuracy and trustworthiness. Authorization provides access rights and privileges depending
on the requesting identity's granted permissions to access a resource or execute a required
functionality.
JAAS provides API mechanisms and services for enabling authentication and authorization in
Java-based application solutions. JAAS is the Java implementation of the Pluggable
Authentication Module (PAM) framework originally developed for Sun's Solaris operating
system. PAM enables the plugging in of authentication mechanisms, which allows applications
to remain independent from the underlying authentication technologies. Using PAM, JAAS
authentication modules allow integrating authentication technologies such as Kerberos, RSA,
smart cards, and biometric authentication systems. Figure below illustrates JAAS-based
authentication and authorization using pluggable authentication modules.
In an end-to-end application security model, JAAS provides authentication and authorization
mechanisms to the Java applications and also enables them to remain independent from JAAS
provider implementations. The JAAS API framework features can be categorized into two
concepts:
• Authentication - JAAS provides reliable and secure API mechanisms to verify and
determine the identity of who is executing the code.
• Authorization - Based on an authenticated identity, JAAS applies access control
rights and privileges to execute the required functions. JAAS extends the Java
platform access control based on code signers and codebases with fine-grained
access control mechanisms based on identities.

JAAS Authentication
In a JAAS authentication process, the client applications initiate authentication by instantiating
a LoginContext object. The LoginContext then communicates with the LoginModule,
which performs the actual authentication process. As the LoginContext uses the generic
interface provided by a LoginModule, changing authentication providers during runtime
becomes simpler without any changes in the LoginContext. A typical LoginModule will
prompt for and verify a username and password or interface with authentication providers
such as RSA SecureID, smart cards, and biometrics. LoginModules use a CallbackHandler
to communicate with the clients to perform user interaction to obtain authentication
information and to notify login process and authentication events.
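This interaction can be sketched end to end with plain JDK classes, since JAAS ships with the Java SE platform. Everything named "Demo" below is hypothetical: the LoginModule accepts a single hard-coded user, and the Configuration is installed programmatically instead of through a my-jaas.conf file:

```java
import java.security.Principal;
import java.util.Collections;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.*;
import javax.security.auth.login.*;
import javax.security.auth.spi.LoginModule;

public class JaasSketch {

    // Hypothetical LoginModule: obtains credentials through the
    // CallbackHandler and accepts only alice/secret.
    public static class DemoLoginModule implements LoginModule {
        private Subject subject;
        private CallbackHandler handler;
        private boolean succeeded;

        public void initialize(Subject subject, CallbackHandler handler,
                               Map<String, ?> sharedState, Map<String, ?> options) {
            this.subject = subject;
            this.handler = handler;
        }

        public boolean login() throws LoginException {
            NameCallback name = new NameCallback("user: ");
            PasswordCallback pass = new PasswordCallback("password: ", false);
            try {
                handler.handle(new Callback[] { name, pass });
            } catch (Exception e) {
                throw new LoginException(e.toString());
            }
            succeeded = "alice".equals(name.getName())
                    && "secret".equals(new String(pass.getPassword()));
            if (!succeeded) {
                throw new FailedLoginException("bad credentials");
            }
            return true;
        }

        public boolean commit() {
            // On success, attach a Principal representing the caller.
            if (succeeded) {
                subject.getPrincipals().add((Principal) () -> "alice");
            }
            return succeeded;
        }

        public boolean abort()  { return true; }
        public boolean logout() { subject.getPrincipals().clear(); return true; }
    }

    // Programmatic stand-in for a my-jaas.conf entry such as:
    //   Demo { JaasSketch$DemoLoginModule required; };
    static void installConfiguration() {
        Configuration.setConfiguration(new Configuration() {
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        DemoLoginModule.class.getName(),
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        Collections.emptyMap())
                };
            }
        });
    }

    // Client side: the LoginContext drives the configured LoginModules and
    // the CallbackHandler supplies the credentials they request.
    public static Subject authenticate(String user, char[] password) throws LoginException {
        installConfiguration();
        CallbackHandler handler = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(user);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password);
                }
            }
        };
        LoginContext lc = new LoginContext("Demo", handler);
        lc.login(); // delegates to DemoLoginModule.login()/commit()
        return lc.getSubject();
    }

    public static void main(String[] args) throws Exception {
        Subject subject = authenticate("alice", "secret".toCharArray());
        System.out.println("Authenticated as: "
                + subject.getPrincipals().iterator().next().getName());
    }
}
```

Because the client only touches LoginContext and CallbackHandler, the DemoLoginModule could be swapped for a Kerberos or LDAP module by changing the configuration alone.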
• Configuring JAAS LoginModule for an application
The JAAS LoginModules are configured with an application using a JAAS configuration
file (e.g., my-jaas.conf), which identifies one or more JAAS LoginModules intended
for authentication. Each entry in the configuration file is identified by an application

name, and contains a list of LoginModules configured for that application. Each
LoginModule is specified via its fully qualified class name and an authentication Flag
value that controls the overall authentication behavior. The authentication process
proceeds down the specified list of entries in the configuration file. The following is
the list of authentication flag values:
○ Required - Defines that the associated login module must succeed with
authentication. Whether it succeeds or fails, authentication still continues
down the LoginModule list.
○ Requisite - Defines that the associated login module must succeed for the
overall authentication to be considered as successful. If it succeeds, the
authentication still continues to proceed down the LoginModule list;
otherwise, it terminates authentication and returns to the application.
○ Sufficient - Defines that the associated login module's successful
authentication is sufficient for the overall authentication. If the
authentication succeeds, control returns to the application and processing
does not proceed down the LoginModule list. If the authentication fails,
authentication still continues down the list of other login modules.
○ Optional - Defines that the associated login module's authentication is not
required to succeed. Whether the authentication succeeds or fails,
authentication still continues down the list of other login modules.
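An illustrative my-jaas.conf following the description above might read as follows. The entry names and the com.example module classes are hypothetical; Krb5LoginModule is the Kerberos module shipped with the JDK:

```
/* Hypothetical my-jaas.conf with two application entries */
Sample {
    com.sun.security.auth.module.Krb5LoginModule required;
};

WebPortal {
    /* try LDAP first; on success, skip the remaining modules */
    com.example.auth.LdapLoginModule sufficient;
    /* otherwise the database module must succeed */
    com.example.auth.DatabaseLoginModule required;
};
```

The file is typically selected at launch with -Djava.security.auth.login.config=my-jaas.conf.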
JAAS Authorization
JAAS authorization enhances the Java security model by adding user, group, and role-based
access control mechanisms. It allows setting user and operational level privileges for enforcing
access control on who is executing the code.
When a Subject is created as a result of an authentication process, the Subject represents
an authenticated entity. A Subject usually contains a set of Principals, where each
Principal represents a caller of an application. Permissions are granted using the policy for
selective Principals. Once the logged-in user is authenticated, the application
associates the Subject with the appropriate Principals based on the user's access
control context.
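Such Principal-based permissions are granted through the Java security policy. A hypothetical policy-file entry (the principal class and file path are invented for illustration) could look like:

```
// Permission applies only when the executing Subject contains a Principal
// of this class whose name is "alice".
grant principal com.example.auth.DemoPrincipal "alice" {
    permission java.io.FilePermission "/reports/-", "read";
};
```

Code run under Subject.doAsPrivileged for that Subject is then checked against these grants.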



9. Bibliography
 [JAVA_DESIGN] Kirk Knoernschild. Java Design: Objects, UML, and Process.
http://www.amazon.com/Java-Design-Objects-UML-Process/dp/0201750449.
 [JEE_5_TUTORIAL] The Java EE 5 Tutorial, Third Edition.
http://java.sun.com/javaee/5/docs/tutorial/doc/JavaEETutorial.pdf.
 [DESIGNING_ENTERPRISE_APPLICATIONS] Designing Enterprise Applications with the
J2EE Platform, Second Edition.
http://java.sun.com/blueprints/guidelines/designing_enterprise_applications_2e/.
 [SUN_SL_425] Architecting and Designing J2EE Applications.
http://www.sun.com/training/catalog/courses/SL-425.xml.
 [DESIGN_PATTERNS] Gamma, Erich; Richard Helm, Ralph Johnson, and John Vlissides
(1995). Design Patterns: Elements of Reusable Object-Oriented Software.
http://www.amazon.com/Design-Patterns-Object-Oriented-Addison-Wesley-
Professional/dp/0201633612.
 [CORE_J2EE_PATTERNS] Deepak Alur, Dan Malks, John Crupi. Core J2EE Patterns: Best
Practices and Design Strategies, Second Edition. http://www.amazon.com/Core-J2EE-
Patterns-Practices-Strategies/dp/0131422464.
 [CORE_SECURITY_PATTERNS] Christopher Steel, Ramesh Nagappan, Ray Lai. Core
Security Patterns: Best Practices and Strategies for J2EE, Web Services, and Identity
Management. http://www.amazon.com/Core-Security-Patterns-Strategies-
Management/dp/0131463071.
 [SCEA-051] Paul Allen, Joseph Bambara. Sun Certified Enterprise Architect for Java EE
Study Guide (Exam 310-051). McGraw-Hill Osborne, 2nd edition (1 Aug 2007).
 [Simpler Programming Model] The Java Persistence API - A Simpler Programming Model
for Entity Persistence
 [Ease of Development] Ease of Development in Enterprise JavaBeans Technology.mht
 [SAMP ARCHITECTURES] ADDRESSING SYSTEMIC QUALITIES IN SAMP ARCHITECTURES,
Marina Fisher, Sun Startup Essentials Amanda Waite, ISV-Engineering OSS.
 [12-Steps] 12 Steps to Useful Software Metrics, Linda Westfall, The Westfall Team.
 [web service] Web Services for J2EE, Version 1.0
 [SOA for J2EE] Implementing Service-Oriented Architectures (SOA) with the Java EE 5
SDK
 [Frameworks Driving Innovation] Challenges in the J2EE Web Tier While Frameworks
Driving Innovation
 [Web services J2EE] Web Services for J2EE, Version 1.0
 [.Net J2EE] Application Interoperability: Microsoft .NET and J2EE
 Web Services Technology Deployment Issues, Gerald W. Edgar & Pranab K. Baruah, IT
Architecture & e-Business, Commercial Airplanes Group, The Boeing Company
