

Enterprise Application Integration
Technology Review #2000-2
Gautam Shroff
April 2000
© Copyright 2000 Tata Consultancy Services. All rights reserved.
No part of this document may be reproduced or distributed in any form by any means
without the prior written authorization of Tata Consultancy Services.
1 INTRODUCTION ..........................................................................................1
2 THE ENTERPRISE INTEGRATION PROBLEM ...............................................2
2.1 OPERATIONAL AND ANALYTICAL INTEGRATION ........................................................2
2.2 APPLICATION INTERFACES .................................................................................2
2.3 LEVELS OF INTEGRATION...................................................................................5
2.4 INTEGRATION STRATEGIES.................................................................................5
2.5 EAI TECHNOLOGY/PRODUCT SURVEY ..................................................................7
3.1 OVERALL APPROACH ......................................................................................12
3.2 MASTERCRAFT DEVELOPMENT ENVIRONMENT .......................................................12
3.3 PRS FRAMEWORK FOR EAI..............................................................................12
3.4 METHODOLOGY.............................................................................................13
3.5 ARCHITECTURE.............................................................................................14
3.6 TECHNOLOGY ...............................................................................................15
4 PROCESS INTEGRATION...........................................................................16
5 CONCLUSIONS...........................................................................................18
1 Introduction
Over the years, most organizations have invested in diverse IT systems, each catering to
different functions, departments, and stages in the product life cycle. While the number
and complexity of these applications increase steadily, the need for all of them to work
together in an integrated manner, in near real-time, grows ever more pressing. The
need for a consolidated view of organizational information on the one hand, and on the
other, for an operational process supported smoothly by all the IT systems in the
enterprise, is deeply felt by many an IT department, under continuous pressure from its
ever more demanding users.
To address these problems, many organizations have put in place processes, automated
and otherwise, to manage minimal levels of integration between different applications.
The growing complexity and unmanageability of these solutions leads many an
organization to consider re-engineering all its systems into a supposedly `ideal' new
system. Unfortunately, the time required for such an exercise is likely to be unacceptable,
with the business needs sure to have changed considerably in any such time frame.
What is required is an approach that restructures the IT framework into an adaptive
architecture. An adaptive architecture enables continuous change, allowing the IT
infrastructure to evolve at the rapid pace set by technology advances, business growth,
and business change. Elements of an adaptive architecture include component-based
design, object-orientation, distributed object technology, and an Enterprise Application
Integration (EAI) strategy. The latter is a crucial element of the overall solution that
enables an organization to move towards and put in place an adaptive architecture, while
solving many of its integration problems along the way.
This paper outlines the issues in EAI from the experiences of the TCS Corporate
Architecture Group. The problems in EAI are classified, and the technologies and
products available in the market are surveyed. The relationships between EAI, adaptive
architectures and development environments are elucidated. Finally, a "Component
Interface Object Model" approach to EAI is proposed. This approach outlines a strategy
for integrating applications while moving towards an adaptive component-based
architecture, while also addressing critical issues of co-existence with, and migration from,
legacy systems. The role of the development environment in this approach is played by
the MasterCraft development environment of TCS. The role of object-oriented
frameworks in this strategy is demonstrated with the PRS (Parallel Replication System)
framework of TCS. An EAI architecture is described using these tools. The overall
approach is generic and can be adopted using other tools and technology as well.
Finally, we explore the goals of process integration, where the integration of business
processes in different applications in a flexible manner allows the organization to change
its IT processes with changing business process needs. While these latter goals are not
yet achievable in all their glory, a roadmap is provided that promises concrete steps
towards highly flexible process integration.
2 The Enterprise Integration Problem
2.1 Operational and Analytical Integration
The problem of integration has its roots in the manner in which the IT systems of most
large enterprises have evolved over the years. Departmental solutions addressing a specific
operational function, or alternatively a specific analytical need, have evolved
independently. As a result, the same data is potentially kept in many systems, not
consciously, but rather as a result of the way systems have evolved. Often, copies of the
`same' data record in different systems may be inconsistent. Thus, the organization is
unable either to obtain correct data for global analysis and planning, or to execute its
business processes in an integrated and seamless manner. There are, in fact, two
integration problems - analytical and operational.
Analytical needs require information integration, where consistent data is gathered from
the diverse operational systems of the enterprise and processed into a form on which
decision support systems can operate efficiently. Data warehousing and mining
methodologies have been evolved to address this problem in a variety of effective ways.
We do not address analytical integration in this paper.
On the other hand, operational integration is needed for the organization to function more
smoothly, with faster turn-around time for its processes, availability of current
information to support transactional decisions, and correct information for
tracking and controlling the business process in order to improve service levels.
Whereas analytical integration aggregates, collates and cleans data in order to provide a
platform on which a variety of questions can be answered, operational integration aims
to enable the development of an integrated business process, where operational data at
the lowest level of detail is available currently and correctly. Such data must always be in
sync across whatever systems it is maintained in. Further, a unified view of the
business process is available to the users of the enterprise's IT (not only the analysts),
transcending the diverse systems that are employed in the enterprise; additionally, this
process must be flexible to support the changing needs of the organization, and it must
be open enough to allow new systems to be added, and old ones to be replaced,
without disturbance. Finally, all this must take place in real-time to support the business
process on-line. In many ways, operational integration is often more difficult to achieve
than analytical integration.
2.2 Application Interfaces
In order to integrate applications, one needs to understand the different types of
interfaces offered by applications. Integration can only take place via such interfaces, and
the restrictions/limitations of each add to the complexity of the integration problem.
The lowest level interface that an application can possibly make available is its data
schema. Direct access to the application's data then allows interfacing to it. There are
numerous serious disadvantages to this approach; even so, it is a widely used strategy,
especially in mainframe applications.
Using data as a means to interface to an application results in exposing its schema
design. As a result, changing the application is, in general, impossible without analyzing
and, if required, changing the interface. Integration usually requires triggering actions in
external systems in response to changes to data. With data interfaces, this usually
involves employing DB-specific triggers to external procedures if available, or inefficient
polling strategies; this is not a clean solution. Finally, with a data-level interface, the
responsibility for maintaining the integrity of data is shared by external applications, which
need to understand the structure of the application they are trying to integrate with. In the
end, this results in an unmaintainable `spaghetti' system.
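The polling strategy criticized above can be sketched concretely. The following minimal illustration uses an in-memory SQLite table as a stand-in for an application's data store; the table, columns, and `exported` flag are invented for illustration, not drawn from any real schema:

```python
import sqlite3

# A hypothetical application table, accessed directly at the data level.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT,"
             " exported INTEGER DEFAULT 0)")
conn.execute("INSERT INTO orders (status) VALUES ('NEW')")
conn.commit()

def poll_for_changes(conn):
    """Scan for rows not yet propagated, and mark them as exported."""
    rows = conn.execute(
        "SELECT id, status FROM orders WHERE exported = 0").fetchall()
    for row_id, status in rows:
        # ... here the change would be pushed to the external system ...
        conn.execute("UPDATE orders SET exported = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return rows

changed = poll_for_changes(conn)       # first poll picks up the new row
assert poll_for_changes(conn) == []    # subsequent polls find nothing new
```

The inefficiency is visible even here: every poll scans the table whether or not anything changed, and the external system's `exported` bookkeeping leaks into the application's schema.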
Application software, such as ERP packages, often provides application programming
interfaces (APIs) embodied in software libraries that can be used to access and interact with
these applications. Integrating using APIs is a far cleaner solution, where possible, than
data-level integration. Problems arise, however, when the API libraries are not
available on the platform required (i.e. programming language, operating system,
hardware, etc.). Overcoming such problems usually involves low-level systems coding,
which can result in `integration code' that is highly proprietary and, in the long run,
difficult to maintain. Other common problems with APIs are that they often change when
the underlying product is upgraded, without always maintaining upward compatibility;
further, APIs are sometimes provided by third-party vendors, and need not keep pace with the
development plans of the packaged software application. Finally, APIs rarely solve the
problem of automatically informing external systems when events occur in the target
application; thus deploying inefficient layers of polling is often the only way to enable
information to flow out of an application in an unsolicited manner.
Component Interfaces
Distributed object technology based on the CORBA standards, and more recently the EJB
and J2EE standards, has made portability of application interfaces an order of
magnitude easier.
Using CORBA-based object request brokers enables applications on different platforms
and written in different languages to communicate via a common interface specification
(in `IDL' format). The details of the communication, including issues such as data
formats, mapping of structures in one language to another, etc., are taken care of by the
ORB, in accordance with the CORBA specification. CORBA provides a standard calling
procedure, using any vendor-supplied ORB product, with which any `IDL' interface can be
invoked. For unsolicited data transfer, the CORBA `event' service allows an application to
publish events that can be trapped by the ORB (if it supports events); the problem of
unsolicited data flow out of the application can thus be addressed. Thus, if an
application publishes a CORBA interface, potentially any other application can be integrated
with it.
While CORBA eases many problems, in practice numerous problems remain with this
technology. ORBs from different vendors do not always work with each other.
Further, the CORBA specifications for events, security, etc. are optional; thus, many ORB
vendors do not support these services. The performance overheads of CORBA interfaces
vary tremendously from vendor to vendor. Finally, an application needs to provide a
CORBA interface in the first place. In the absence of one, a wrapper interface needs to be
built using other means to access the application, such as its data or API, with a CORBA
interface provided to the outside.
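The shape of such a wrapper can be sketched as a plain adapter class; in a real deployment the wrapper's methods would be exposed through an ORB rather than called directly, and every name below is invented for illustration:

```python
class LegacyLedger:
    """Stands in for raw data access to a legacy application's tables."""
    def __init__(self):
        self._rows = {"ACC1": 1500}   # hypothetical account -> balance

    def raw_select(self, account_key):
        return self._rows[account_key]

class LedgerWrapper:
    """Exposes a clean, method-level interface to the outside world,
    hiding the legacy schema from all callers."""
    def __init__(self, legacy: LegacyLedger):
        self._legacy = legacy

    def get_balance(self, account_id: str) -> int:
        # Schema knowledge is confined to this one place.
        return self._legacy.raw_select(account_id)
```

The point of the design is that if the legacy schema changes, only the wrapper's internals change; callers of `get_balance` are unaffected.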
The emergence of Java as a server platform for enterprise applications is clearly visible.
The Java 2 Enterprise Edition standards integrating varieties of Java-based technologies,
including Enterprise Java Beans, have lent additional momentum to this trend. Java
interfaces are also gravitating towards the IDL standard. The EJB component model
additionally provides a standard for server-side portability, thus enabling future
applications to be moved wholesale from one operating environment to another,
circumventing many potential integration problems. Further, EJB and CORBA are moving
ever closer together, each providing solutions to different parts of the interfacing puzzle -
CORBA IDL a standard for the client, or calling, side, and EJB a standard for deploying
server-side functionality on a Java platform. Finally, the Java platform provides the
standard for integrating web functionality into an application, which often plays a crucial
part in the overall integration solution. The major problems with the Java platform today
are its relative immaturity and volatility. Nevertheless, it is here to stay.
Message Queue
Messaging interfaces are an important, yet often neglected, mechanism for integration.
Many legacy applications provide interfaces based on files - this is a primitive form of a
message interface. Modern message queue systems provide guaranteed delivery across a
variety of platforms and network communication protocols. Applications merely enqueue
data into a queue that has as its destination an API, component interface, or custom
integration service for the target application. The messaging system manages many of
the communication details.
Message queue-based interfacing is essential for asynchronous interfacing, where the
output of one application needs to be routed to another application for later processing.
Traditional batch interfaces developed using files can be transformed and made highly
efficient when message queues are used instead. Artificial latencies introduced by batch
scheduling are removed, and data becomes available as soon as it is produced. This
feature is extremely important to exploit correctly while designing the EAI solution, since
it can eliminate many bottlenecks in traditional legacy system interfaces, and can
streamline processes rapidly.
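The contrast with batch hand-off can be illustrated with an in-process queue standing in for a real message-queue product; the producer enqueues each record as it arrives, and the consumer processes it immediately rather than waiting for a scheduled batch window. The message contents are invented:

```python
import queue
import threading

# An in-process queue simulating asynchronous, guaranteed-order delivery.
q = queue.Queue()
results = []

def consumer():
    """The target application's integration service: dequeue and process."""
    while True:
        msg = q.get()
        if msg is None:            # sentinel: producer has finished
            break
        results.append(f"processed:{msg}")

t = threading.Thread(target=consumer)
t.start()

# The producer enqueues data the moment it is produced - no batch latency.
for record in ("txn-1", "txn-2"):
    q.put(record)
q.put(None)
t.join()
```

With a real product such as MQSeries, the queue would additionally survive process and network failures, which `queue.Queue` does not.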
The message content is another area where much attention is being paid. The
incorporation of XML-based data models for messages represents an important step in
the direction of application integration beyond a single organization's boundaries.
However, these standards are yet to evolve in many areas, and their adoption is not yet widespread.
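As a small illustration of such an XML-based message format - the element names below are assumptions for the sketch, not any published standard:

```python
import xml.etree.ElementTree as ET

def make_message(order_id, amount):
    """Build a hypothetical XML order message for the common backbone."""
    root = ET.Element("order")
    ET.SubElement(root, "id").text = str(order_id)
    ET.SubElement(root, "amount").text = str(amount)
    return ET.tostring(root, encoding="unicode")

# Any receiving application can parse the message without knowing the
# sender's internal data structures - only the shared format.
msg = make_message(42, 100)
parsed = ET.fromstring(msg)
```

The value of the shared format is precisely that sender and receiver agree only on the XML schema, not on each other's internals.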
User Interface
Many legacy applications offer user interfaces that are themselves a candidate for
application interfacing. This is traditionally referred to as the `screen-scraping' approach.
Such an approach is highly practical when the underlying APIs are not available in a
usable form. The user interface provides a well-defined mechanism of interaction with
the target application, and guarantees that the appropriate business validations will be
carried out while processing a transaction (which cannot be guaranteed by data-level
integration).
User interface integration is relatively straightforward in mainframe systems where
screen-scraping techniques can be used. By contrast, client/server systems are far more
difficult to interface with in this manner. Screen-scraping techniques do not apply
directly, and in the absence of specific tools provided by the 4GL environment, it is not
always feasible.
2.3 Levels of Integration
As we have seen from the discussion above, interfacing with a target application can
occur using different types of interfaces. Each of these corresponds to a different `level'
of application integration. Thus, we have:
1. Data-level Integration: direct interface to data
2. API-level Integration: using APIs
3. Method-level Integration: using component interfaces
4. User Interface-level Integration: using screen-scraping techniques.
The choice of the appropriate integration level is largely determined by the nature of the
target application and the interface options available. Ideally, method-level integration
using component interfaces is to be preferred wherever such interfaces are available.
This approach offers the opportunity to share processes across applications; for example,
an application can re-use an interest calculation engine in another system if it is
published as a component interface. Enquiries and updates to a target application's data
can of course be done as well. However, method-level integration is not easy to achieve
in practice, because of mismatches in the interface parameters. It is rare that the
interface required is exactly that which is provided; some integration application always
needs to be written to marry the two. In the absence of component interfaces, API-level
integration is the next option, with similar advantages and issues; of course, APIs need to
be available.
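The `marrying' of a required interface to the one actually provided is typically a thin adapter. The sketch below uses a hypothetical interest calculation whose units differ between caller and provider; the function names and unit conventions are invented:

```python
def provided_interest(principal_paise: int, rate_bps: int) -> int:
    """The interface the target application actually publishes:
    integer paise and basis points."""
    return principal_paise * rate_bps // 10_000

def required_interest(principal_rupees: float, rate_percent: float) -> float:
    """The interface the calling application needs: the adapter
    converts units in both directions."""
    paise = int(round(principal_rupees * 100))
    bps = int(round(rate_percent * 100))
    return provided_interest(paise, bps) / 100.0
```

Even this tiny mismatch (units) requires glue code; real mismatches in structure, identifiers, and error handling make the integration application correspondingly larger.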
In case method- or API-level integration is not possible, the choice comes down to data-
and user interface-level integration. Traditionally, back office integration has followed the
data-level route, initially using batch files, and adding message queues to make these
mechanisms closer to real-time. When the issue is to enable data flow between two, or a
small number of, applications, this approach is adequate and usually quite effective.
However, data-level integration is the method of choice only when point-to-point
integration is being used, as explained below. For cross-application integration, which can
yield far greater benefits, user interface-level integration is a better choice.
2.4 Integration Strategies
Integration of one application directly to another can be achieved through the
appropriate interfacing mechanism. This is the 'point-to-point' strategy. However,
using this strategy when the number of such application pairs increases is clearly not
scalable; for n applications we could require on the order of n² integration interfaces to be
constructed. Point-to-point integration also runs the risk of being implemented on a case-
by-case basis for each application pair, without a consistent integration strategy across
the enterprise. At the same time, it is appropriate for small-scale application integration
needs, where fewer than four applications are involved.
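The interface-count arithmetic behind this observation can be made explicit:

```python
def point_to_point(n: int) -> int:
    """Point-to-point: one interface per application pair."""
    return n * (n - 1) // 2

def point_to_hub(n: int) -> int:
    """Point-to-hub: one connector per application into the hub."""
    return n
```

With four applications the difference is modest (6 versus 4 interfaces); with twenty it is stark (190 versus 20), which is why point-to-point remains viable only at small scale.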
A more scalable alternative is to use a 'point-to-hub' strategy, where a router is used
to bring the number of interfaces down to a linear one. The hub approach requires the
enterprise to develop a common integration backbone into which different applications
read and write data, with each interface using the integration level appropriate for the
application concerned. The common integration backbone is often implemented using a
message queuing system, with a common data format across the applications.
Increasingly, organizations are looking at developing such a common data format using
an XML-based language.
While point-to-hub integration achieves many integration goals, in terms of scalability, a
common integration model, etc., it still requires applications to be accessed (by users) via
their native user interfaces. In other words, the control still lies with the individual
applications. Users still see multiple applications. Further, tuning the backbone to
implement the required business process can be difficult.
'Cross-application' integration seeks to go a step beyond, by developing a fresh
`integration application' that sits on top of other applications and provides a common
point of user interfacing. This goes beyond screen-scraping, however; the integration
application is based on a common object model that captures the interfaces of all the
underlying applications. However, rather than maintain these common objects in its own
data store, wherever possible it interfaces with the target applications at the appropriate
Fig. 1 Point-to-Point: Applications A, B, C and D interfaced directly to one another.
Fig. 2 Point-to-Hub: Applications A, B, C and D connected through a common messaging backbone (XML).
integration level. Further, the common object model can be component-based, so as to
provide opportunities for enhancing the applicability of legacy applications without
disturbing them, and eventually replacing their functionality by fresh applications. Finally,
user interfaces are defined and developed on the common object model, providing an
integrated view of all the applications in the enterprise.
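A minimal sketch of such an integration application follows: a common object model whose methods delegate to (rather than copy from) the underlying systems. All class names, methods, and data are invented for illustration:

```python
class BillingSystem:
    """Stands in for one underlying application (e.g. a billing package)."""
    def fetch_outstanding(self, cust_id):
        return 250   # hypothetical outstanding amount

class CrmSystem:
    """Stands in for another underlying application (e.g. a CRM)."""
    def fetch_name(self, cust_id):
        return "Acme Ltd"

class CustomerFacade:
    """Part of the common interface object model: a single 'customer'
    view spanning two target applications, holding no data of its own."""
    def __init__(self, billing, crm):
        self.billing, self.crm = billing, crm

    def summary(self, cust_id):
        # Each attribute is fetched live from the system that owns it.
        return {"name": self.crm.fetch_name(cust_id),
                "outstanding": self.billing.fetch_outstanding(cust_id)}
```

Because the facade owns no data, replacing either underlying system later only requires re-pointing the corresponding delegate, which is the migration path the cross-application approach promises.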
While there are many advantages to the cross-application integration approach, there are
some important issues to consider before deciding on such a strategy: the replacement
of the user interface of legacy applications could pose operational issues that need to be
accepted by the organization. Developing a common component interface object model
requires significant effort beyond that required for merely integrating applications. On the
positive side, one is in effect creating a new view of the enterprise. The opportunities for
developing flexibility of business processes at this level are attractive. The fresh user
interface can be web-based, solving numerous infrastructure, deployment and software
distribution issues. Finally, the cross application approach provides the first step towards
the migration away from selected legacy applications in a natural way.
2.5 EAI Technology/Product Survey
In this section we briefly survey the leading technologies that define the EAI environment
of today. We also consider selected products available in the market that address a
variety of EAI issues.
Transaction Processing Monitors such as Tuxedo and CICS provide scalability through
load balancing features that manage requests to underlying resources, such as
databases, through queuing mechanisms. They additionally provide communications,
messaging, and naming services in a distributed environment. Traditionally, TP monitors
have been the backbone of any large-scale transaction processing application. Even
today, it is not recommended to deploy large-volume transaction processing applications
without a TP monitor-based architecture.
CORBA and DCOM are distributed object technologies that facilitate client-server
connectivity across diverse platforms and programming languages. These technologies
define how inter-component interfaces can be specified in an interface definition
language (IDL). CORBA-based products, or object request brokers (ORBs), then provide
Fig. 3 Cross-Application: a common user interface and common interface object model in an integration application, sitting atop Applications A, B, C and D.
the mechanism to translate these IDL specifications into interface libraries for
client-server communication, managing the mapping across platforms and languages. DCOM
does the same specifically for Microsoft platforms and languages.
EJB/J2EE and Web Application Servers are defining features of the Java platform. The
use of Java as a server-side development language is becoming increasingly popular due
to the evolution of a portable platform services specification, via the Enterprise Java
Beans specification from Sun. Thus, just as Java programs are portable across Java
runtime environments, the surrounding platform services such as security, persistence,
state management, etc., which would otherwise need to be custom-developed for each
application, are also portable through a Web Application Server that supports the EJB
specification. The EJB specification, together with elements of the Java 1.2.x release and
technology for server-side dynamic web page generation, comprises the crucial elements
of the Java 2 Enterprise Edition. Vendors supporting J2EE are thus able to offer a
complete Java-based, web-enabled development architecture. For medium-sized systems,
this is rapidly becoming the platform architecture of choice. Where required, connectivity
to non-Java systems can be achieved using CORBA, which is also now part of the Java
platform. Further, by using a TP monitor as the underlying technology, some vendors
such as IBM and BEA have been able to develop Web Application Servers that can scale
to large applications as well.
MQSeries is the message queuing product from IBM that is rapidly emerging as a
standard in message-oriented middleware. Message queues enable batch interfaces to
become pseudo on-line, by guaranteeing reliable delivery of messages across diverse
platforms, obviating the need for file-based, scheduled batch transfers. MQSeries offers a
standard API and is available across a wide variety of platforms, including mainframes.
Thus, it has also become the most popular mechanism to interface with mainframe
applications in an asynchronous manner. (For synchronous communications, CICS
remains applicable.)
Oracle AQ Facility One problem with integrating applications using message queue
products, such as MQSeries, arises when data-level integration is required, and an
external event needs to be triggered by an action on a particular data object. Since
products such as MQSeries require a source process thread to enqueue a message, this
does not integrate well with triggers in a database. Oracle's Advanced Queuing Facility,
available in Oracle 8, builds a message queue into the database. On the database
side, the queue looks like a table; however, inserting data into it results in delivery to a
queue whose destination could be a target process, outside the scope of the database
system, which dequeues messages and processes them. This is an efficient mechanism
for data-level integration with Oracle applications. It can be combined with MQSeries
integration where required, such as when some of the platforms involved do not run
Oracle (e.g. mainframes).
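The queue-as-table idea can be sketched in miniature. The example below simulates it with SQLite (not Oracle AQ): a database trigger turns a data-level change into a queued message that an external process later dequeues. The schema and message text are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER);
-- The 'queue' is just a table from the database's point of view.
CREATE TABLE out_queue (seq INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
-- A data-level change enqueues a message, with no source thread needed.
CREATE TRIGGER on_account_change AFTER UPDATE ON accounts
BEGIN
    INSERT INTO out_queue (payload)
    VALUES ('account ' || NEW.id || ' changed');
END;
""")
conn.execute("INSERT INTO accounts VALUES ('A1', 100)")
conn.execute("UPDATE accounts SET balance = 200 WHERE id = 'A1'")

def dequeue(conn):
    """The external integration process drains the queue in order."""
    row = conn.execute(
        "SELECT seq, payload FROM out_queue ORDER BY seq LIMIT 1").fetchone()
    if row:
        conn.execute("DELETE FROM out_queue WHERE seq = ?", (row[0],))
    return row[1] if row else None
```

In Oracle AQ the dequeuing process would live outside the database entirely; the simulation only illustrates the trigger-to-queue coupling that makes the mechanism efficient.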
XML has emerged rapidly as a popular mechanism for developing cross-enterprise data
formats to enable industry-wide meta-data definition. It is increasingly likely that
application interfaces will be specified using XML. Inter-application interaction, both
within an organization and with external parties, is likely to use XML. Further, the
application access mechanism across widely distributed locations is likely to be XML
over HTTP, or within a message queue, rather than other communication protocols based
on CORBA or Java (i.e. IIOP or RMI). The adoption of XML for such applications is likely
to begin only after the J2EE/EJB component model itself stabilizes and merges
completely with CORBA. The widespread use of XML is a long-term trend, and it is likely to
become a standard once shared data formats evolve and are accepted.
CrossWorlds Software has generated much of the publicity for the EAI market. CrossWorlds'
cross-application integration approach focuses on business logic as well as on
connectivity to a variety of packaged applications. It offers `collaboration modules' that
include pre-packaged components of an integration application for specific business
areas. These collaboration modules represent what CrossWorlds believes are
cross-application business objects, including class models, processes and rules. Together with
these, `connectors' provide connectivity mechanisms to a variety of packaged ERP
applications. An integration application constructed using collaboration components and
connectors can isolate connectivity issues from those of the common object model. While
CrossWorlds' vision is similar to the cross-application approach outlined above, as well
as to our approach detailed in the sections below, the utility of the pre-packaged object
models is certainly an area of concern; matching these to an organization's needs could
result in yet another integration problem to be solved. CrossWorlds' `designer' is a
development environment that allows collaborations to be customized - however, at the
same time it is not a complete development environment. From the technology
perspective, CrossWorlds' `interchange server' provides a distributed run-time
environment, which can run on top of MQSeries, as well as other middleware.
Overall, the CrossWorlds suite represents a platform on which to build, since adopting their
object models directly will be difficult for most large organizations.
Constellar Corporation's product, Constellar Hub, is a data-level integration product
targeted initially at the analytical application integration market. It is now projected as an
operational integration tool as well, one that fits into the point-to-hub strategy described in the
previous section. Its traditional batch processing origins have been upgraded by
compatibility with MQSeries middleware, potentially enabling it to be used for on-line
integration needs. "Constellar Hub is a tool for complex data transformation and interface
management that provides a complete environment for developing and deploying
interfaces between heterogeneous applications." The hub acts as a central repository for
meta-data, interface mappings, transformation rules and data validation rules.
Additionally, it also stores data during the transformation process, as well as migration
scheduling and configuration information. Meta-data can be directly extracted from Oracle,
as well as from certain CASE tools. The types of data transformation supported by the
Constellar Hub can be very complex, including pre- and post-processing rules that may
call for cross-application joins, etc. On the technology side, Constellar is originally an
Oracle-based product. While its powerful features for data migration are very useful, its
overall applicability is best suited to Oracle environments.
Template Software's `Enterprise Integration Template' (EIT) product is designed for
point-to-hub or cross-application integration at the method level. The EIT consists of an
object-oriented operational in-memory data store. Surrounding this is an architecture and
methodology for using this data store for application integration. Connectors to this
operational data store are generated from CORBA IDL specifications, database schemas,
or using TIDE, Template's development tool. The operational data store/model of EIT
(called OOM) itself offers APIs for a variety of application languages, including C++ and
Java. In addition to a development environment for the operational object model, TIDE
includes the SNAP GUI development language. Template Software's offerings are
attractive for very low latency requirements, where the in-memory data store becomes a
winning feature. The architecture is also sound and roughly corresponds with the
approach outlined in this paper below. The absence of pre-defined connectors is a weak
point, with the value addition from its proprietary interface generation tools being
questionable as compared to standard distributed object technology, i.e. CORBA, Java,
or DCOM. Finally, the development environment is still not a complete end-to-end
development tool, i.e. it lacks a modeling tool, etc. Summarizing, EIT's utility lies in its
operational object model - specifically the in-memory data caching feature. It should be
incorporated into an overall EAI architecture when this aspect needs to be exploited.
Candle Corporation's Roma product is being projected for many application integration
needs. In fact, Roma is essentially a generalized middleware API. It provides a directory
service as well as a runtime manager that can interface to different underlying
messaging middleware, for example IBM MQSeries and Microsoft MSMQ, or the CICS and
Tuxedo transaction processing monitors. The Roma environment is therefore primarily a
framework for deploying distributed application components. Together with this core
technology, Roma offers some pre-packaged messaging components, for example to
connect to SWIFT, etc. It also provides some development tools to generate Roma
program interfaces, using APIs, from specifications. To summarize, Roma is a middleware
wrapper technology, and does not represent a complete enterprise application integration
platform. It is, however, a credible tool due to its focus on a specific problem, i.e.
middleware inter-operability, and due to Candle's historical strength in the networking and
systems software segment.
SuperNova Corporation's approach to application integration is different from that of a
traditional product vendor. It recognizes that a single product is unlikely to solve an
organization's integration needs. Rather, it offers a suite of connectors to legacy systems
and visual tools for defining integration strategies. The 'Universal Integration Engine'
from SuperNova includes a 'process integrator', a 'data integrator', and a 'component
developer', which together can be used to define an integration strategy and assist in the
process of developing the elements of the integration application. SuperNova does not
claim to generate a complete integration application, only to define the framework. In
addition, there are integration components that are run-time connectors: to SAP R/3,
Informix GUI and database, etc. We have included SuperNova in this survey because it
takes a pragmatic approach to integration and provides a few well-defined tools. It needs
to be complemented with an architecture and development methodology, in the manner
we describe later in this paper.
The IBM MQSeries Integrator product, which incorporates technology from NEON
Software, is a toolkit that provides many of the technical components needed to
implement enterprise-wide application integration. IBM MQSeries Integrator includes the
IBM MQSeries queuing system, together with NEONRules and NEONFormatter from NEON
Corporation. IBM MQSeries Integrator manages message transformation between
applications. NEONFormatter adds a dynamic formatting mechanism that can be
integrated with intelligent message routing and reliable message delivery, greatly
simplifying message exchange across the enterprise. Each sender and receiver sees only
its own send and receive formats, while IBM MQSeries Integrator accomplishes the
transformation. NEONRules adds rules-processing capabilities within the IBM MQSeries
Integrator. A rule may consist of arguments, operators, and values or comparison
columns used to test message contents. Each operation tests the existence, or the value,
of a message column or field. Using these rules and the data formatting capabilities,
complex strategies for message routing and integration can be implemented between
applications. IBM MQSeries Integrator is rapidly emerging as one of the most popular
platforms onto which EAI solutions will be delivered in the future. At the same time, it is
not a complete architecture or development environment; rather, it offers sophisticated
messaging, data formatting, and rules-processing support that needs to be incorporated
into an overall architectural approach, such as that outlined in the remainder of this
paper.
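The rule structure just described (arguments, operators, and values tested against message fields, with matching messages routed to destinations) can be sketched in a few lines. This is an illustrative toy, not NEONRules' actual API; the operator names, field names, and queue names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy sketch of rules-based message routing in the spirit of NEONRules.
// A rule tests one message field with an operator ("EXISTS" or "EQUALS")
// and names the destination to which matching messages should be routed.
public class RuleRouter {

    static class Rule {
        final String field, op, value, destination;
        Rule(String field, String op, String value, String destination) {
            this.field = field; this.op = op;
            this.value = value; this.destination = destination;
        }
        boolean matches(Map<String, String> message) {
            String actual = message.get(field);
            if ("EXISTS".equals(op)) return actual != null;   // field present?
            if ("EQUALS".equals(op)) return value != null && value.equals(actual);
            return false;                                     // unknown operator
        }
    }

    private final List<Rule> rules = new ArrayList<Rule>();

    public void addRule(String field, String op, String value, String dest) {
        rules.add(new Rule(field, op, value, dest));
    }

    // Every destination whose rule matches the message content.
    public List<String> route(Map<String, String> message) {
        List<String> destinations = new ArrayList<String>();
        for (Rule r : rules)
            if (r.matches(message)) destinations.add(r.destination);
        return destinations;
    }
}
```

A production rules engine adds many operators, boolean combinations of conditions, and format-aware field extraction; the routing decision itself, however, reduces to evaluating such predicates per message.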
2.6 Software Architecture, Development Methodology and EAI
The task of defining the software architecture at an enterprise level is rapidly emerging
as one of the most crucial exercises for an organization. The variety of available
technologies, together with rapid changes in technology, business, and volumes, requires
a sound architectural approach to ensure that the enterprise's IT strategy is consistent
and follows a well-defined roadmap. At the same time, it is becoming increasingly clear
that it is impossible to predict change. As a result, we will always have legacy
applications that need to be adapted or migrated, and we will always be writing fresh
applications that will need to be integrated with each other and with legacy systems.
Thus, the task of defining software architecture at the enterprise level is more that of
defining a framework for change and integration than a design that is cast in stone.
Application integration strategies form the core of software architecture definition for the
enterprise. Architecture definition for individual applications within this framework is then
guided by the decisions made at the enterprise level, to ensure that integration is
achievable, for immediate as well as future needs. The decisions to be made at the
enterprise architecture level include deciding on the integration strategy, as well as the
integration levels. Integration technology and products are then selected based on these
decisions, in a consistent manner. Together with the integration approach, the enterprise
software architecture also defines the roadmap for evolution, as to how application
components can be migrated and replaced, and what core features will be adhered to by
fresh applications that may be developed in the future.
We have also seen in the discussion above that integration strategies must usually go
beyond the problem of connecting two applications. The point-to-hub and cross-
application approaches each involve the development of a common interface model. In
the case of point-to-hub, a messaging application is developed that can route integration
flows in the correct manner after transforming to and from the format defined by the
common interface model. In the case of cross-application integration, a separate
integration application is created that controls other applications in order to integrate
them. Thus, we see that application integration requires the adoption of a development
methodology specifically for these cases. Development tools supporting object-oriented
distributed systems need to be incorporated in any EAI approach, and the strengths or
weaknesses of the development environments will have a direct bearing on the
enterprise application integration effort.
In the next section, we describe an approach to application integration that uses selected
architectural design patterns and a development environment developed by TCS.
3 Component Interface Object Model Approach To EAI
3.1 Overall Approach
We outline here an approach to application integration that can be applied to both
'point-to-hub' and 'cross-application' strategies. An integration application is developed;
in the case of a point-to-hub strategy, this application plays a messaging role, directing
data flows to the appropriate destinations. In the cross-application case, the integration
application additionally supports a user interface and, optionally, data replication within
the integration application, for added efficiency. The integration application is developed
using 'MasterCraft', TCS's object-oriented development environment.
A key component of the integration application's architecture is the 'Parallel Replication
System' (PRS), an object-aware distributed middleware layer that transparently manages
the routing of object methods to one or more targets. The PRS layer enables transparent
object-dependent replication and distribution, and shields the integration application
from the underlying distributed systems technology, such as message queues or CORBA
ORBs.
The overall architecture has been implemented on the Tuxedo TP middleware. However,
the design patterns and approach are general. The development tools and PRS layer are
therefore easily deployable on a different platform and can integrate with a variety of
technologies.
3.2 MasterCraft Development Environment
The MasterCraft development environment is an integrated suite of software tools for the
development of medium to large-scale multi-tier applications. MasterCraft helps to
organize and manage software development systematically, addressing the needs of
developing generic as well as custom products. The MasterCraft technology has been
proven in the development of a variety of demanding applications that are operational in
several countries around the world.
MasterCraft provides an integrated, object-oriented suite of tools for building the client
and server components of an application. MasterCraft supports a component-based,
repository-driven development process and has an object modeling tool, a GUI modeler,
and an object-oriented specification language. A set of application generators and class
libraries aid in the development of large-scale, distributed applications.
MasterCraft can generate code to support data handling, business services, and
middleware interfaces, as well as to implement user interfaces. It has a separate rule-
based engine for the specification of business rules. A MasterCraft-generated
implementation is an object-oriented, component-based solution to a business problem.
3.3 PRS Framework for EAI
The 'Parallel Replication Server' is a C++ framework within which the distribution design
patterns for managing a variety of object distribution scenarios can be implemented
transparently. The framework consists of run-time components implementing distributed
data access, as well as a development framework consisting of classes and methods,
using which application components are developed.
The application architecture assumed is a multi-tier application with GUI clients, object
servers, and relational database layers. The currently available PRS v1.1 runs under the
Tuxedo transaction processing system; thus, applications which use PRS v1.1 need to be
Tuxedo applications. PRS v1.2 will support general CORBA/EJB-based applications as
well.
Client applications invoke interface methods published by application components on the
server. The interface methods are defined on, or instantiate, 'persistent objects' and
'query objects' (types provided by PRS). Application methods act on these objects to
manipulate persistent data. The PRS layer intercepts these data access methods to
resolve data object location and ownership, ensure propagation of updates to replicas,
and execute distributed processing of queries, where required.
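The interception idea above can be sketched as follows. This is an illustrative toy, not the actual PRS API: data-access methods on a persistent class pass through a replication layer, which looks up the registered target connectors for that class and forwards the call to each replica. All class and method names are invented.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch (not the actual PRS API) of method interception:
// data-access calls on a persistent class are routed through a
// replication layer, which resolves the registered target connectors
// for that class and forwards the call to each replica.
public class ReplicationLayer {

    // A connector wraps the interface to one target application.
    public interface Connector {
        void apply(String method, String objectId, String payload);
    }

    private final Map<String, List<Connector>> targetsByClass =
            new HashMap<String, List<Connector>>();

    public void register(String className, Connector target) {
        if (!targetsByClass.containsKey(className))
            targetsByClass.put(className, new ArrayList<Connector>());
        targetsByClass.get(className).add(target);
    }

    // The intercepted call: forward the method to every registered replica.
    public int invoke(String className, String method, String objectId, String payload) {
        List<Connector> targets = targetsByClass.containsKey(className)
                ? targetsByClass.get(className) : Collections.<Connector>emptyList();
        for (Connector c : targets) c.apply(method, objectId, payload);
        return targets.size();   // how many replicas were reached
    }
}
```

The real layer additionally resolves ownership, chooses synchronous versus queued delivery, and distributes queries; the essential pattern is that application code never names the targets directly.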
3.4 Methodology
The cornerstone of the integration application is the common interface object model. This
defines the model into which applications read and write interface messages, as well as
the model for which the user interface provides a view (in the case of cross-application
integration). The approach used to arrive at this model defines the EAI development
methodology. The underlying technology and architecture is the other dimension, which
is instantiated via the target platforms specified to the MasterCraft development tools
and the PRS middleware.
The methodology used to arrive at an interface object model is best illustrated with an
example. We will use this same example to illustrate the architecture, in the next section.
Consider two applications: (i) a legacy application for payments processing, running on a
mainframe system, and (ii) a CRM application based on a 2-tier client-server architecture.
Let us focus on one particular entity, customer, which has representations (let us call
them 'replicas' for convenience) in each of these applications. The common interface
object model representation of the customer is arrived at as follows:
In the case of a point-to-hub strategy, the common model will have a customer class,
each of whose attributes can be mapped to both replicas. In the case of three or more
applications being integrated, each attribute must map to two or more replicas of the
customer class. Thus, only those attributes are included which are involved in some
inter-application communication. An attribute that is used only in a single application is
not included in the interface object model. As a result, it is possible that the interface
object model may be considerably smaller than each individual data model of the target
applications. (On the other hand, it may not.)
In the case of cross-application integration, in addition to the attributes above, those
attributes that are required in the user interface and are not already covered above will
also be included in the interface object model of the integration application. Thus, this
object model could be much larger than in the case of point-to-hub.
The integration application is responsible for providing methods that manage the
mapping of the customer class in the interface object model to its corresponding
replicas. These methods handle simple issues such as field sizes and formats, as well as
complex issues having to do with keys. A simple issue could be that the customer
address is represented by three fields in one application and by two in another. The
mapping and possible truncation of the address into one of the replicas is an example of
a mapping task that a <replica>-customer-update method would take care of, in the
integration application. A more complex issue arises when customer identification is
achieved via different sequences in different replicas. The integration application will
usually map all customer identifiers to a common customer id; often this will be identical
to the identifier in one of the replicas, with appropriate mappings for the others. For such
a mapping, the integration application may implement a pre-defined algorithm, perform
a query on the target application to retrieve matching records, or maintain mappings
between identifiers in its own persistent storage. It could also use a combination of all of
the above, customized according to which replica is being considered. These mapping
methods will be deployed and triggered via the architectural framework defined by the
PRS middleware, in the manner described in the next section on the EAI architecture.
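Both mapping issues above can be made concrete with a toy <replica>-customer-update method: it folds a three-field address into two fields with truncation, and translates the common customer id into a replica-specific identifier via a maintained mapping table. All field names, field sizes, and identifiers here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Toy <replica>-customer-update mapping in the spirit described above.
// It handles a simple field-size issue (three address lines folded into
// two fixed-width fields) and a key issue (common customer id mapped to
// the replica's own identifier).
public class CustomerMapper {

    private final Map<String, String> idMap = new HashMap<String, String>();

    // Record that common id 'commonId' corresponds to 'replicaId' in the target.
    public void mapId(String commonId, String replicaId) {
        idMap.put(commonId, replicaId);
    }

    public Map<String, String> toReplica(String commonId, String[] addressLines, int fieldSize) {
        Map<String, String> record = new HashMap<String, String>();
        // Default when no mapping is maintained: the ids coincide.
        String replicaId = idMap.containsKey(commonId) ? idMap.get(commonId) : commonId;
        record.put("CUST_ID", replicaId);
        // Fold three source address lines into the replica's two fields.
        record.put("ADDR_1", truncate(addressLines[0], fieldSize));
        record.put("ADDR_2", truncate(addressLines[1] + " " + addressLines[2], fieldSize));
        return record;
    }

    static String truncate(String s, int max) {
        return s.length() <= max ? s : s.substring(0, max);
    }
}
```

The in-memory id map stands in for whichever key-resolution mechanism is chosen: a pre-defined algorithm, a query against the target application, or persistent mapping storage.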
In the case of a cross-application integration strategy, the integration application
additionally needs to provide a common user interface; the interface object model will
therefore include additional classes for this, as required. Finally, our example has
considered only one class; in practice, there will be a large number of such classes. The
integration application will be organized into components, each of which owns certain
classes and provides interface services to other components as well as to user interface
elements. Therefore, our approach is called a 'component interface object model'
approach. The object model is componentized; at the same time, it is for an integration
application rather than one that delivers functionality by itself. This approach opens up
the possibility of developing fresh components that provide the same interfaces as those
of the integration application's components, and of replacing legacy systems by replacing
these components. Note that this approach is not as straightforward as it sounds, since
unless a legacy application is completely replaced, it cannot be switched off. Until then,
any replicas it has of the classes in the new component must be maintained in
synchrony. Our EAI approach naturally handles this problem: the new application
component is simply treated as another application to be integrated. The legacy
application now merely receives data updates from the integration framework, rather
than calls and inquiries coming directly from the user interface, for the functionality
taken over by the fresh component. For other functionality, it is accessed as before,
either through its own interface (point-to-hub strategy) or through the common interface
(cross-application strategy).
3.5 Architecture
The 'Common Interface Object Model' EAI architecture is illustrated in Fig. 4. The core
integration application instantiates the common object model's class structure,
constructed using the methodology described in the previous section. This class structure
is in-memory. However, instances of classes in this model may or may not be maintained
in memory beyond a transaction's scope of execution. In the simplest case, actual
objects, i.e. instances of classes, are created when a transaction begins, populated using
interfaces to the target applications, and stored back into these applications in a
synchronous or asynchronous manner when the transaction is complete. Thus, the
integration application executes the basic manipulation methods on its classes, such as
get(), create(), modify() and delete(), through interfaces with target applications. More
complex situations may involve selected objects being maintained within the integration
application, either as replicas on its own database or as in-memory copies. Further,
other methods, apart from the 'standard' manipulation methods mentioned above, may
be implemented in the integration application, especially in the case of cross-application
integration, when a user interface needs to be supported.
The management of multiple replicas of an object in different target applications is done
by the 'Parallel Replication System', or PRS, layer. Each class's manipulation methods,
i.e. get(), create(), modify(), and delete(), as well as any others identified in the
integration application, are registered with PRS. When the integration application is built
with the PRS layer, each data manipulation method first goes through PRS, which decides
the target applications on which it needs to be applied. This decision process can be
based upon the class, as well as the individual object instance; for example, 'individual'
customer updates could be routed to one application, while 'corporate' customer updates
are routed to another, or both, target applications. PRS also maintains information about
the nature of the interface to the target application, through its 'service routing' feature.
In effect, for each data manipulation method call, PRS invokes the appropriate
<replica>-<method>() calls, in the appropriate operating system processes /
environments, in a synchronous, asynchronous, or queued manner as prescribed by the
setup. These calls themselves are written using the interfaces of the target application.
Where appropriate, they may use connectors and other EAI products to manage data
transformation into and out of the target application.
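The object-dependent routing decision mentioned above (individual versus corporate customer updates going to different targets) can be sketched as follows. The application names and the routing table are invented for the example; a real configuration would be driven by the PRS setup rather than hard-coded.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch of PRS-style object-dependent routing: the same update method
// on the customer class is sent to different target applications
// depending on an attribute of the individual object instance.
public class ObjectRouter {

    public static List<String> targetsFor(Map<String, String> customer) {
        String type = customer.get("type");
        if ("individual".equals(type))
            return Arrays.asList("RetailApp");
        if ("corporate".equals(type))
            return Arrays.asList("CorporateApp", "PaymentsApp");
        // Unknown type: broadcast conservatively to every target.
        return Arrays.asList("RetailApp", "CorporateApp", "PaymentsApp");
    }
}
```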
The reverse flow, i.e. when a method call in a target application needs to be
communicated to the integration application, is implemented through message queues.
The interface between the target application and the message queue needs to be
developed in the manner appropriate for the level of integration being used for that
application. The message queue triggers the data manipulation call (which essentially
signals an 'event') in the integration application, which passes through PRS as before,
to determine the destination(s) of each message as described above.
An important concept implemented in PRS is that of data ownership. Only one of the
many replicas of an object can 'own' the object. The owner is determined dynamically at
runtime from the class, or from individual object details. Any update sent by a
'non-owner' application needs first to be executed successfully in the owner application
before being propagated to other applications, including the originating application.
Similarly, any enquiry on an object is answered from the owner application, as opposed
to any arbitrary replica. The ownership concept, while essential for correct behaviour in
any case, also facilitates smooth migration and takeover; for example, a legacy
application could be the 'owner' of all its data until a freshly developed component takes
over some of the functionality, which may include selected classes only. Transferring the
ownership of these objects through their configuration in PRS results in the new
component automatically becoming the primary source for answering inquiries on these
classes, with updates still being propagated to the legacy system to maintain its internal
consistency.
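The ownership discipline can be sketched as follows: an update is applied at the owner first and only then propagated to every non-owner replica (including the originator), enquiries are answered from the owner, and transferring ownership redirects enquiries to a new component while the old one keeps receiving propagated updates. This is an illustrative toy with invented replica names, not the PRS implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the data-ownership discipline: exactly one replica owns an
// object; updates hit the owner first and are then propagated to all
// non-owner replicas; enquiries are answered from the owner.
public class OwnershipManager {

    private String owner;
    private final List<String> replicas;

    public OwnershipManager(String owner, List<String> replicas) {
        this.owner = owner;
        this.replicas = replicas;
    }

    // Migration/takeover: reconfigure which replica owns the object.
    public void transferOwnership(String newOwner) { owner = newOwner; }

    public String answerQueryFrom() { return owner; }

    // Returns the ordered list of apply/propagate steps for one update.
    public List<String> update(String change) {
        List<String> steps = new ArrayList<String>();
        steps.add("apply@" + owner + ":" + change);            // owner first
        for (String replica : replicas)
            if (!replica.equals(owner))
                steps.add("propagate@" + replica + ":" + change);
        return steps;
    }
}
```

Note how takeover needs nothing beyond `transferOwnership`: the update and enquiry paths automatically start treating the new component as primary, while the legacy replica continues to receive propagated updates.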
3.6 Technology
The EAI architecture described above does not explicitly mention the underlying
technology. Currently, this architecture is implemented under the Tuxedo OLTP monitor.
However, its structure and design can easily be adapted to another platform, such as
those in Section 2.5. Further, the EAI architecture can incorporate appropriate connectors
to legacy systems and other target applications, such as those included in many of the
products mentioned in the survey in Section 2.5. Such connectors would be used while
developing the <replica>-<method>() methods used to communicate with the target
applications; these methods are then invoked by the PRS layer in the EAI architecture,
as described above.
4 Process Integration
A software system instantiates aspects of an underlying business process. Changes in the
underlying process result in the need to change the behaviour of the software system.
The pace of change in recent years has increased dramatically, due to increases in the
rate of change of business, increases in transaction and data volumes, and improvements
in technology. Software systems also need to change, as and when required, in order to
keep pace with these continuous environmental and organizational changes.
A highly desirable goal of an enterprise software architecture is that it should be flexible,
i.e. the software systems built on such an architecture should be able to adapt over time,
and keep pace with any required restructuring and extension of the business processes
they instantiate. Implementing these changes in the software should, ideally, not require
any fresh software development, and in general the amount of development, and
especially testing, should be minimized. Thus, changes in one part of the process should
not require testing of other, unrelated parts. (The process of change needs also to
indicate the areas where testing is required.) Further, the changes should become visible
in the software rapidly, i.e. the change cycle should be as short as possible.
Fig. 4: The 'CIOM' (Common Interface Object Model) EAI architecture: an integration
application built on a component interface object model over the Parallel Replication
framework for EAI, with a common user interface (for the cross-application case),
selected replicated interface data, application-to-PRS interfaces, and message queues.
Operational enterprise application integration, as described in this paper, addresses some
of the issues involved in process-level flexibility. Given a number of applications, each
addressing departmental concerns, the EAI approach outlined here allows these
applications to function, in some sense, as a single application for the enterprise. Data
flows between applications, maintenance of consistency across data replicas, the
possibility of a common user interface for the enterprise, and, last but not least, the
common object model itself all serve to unify the view of these multifarious applications
into one enterprise application that instantiates the business process of the enterprise.
At the same time, the issue of configuring the overall enterprise business process at a
higher level has not been addressed. Once successful application integration has been
achieved, this is the next logical step. The goal should be to establish a process
architecture that is merely a framework, within which the business process is
instantiated, in the same manner that the integration application is a framework in which
the individual target applications reside. The goal should be to move towards a situation
where adjustments to the process are rendered efficiently, on-line, and at run-time. Such
adjustments can include the modification of the underlying data schema, changes to user
interactions, introduction or deletion of user interactions, and the introduction of
additional processing within the existing process, through new applications or
enhancements to existing ones.
Such an architecture would include a process model, and design patterns based on this
model, that enable it to provide a high level of flexibility in process definition and the
efficient instantiation of these designs. This process model would lie orthogonal to the
integration application and would control which methods are called in response to inputs,
just as the PRS layer controls where and how these methods are executed. This vision is
depicted in Figure 5 below.
Defining such an approach to process-level integration is an active area of research in
TCS. Prototypes are being developed to define a flexible process architecture and process
model that can achieve this vision.
Fig. 5: Process integration: applications A, B, C, and D unified by an integration
application built on a common interface object model and a common user interface,
coordinated by a higher-level process model.
5 Conclusions
Enterprise application integration is a pressing issue in most large organizations.
Integration at the operational level has been distinguished from that at the analytical
level, which can be addressed through data warehousing. Solving the operational EAI
puzzle requires a long-term software architectural view. In this paper we have described
the issues in operational EAI, the products and technologies available, as well as our
common interface object model approach to EAI. This EAI architecture is usable with a
variety of technologies and provides a framework in which selected EAI products can be
utilized to enable inter-application connectivity.
Achieving a suitable level of integration across an enterprise's application suite is a
challenging task. Once this is achieved, using the suggested approach and architecture,
the next step is process integration, wherein the business process of the organization is
made flexible to ever greater degrees through the introduction of a process model-based
architecture. This vision has been outlined, and it is suggested that an organization's
long-term view must include this in its architectural plans, as the next step beyond EAI.
Finally, this paper has tried to emphasize that technologies and products are merely
enablers on the road towards integration. Approach and architecture are far more
important to get right; the rest can be replaced as technologies evolve.