
SEMINAR REPORT
ON
Service-Oriented Architecture

SUBMITTED BY: Namrata Mahapatra
140704ITR015
MSc IT, 4th Sem

Address: Centurion University of Technology & Management,
Bhubaneswar, Ramachandrapur, Khordha, Jatni, 752050
Service-Oriented Architecture
INTRODUCTION
Service-oriented architecture (SOA) provides a means of integrating disparate applications within an enterprise, improving reuse of application logic while eliminating duplication of production environments. An SOA avoids silos of disconnected information within the enterprise that make it difficult to service customers, meet production demands, and manage large volumes of information. Developing an SOA that guarantees service performance, scalable throughput, high availability, and reliability is both a critical imperative and a huge challenge for today's large enterprises.
The increasing rate of change in the modern business environment demands greater agility in an organization's technology infrastructure, which has a direct impact on data management. SOA offers the promise of less interdependence between projects and, thus, greater responsiveness to business challenges. But it also raises many questions for enterprise architects:
• How will data access services be affected by the increasing number of services and applications that depend on them?
• How can I ensure that my services don't fail when underlying services fail?
• What happens when the database server reaches full capacity? And how can I ensure the availability of reliable data services even when the database becomes unavailable?
When choosing an SOA strategy, corporations must rely on solutions that ensure data availability, reliability, performance, and scalability. They must also avoid "weak link" vulnerabilities that can sabotage SOA strategies.
A data grid infrastructure, built with clustered caching, addresses these concerns. It provides a framework for improved data access that can create a competitive edge, improve the financial performance of corporations, and sustain customer loyalty. This paper looks at the challenges of selecting an SOA strategy, how an SOA can improve data availability and reliability, and how clustered caching can improve SOA performance and ensure scalability for very large-scale transaction volumes.
SOA CHALLENGES
The following should be taken
into consideration when selecting
an SOA strategy:
The Structure of an SOA Environment
In an SOA environment, there are
several types of components to
consider. In order of increasing
consolidation, these can be
grouped into data services,
business services, and business
processes. Data services provide
consolidated access to data.
Business services contain
business logic for specific, well-
defined tasks and perform
business transactions via data
services. Business processes
coordinate multiple business
services within the context of a
workflow.
Figure 1: SOA environments typically comprise three types of components: data services, business services, and business processes.

Data within an SOA generally falls into one of two categories:
• Conversational state – The
conversational state is
managed by business
services and processes and
corresponds to currently
executing operations,
processes, sessions, and
workflows.
• Persistent data – Persistent
data is managed by data
services and is usually stored
in databases.
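
As a rough sketch of this layering, the hypothetical Java interfaces and classes below are illustrative only and are not taken from the paper:

    // Illustrative domain type.
    record Customer(String id, String name) {}

    // Data service: consolidated access to persistent data.
    interface CustomerDataService {
        Customer findCustomer(String customerId);
    }

    // Business service: business logic for one well-defined task,
    // performing its work through the data service.
    class BillingService {
        private final CustomerDataService customers;
        BillingService(CustomerDataService customers) { this.customers = customers; }

        String createInvoice(String customerId, double amount) {
            Customer c = customers.findCustomer(customerId);
            return "Invoice for " + c.name() + ": " + amount;
        }
    }

    // Business process: coordinates business services within a workflow.
    class OrderProcess {
        private final BillingService billing;
        OrderProcess(BillingService billing) { this.billing = billing; }

        void placeOrder(String customerId, double total) {
            System.out.println(billing.createInvoice(customerId, total));
            // further business services (shipping, notification, ...) would follow here
        }
    }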

Consolidation of Data
Services Raises Scale and
Performance Issues
The role of data services is to
provide access to enterprise
data by expressing the data in
terms of the business without
requiring any external
knowledge of how the data is
actually managed. The value of
data services lies in the
consolidation that
they bring, allowing centralized
control of data without the
proliferation of data silos
throughout the enterprise.
Unfortunately, this centralization
also brings significant scalability
and performance challenges.
Scalability issues arise when
many business services depend
on a single data service,
overwhelming back-end
datasources. Performance issues
result directly from scalability
limitations, because poorly
scaling data services will
become bottlenecks and
requests to those services will
queue. Performance is also influenced significantly by the granularity of an SOA data service, which often provides either too little or too much data. Data services built around a specific use case will provide too much data for simpler use cases, and more-complex use cases will need more data, resulting in more service invocations. In either case, performance will be affected, and with application service level agreement (SLA) requirements moving toward response times measured in milliseconds, every data service request can represent a significant portion of the application response time.

Reliability and Availability Can Be Compromised by Complex Workflows
Reliability and availability may also be affected. As business services are integrated into increasingly complex workflows, the added dependencies decrease availability. If a business process depends on several services, the availability of the process is actually the product of the availabilities of all the composed services. For example, if a business process depends on six services, each of which achieves 99 percent uptime, the business process itself will have a maximum of 94 percent uptime (0.99^6 ≈ 0.94), meaning more than 500 hours of unplanned downtime each year.

SOA Environments Differ from Traditional User-Centric Applications
Conversational state, such as the hypertext transfer protocol (HTTP) session state utilized by Web services, is often short-lived, rapidly modified, and repeatedly used. The life span of the data may be a matter of seconds, spanning a dozen requests, each of which may need to read or update the data. Moving from traditional user-centric applications to an SOA environment means that, in addition to users, machines are now accessing services, at machine speed. This means that the "user count" increases dramatically while the average "think time" decreases to almost nothing, causing the maximum sustained request rate to far exceed the original specification. The result is that technologies that were capable of handling traditional user loads are almost inevitably crushed by the increased load associated with an SOA deployment.
Ensuring the reliability and integrity of conversational state is critical, but its rapid churn rate and transient nature make it particularly difficult to manage by traditional means. Using database servers is the traditional solution for scalable data services, but they cannot cost-effectively meet the throughput and latency requirements of modern large-scale SOA environments. Most in-memory solutions depend on compromises such as queued (asynchronous) updates, master/slave high-availability (HA) solutions, and static partitioning to hide scalability issues, all at the cost of substantially reduced reliability and scalability. Most SOA vendors go as far as to strongly recommend avoiding stateful services if at all possible, due to these scaling and performance challenges.

DATA RELIABILITY AND AVAILABILITY IN AN SOA
The stakes for data reliability
and availability in mission-
critical environments are high:
crucial business decisions,
financial results, customer
satisfaction, employee
productivity, and a company’s
reputation all depend on it.
SOA Demands High Data
Reliability and Availability
Making sure that services have a
consistent, coherent view of
data is critical to ensure
reliability and availability.
Transactionally consistent data
services are essential for
scalable, reliable data processing. Products used to manage data must have data integrity "in their genes," supporting both optimistic and pessimistic transactions, synchronous server redundancy, and reliable configurations. Data-management products for SOA must prioritize availability and reliability over features, because SOA adoption results in enterprise systems that are more prone to outage as the number of service dependencies increases. This is the natural consequence of compositional complexity and represents an engineering trade-off resulting from the elimination of application silos. This risk becomes further pronounced as systems are consolidated, because service interruptions will have an increasingly greater impact on the organization.

Eliminating Single Points of Failure
SOA introduces a set of new challenges to the continuous availability of complex systems, but the solutions for both service and system availability are well understood and proven. Service availability requires the elimination of all single points of failure (SPOFs) within a given service and the insulation, to the maximum extent possible, against failures in the service's natural dependencies. System availability requires similar insulation from the failure of services on which the system depends.
When architecting a service for high availability, it is necessary to ensure that the service host itself is highly available. Clustering is accepted as the standard approach to increasing availability, but in a traditional clustered architecture, adding servers to a cluster will decrease its reliability even as it increases its availability. There are several reasons for this, including the likely interruption of service during failover and failback and the increased incidence of server failures in direct proportion to the total number of servers.

Static Partitioning Does Not Increase Data Availability
To achieve scalability, other solutions use static partitioning across a collection of primary servers, each with its own dedicated backup server to ensure availability, but this model is fundamentally crippled:
• Static partitioning makes the
service unable to dynamically
increase capacity, meaning
that it cannot participate in a
capacity-on-demand
architecture.
• Static partitioning requires
massive overprovisioning to
prevent peak loads from
overwhelming the service.
• Reliance on dedicated backup
servers means that the cluster
heals much more slowly—or
may not heal at all—when a
primary server dies and thus
increases the window of
opportunity for catastrophic
data loss by allowing an SPOF
to remain within a production
environment.
• Static partitioning with
dedicated backup tends to
make failback processing
much more difficult, if not
impossible.
• Using dedicated backups for each of the primary servers can significantly increase infrastructure costs and double the required number of servers by employing an N+N availability strategy instead of an N+1 strategy.

Clustered Caching Ensures Reliability and Availability
Oracle Coherence is a trusted in-memory data management solution for ensuring reliability and high availability for Java-based service hosts, such as Java Platform, Enterprise Edition (Java EE) application servers. It makes sharing and managing data in a cluster as simple as on a single server. It accomplishes this by coordinating updates to the data by using clusterwide concurrency control, replicating and distributing data modifications across the cluster by using the highest-performing clustered protocol available, and delivering notifications of data modifications to any servers that request them.
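
A minimal sketch of a server registering for such data-modification notifications, assuming the classic com.tangosol NamedCache and MapListener API (the cache name and printed messages are illustrative):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;

    public class OrderCacheObserver {
        public static void main(String[] args) {
            // Join (or start) the cluster and obtain a named cache.
            NamedCache orders = CacheFactory.getCache("orders");

            // Ask the grid to deliver data-modification notifications to this member.
            orders.addMapListener(new MapListener() {
                public void entryInserted(MapEvent e) { System.out.println("inserted: " + e.getKey()); }
                public void entryUpdated(MapEvent e)  { System.out.println("updated: " + e.getKey()); }
                public void entryDeleted(MapEvent e)  { System.out.println("deleted: " + e.getKey()); }
            });

            // Any cluster member can now modify the cache; registered listeners are notified.
            orders.put("order-1", "NEW");
        }
    }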
Oracle Coherence, which provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol, has no SPOFs. It automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added or when a failed server is restarted, it automatically joins the cluster and Oracle Coherence fails services back to it, transparently redistributing the cluster load. Oracle Coherence includes network-level fault-tolerance features and transparent soft-restart capabilities to enable servers to self-heal.
Figure 2: Caching is used to decouple components, yielding increased performance, throughput, and reliability.

Without Oracle Coherence, the servers and the service processes that run on those servers
each represent an SPOF. With Oracle
Coherence, a service is composed as an
aggregation of all of those service processes on
all those servers, achieving resiliency by
redundancy. A well-designed service can
survive a machine failure without any impact
on any of the service clients, because Oracle
Coherence provides continuous service
availability, even when servers die. When
architected with Oracle Coherence, even a
stateful service will survive server failure
without any impact on the availability of the
service, without any loss of data, and without
missing any transactions. Oracle Coherence
provides a fully reliable in-memory data store
for the service, transparently managing server
faults, and making it unnecessary for the
service logic to deal with complicated leasing
and retry algorithms.
The Oracle Coherence dynamic mesh
architecture increases both reliability and
availability, by making failover and failback
nearly instantaneous. Oracle Coherence
illustrates the difference between simple high
availability and true fault tolerance. Moreover,
Oracle Coherence supports dynamic capacity
on demand by expanding its resilient data
fabric to incorporate additional servers as soon
as they come online.
State Management Through
Virtualization
Fully stateless services (such
as static-content HTTP
servers) are very easy to
manage for high availability,
but very few services are
actually stateless. Many
services manage
conversational state, and even
those that do not will usually
manage some state, such as
caches, internally.
The key to achieving continuous
availability and full reliability is to
implement stateful services as if
they were stateless by
delegating all service state
management to Oracle
Coherence. If the service
implementation is stateless,
a server failure cannot cause any loss of state, enabling
another server to perform the
necessary service request on
behalf of the failed server.
Oracle Coherence provides the
resilient and reliable state
management on which these
services are built, with true
server location transparency and
system fault tolerance. It
manages the service state in a
manner that completely and
dynamically eliminates SPOFs
and single points of bottleneck
(SPOBs) and fully virtualizes the
service state across any number
of servers.
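
A minimal sketch of this pattern, assuming the classic com.tangosol NamedCache API; the service, cache name, and state class below are illustrative:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.io.Serializable;
    import java.util.UUID;

    // A service whose implementation holds no local state: conversational state
    // lives in the data grid, so any server can handle any request.
    public class QuoteService {
        private final NamedCache conversations = CacheFactory.getCache("quote-conversations");

        public String startConversation() {
            String token = UUID.randomUUID().toString();
            conversations.put(token, new QuoteState());
            return token;
        }

        public void addItem(String token, String item) {
            QuoteState state = (QuoteState) conversations.get(token);
            state.items += item + ";";
            conversations.put(token, state); // write the updated state back to the grid
        }

        public static class QuoteState implements Serializable {
            String items = "";
        }
    }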
Transparent Data
Partitioning Achieves
Continuous Availability and
Reliability
A major factor for service
availability is ensuring that any
service host can handle any
request at any time. Failing to
do this will diminish the ability
of the service cluster to reliably
respond to service requests
while a failure is occurring. Not
having fully transparent data
partitioning means that
• Any delays during failover or
failback will reduce reliability.
• Failures will occur during the
failover process.
• Failures will occur during
failback, if it is possible to fail
back at all.
• Each service implementation
will have to include custom
fault detection and retry logic
to recover from misdirects and
redirects.
• Rebalancing after a server
failure will be failure-prone or
impossible.
• Reliable and dynamic
expansion and contraction
of the cluster will be
impossible.
Oracle Coherence provides fully
transparent data partitioning,
and the resulting service
implementations automatically
achieve continuous availability
and reliability. No custom fault
detection or retry logic is
required.

Avoiding Distributed
Computing
A key differentiator for
clustering products is whether
they assume full responsibility
for virtualizing the cluster or
delegate the difficult
responsibilities
back to the application
developer. Support for clustering
features such as transactions,
“no lease” data access, and
clusterwide locking is critical in
enabling developers to
implement cluster-safe services
without needing to develop
custom distributed computing
algorithms for each business
process. Oracle Coherence takes
responsibility for handling server
and network failures.
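
As an illustration of clusterwide locking, the sketch below assumes the classic com.tangosol NamedCache locking API; the cache name and key are illustrative:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class AccountUpdater {
        public static void debit(String accountId, double amount) {
            NamedCache accounts = CacheFactory.getCache("accounts");

            // The lock is honored by every member of the cluster, so the service
            // needs no custom distributed-coordination or retry logic of its own.
            accounts.lock(accountId, -1); // -1 waits indefinitely for the lock
            try {
                Double balance = (Double) accounts.get(accountId);
                accounts.put(accountId, (balance == null ? 0.0 : balance) - amount);
            } finally {
                accounts.unlock(accountId);
            }
        }
    }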
Failover and Failback
With the increasing complexity
of SOA environments, automated
and transparent failover and
failback become even more
critical. Ensuring that these
transitions do not result in an
interruption of service means
that applications will be not only
highly available but also highly
reliable.
Oracle Coherence is designed for lights-out and zero-administration environments and employs self-healing network capabilities to not only survive failures but also repair them.

Insulation from Failures in Other Services
Any service that depends on other services must be able to compensate for the lack of availability of those services. This is particularly critical for services composed of other services.
The key to preventing failures in one service from affecting another is to appropriately decouple the two services. For services that support loose coupling, interposing read-through/write-behind caching between the consuming and producing services can provide an effective means of isolating reliability issues.
In many cases, it is possible for a service to use write-behind caching to continue operating even when an underlying database is unavailable. In such a configuration, Oracle Coherence will queue the transactions for the database until the database is brought online and its transaction logs are recovered. This capability is absolutely crucial for continuously available systems, because system maintenance is inevitable.

SCALABLE PERFORMANCE VITAL IN SOA
Scalability and performance challenges in a services environment are similar in many ways to those faced in traditional applications. The less distance and transformation that is required for using a piece of information, the more efficient the application will be. However, the scale of the problems has been dramatically increased. Furthermore, in addition to the traditional challenges related to conversational state and data access, there is now the added element of increased "distance" between the various services that are working together to handle each incoming request.

Enormous Loads Challenge Scalability
Enterprise systems built on an SOA face a host of challenges relating to unprecedented scale. Request volume is growing in multiple dimensions at once. There are more requests, more
parameters, and more resulting
data per request. Additionally, as
business processes become
increasingly automated, load can
increase enormously. In financial
services, the use of algorithmic
trading systems has increased
load in some cases by several
orders of magnitude, and growth
is anticipated to continue at a
breakneck pace. Retailers are
seeing similar swells from
personalization and closed-loop
analytics. The travel and
hospitality industries are working
to address the need for real-time
pricing and the increased load
generated by third-party
inventory engines. Many other
industries are experiencing
similar exponential growth in
service load.
The most fundamental challenge
for large-scale, data-intensive
systems is to provide multipoint access to shared data while preserving a true single system image (SSI). Oracle Coherence offers fully coherent caching in a clustered environment, achieving linear scalability on commodity hardware with a fixed worst-case latency. One of the major strengths of the Oracle Coherence peer-to-peer architecture is that it enables data to be pushed and/or pulled within a distributed environment, either on demand or as the data is updated. It efficiently and directly moves data to where it is needed without depending on time-based invalidation or other artificial means of synchronization. This means that services have the full benefit of instantaneous data access from any server without the possibility of accidentally obtaining out-of-date data.
The Data Grid Agent feature in Oracle Coherence ensures ultra-low transactional latency without compromising throughput or fault tolerance. The next step, platform-portable invocation and data services, is a giant leap in Oracle's Fusion Middleware strategy. Together, these capabilities make high-volume, transactionally consistent data and event streams universally available to business services throughout the enterprise.

Large Transaction Volumes Handled Without Compromise
Oracle Coherence is well known for its singular ability to handle enormous transaction volumes (300,000+ transactions per second) for conversational state without compromising read performance or fault tolerance. Although there are clustering solutions that support scale-out or high availability, Oracle Coherence remains the only viable option for applications that need to sustain intense read/write data access without resorting to non-fault-tolerant techniques such as asynchronous updates.

Deployment Flexibility Through the Data Grid
Oracle Coherence enables capacity on demand in two key steps. First, it helps move conversational state out of the application and into the Oracle Coherence Data Grid. This enables requests to be routed to any application instance without the need for manual provisioning of data. Oracle Coherence's mesh architecture also means that additional application instances can be started on the fly, without the need for manual repartitioning of data and with minimal delay, because application state is already prepared in the data grid. This compares admirably to products that depend on static partitioning and "buddy" replication for failover.
Second, the Oracle Coherence
Data Grid is designed for lights-
out management/zero
administration (LOM/ZA), which
provides the ability to expand
and contract Oracle Coherence
almost instantaneously in
response to changing demand.
The Oracle Coherence mesh
architecture becomes
increasingly nimble as the cluster
size increases, with rebalancing
occurring even faster and server
failures having smaller and
smaller impacts.
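
As a simple illustration of this elasticity (a sketch assuming the standard DefaultCacheServer launcher that ships with Coherence), adding capacity is a matter of starting another storage-enabled JVM with the same cluster configuration:

    // Each additional JVM started with the same cluster configuration joins the
    // grid and automatically takes on its share of the data.
    public class ExtraStorageNode {
        public static void main(String[] args) {
            com.tangosol.net.DefaultCacheServer.main(args);
        }
    }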
By shifting state into Oracle Coherence and using Oracle Coherence's dynamic mesh architecture to dynamically scale data management, applications can achieve near-real-time provisioning without risking loss or abortion of requests, or data integrity for conversational state. At the same time, the data grid results in a substantial increase in deployment agility.

Figure 3: In an SOA environment, state management is the responsibility of the data grid, which easily and cost-effectively scales data access far beyond what can be achieved with a database and also delivers significantly better scalable performance.

Web Services Require Scalable Performance
SOAs and Web services in general exhibit the same requirements for scalable performance as any other line-of-business or outward-facing application. In the same way that advanced Web applications manage HTTP sessions to provide conversational state on the server on behalf of the user, Web services and other
SOA infrastructures often have to
implement stateful conversations
and workflow.
In fact, many Web services
implementations simply use
HTTP sessions.
On a request-by-request basis,
the data access requirements for
Web services appear to be
significantly higher than for Web
applications, due to the nature of
Web services, in which ancillary
data is often included to
eliminate the need for
subsequent requests. In some
cases, the request volumes are
also significantly higher and
growing at a much higher rate,
largely because the service
clients are no longer humans
impeded by think time.
CONCLUSION
Clustered caching and data grid
infrastructures ensure
availability, reliability, and
scalable performance for SOA.
SOA environments are adopting
these two technologies much
more rapidly than earlier
architectures, due to their
combined value. As the
recognized market leader in
clustered caching, Oracle has
been at the forefront of making
SOA a reality, with Oracle
Coherence already powering
many of the world’s largest and
most demanding SOA
environments.
