
Grid computing

Grid computing is a term referring to the combination of computer resources from multiple administrative
domains to reach a common goal. The Grid can be thought of as a distributed system with non-interactive
workloads that involve a large number of files. What distinguishes grid computing from conventional high
performance computing systems such as cluster computing is that grids tend to be more loosely coupled,
heterogeneous, and geographically dispersed. Although a grid can be dedicated to a specialized application, it
is more common that a single grid will be used for a variety of different purposes. Grids are often constructed
with the aid of general-purpose grid software libraries known as middleware.

Grid size can vary considerably. Grids are a form of distributed computing whereby a “super
virtual computer” is composed of many networked, loosely coupled computers acting together to perform very
large tasks. “Distributed” or “grid” computing in general is a special type of parallel computing that
relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.)
connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet.
This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a
local high-speed computer bus.

Contents

• 1 Overview

• 2 Comparison of grids and conventional supercomputers

• 3 Design considerations and variations

• 4 Market segmentation of the grid computing market

o 4.1 The provider side

o 4.2 The user side

• 5 CPU scavenging

• 6 History

• 7 Fastest virtual supercomputers

• 8 Current projects and applications

• 9 Definitions

• 10 See also

o 10.1 Concepts and related technology

o 10.2 Alliances and organizations

o 10.3 Production grids


o 10.4 International projects

o 10.5 National projects

o 10.6 Standards and APIs

o 10.7 Software implementations and middleware

o 10.8 Visualization frameworks

• 11 References

o 11.1 Notes

o 11.2 Bibliography

• 12 External links

Overview

Grid computing combines computers from multiple administrative domains to reach a common goal,[1] to solve a
single task, and may then disappear just as quickly.

One of the main strategies of grid computing is to use middleware to divide and apportion pieces of a program
among several computers, sometimes up to many thousands. Grid computing involves computation in a
distributed fashion, which may also involve the aggregation of large-scale cluster computing-based systems.
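
As an illustration only, the following sketch mimics this divide-and-apportion pattern on a single machine, with a pool of local worker processes standing in for grid nodes; the task name count_primes and the work-unit ranges are invented for the example, and real middleware such as BOINC or the Globus Toolkit would instead ship work units to remote machines.

```python
# Minimal sketch of "divide and apportion": one large job is split into
# independent work units and handed to workers. Local processes stand in
# for grid nodes; count_primes and the ranges are illustrative only.
from multiprocessing import Pool

def count_primes(bounds):
    """One work unit: count the primes in [lo, hi)."""
    lo, hi = bounds

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Divide the problem into independent work units ...
    ranges = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    # ... and let the "middleware" (here just a process pool) apportion them.
    with Pool(processes=4) as pool:
        partial_counts = pool.map(count_primes, ranges)
    print("primes below 100000:", sum(partial_counts))
```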

The size of a grid may vary from small—confined to a network of computer workstations within a corporation,
for example—to large, public collaborations across many companies and networks. "The notion of a confined
grid may also be known as an intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to
an inter-nodes cooperation".[2]

Grids are a form of distributed computing whereby a “super virtual computer” is composed of many
networked, loosely coupled computers acting together to perform very large tasks. This technology has been
applied to computationally intensive scientific, mathematical, and academic problems through volunteer
computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic
forecasting, seismic analysis, and back-office data processing in support of e-commerce and Web services.

Comparison of grids and conventional supercomputers


“Distributed” or “grid” computing in general is a special type of parallel computing that relies on complete
computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to
a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in
contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-
speed computer bus.[citation needed]
The primary advantage of distributed computing is that each node can be purchased as commodity hardware,
which, when combined, can produce a computing resource similar to a multiprocessor supercomputer, but at a
lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower
efficiency of designing and constructing a small number of custom supercomputers. The primary performance
disadvantage is that the various processors and local storage areas do not have high-speed connections. This
arrangement is thus well-suited to applications in which multiple parallel computations can take place
independently, without the need to communicate intermediate results between processors.[citation needed] The high-
end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity
between nodes relative to the capacity of the public Internet.[citation needed]

There are also some differences in programming and deployment. It can be costly and difficult to write
programs that can run in the environment of a supercomputer, which may have a custom operating system, or
require the program to address concurrency issues. If a problem can be adequately parallelized, a “thin” layer
of “grid” infrastructure can allow conventional, standalone programs, given a different part of the same problem,
to run on multiple machines. This makes it possible to write and debug on a single conventional machine, and
eliminates complications due to multiple instances of the same program running in the same
shared memory and storage space at the same time.[citation needed]
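
As a hedged sketch of such a “thin” layer, the same standalone program can simply be launched several times, each instance given a different slice of the problem on its command line; the script name render_tile.py and its arguments below are hypothetical placeholders, not part of any real grid package.

```python
# Launch several instances of an ordinary standalone program, each handed a
# different slice of the problem. render_tile.py is a hypothetical script;
# real grid middleware would start these instances on different machines
# and collect the results.
import subprocess
import sys

slices = [(0, 250), (250, 500), (500, 750), (750, 1000)]

procs = [
    subprocess.Popen([sys.executable, "render_tile.py", str(lo), str(hi)])
    for lo, hi in slices
]
for p in procs:
    p.wait()  # a real grid layer would also gather each node's output
```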

Design considerations and variations


One feature of distributed grids is that they can be formed from computing resources belonging to multiple
individuals or organizations (known as multiple administrative domains). This can facilitate commercial
transactions, as in utility computing, or make it easier to assemble volunteer computing networks.[citation needed]

One disadvantage of this feature is that the computers which are actually performing the calculations might not
be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or
malicious participants from producing false, misleading, or erroneous results, and from using the system as an
attack vector. This often involves assigning work randomly to different nodes (presumably with different
owners) and checking that at least two different nodes report the same answer for a given work unit.
Discrepancies would identify malfunctioning and malicious nodes.[citation needed]
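
The sketch below illustrates this redundancy-and-voting idea under simplified assumptions: node behaviour is simulated in-process, the corruption rate is invented, and a quorum of two matching answers accepts a result.

```python
# Redundant computation with a simple quorum check: each work unit goes to
# several randomly chosen nodes, and a result is accepted only when at least
# two independent nodes agree. Node behaviour is simulated here.
import random
from collections import Counter

def simulate_node(node_id, work_unit):
    correct = work_unit * work_unit          # the "true" answer for this unit
    # Hypothetical failure model: one result in twenty comes back corrupted.
    return correct if random.random() > 0.05 else correct + 1

def validated_result(work_unit, nodes, replication=3):
    chosen = random.sample(nodes, replication)        # assign work randomly
    results = [simulate_node(n, work_unit) for n in chosen]
    value, votes = Counter(results).most_common(1)[0]
    if votes >= 2:                                    # two nodes agree
        return value
    raise RuntimeError("no quorum; reissue work unit %r" % (work_unit,))

nodes = list(range(100))
print(validated_result(7, nodes))
```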

Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of
the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for
computation but not network communications for unpredictable periods. These variations can be
accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and
reassigning work units when a given node fails to report its results in the expected time.[citation needed]
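
A minimal sketch of deadline-based reassignment, assuming a simple in-memory table of outstanding work units; the deadline value and the function and node names are illustrative rather than taken from any particular middleware.

```python
# Reassign a work unit when its result has not arrived within the deadline.
# Dispatch and reassignment are simulated with timestamps on one machine.
import time

outstanding = {}   # work_unit -> (node, time_dispatched)

def dispatch(work_unit, node):
    outstanding[work_unit] = (node, time.monotonic())
    print("sent unit", work_unit, "to", node)

def report(work_unit, result):
    outstanding.pop(work_unit, None)
    print("unit", work_unit, "->", result)

def reassign_overdue(idle_nodes, deadline):
    now = time.monotonic()
    for work_unit, (node, sent) in list(outstanding.items()):
        if now - sent > deadline and idle_nodes:
            dispatch(work_unit, idle_nodes.pop())   # hand it to another node

# Tiny demonstration with an artificially short deadline.
dispatch(1, "node-a")
time.sleep(0.2)
reassign_overdue(idle_nodes=["node-b"], deadline=0.1)
```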

The impacts of trust and availability on performance and development difficulty can influence the choice of
whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization,
or to an open external network of volunteers or contractors.[citation needed] In many cases, the participating nodes
must trust the central system not to abuse the access that is being granted, by interfering with the operation of
other programs, mangling stored information, transmitting private data, or creating new security holes. Other
systems employ measures to reduce the amount of trust “client” nodes must place in the central system such
as placing applications in virtual machines.[citation needed]

Public systems or those crossing administrative
domains (including different departments in the same organization) often result in the need to run
on heterogeneous systems, using different operating systems and hardware architectures. With many
languages, there is a trade off between investment in software development and the number of platforms that
can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need
to make this trade off, though potentially at the expense of high performance on any given node (due to run-
time interpretation or lack of optimization for the particular platform).[citation needed]

Various middleware projects have created generic infrastructure to allow diverse scientific and commercial
projects to harness a particular associated grid or for the purpose of setting up new grids. BOINC is a common
one for various academic projects seeking public volunteers;[citation needed] more are listed at the end of the article.

In fact, the middleware can be seen as a layer between the hardware and the software. On top of the
middleware, a number of technical areas have to be considered, and these may or may not be middleware
independent. Example areas include SLA management, trust and security, virtual organization management,
license management, portals, and data management. These technical areas may be taken care of in a
commercial solution, though the cutting edge of each area is often found within specific research projects
examining the field.[citation needed]

Market segmentation of the grid computing market


According to IT-Tude.com, segmentation of the grid computing market needs to consider two perspectives: the
provider side and the user side.

The provider side


The overall grid market comprises several specific markets. These are the grid middleware market, the market
for grid-enabled applications, the utility computing market, and the software-as-a-service (SaaS) market.

Grid middleware is a specific software product that enables the sharing of heterogeneous resources and virtual
organizations. It is installed and integrated into the existing infrastructure of the involved company or
companies, and provides a special layer between the heterogeneous infrastructure and the specific user
applications. Major grid middlewares are Globus Toolkit, gLite, and UNICORE.

Utility computing refers to the provision of grid computing and applications as a service, either as an open
grid utility or as a hosting solution for one organization or a VO. Major players in the utility computing market
are Sun Microsystems, IBM, and HP.
Grid-enabled applications are specific software applications that can utilize grid infrastructure. This is made
possible by the use of grid middleware, as pointed out above.

Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one or more
providers” (Gartner 2007). Additionally, SaaS applications are based on a single set of common code and data
definitions. They are consumed in a one-to-many model, and SaaS uses a pay-as-you-go (PAYG) model or a
usage-based subscription model. SaaS providers do not necessarily own the computing resources required to
run their SaaS, and may therefore draw upon the utility computing market, which provides computing
resources for SaaS providers.

The user side


For companies on the demand or user side of the grid computing market, the different segments have
significant implications for their IT deployment strategy. Both the IT deployment strategy and the type of IT
investments made are relevant aspects for potential grid users and play an important role in grid adoption.

CPU scavenging
CPU-scavenging, cycle-scavenging, cycle stealing, or shared computing creates a “grid” from the unused
resources in a network of participants (whether worldwide or internal to an organization). Typically this
technique uses desktop computer instruction cycles that would otherwise be wasted at night, during lunch, or
even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices.
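
As a rough sketch only (not how any particular scavenger is implemented), a scavenging loop might run a work unit only while the host looks idle, here judged by the Unix 1-minute load average; real clients such as BOINC also watch user input, battery state, and the owner's scheduling preferences.

```python
# Daemon-style cycle-scavenging loop: compute only while the machine looks
# idle, and back off when the owner needs it. The threshold and the
# placeholder work unit are illustrative.
import os
import time

IDLE_LOAD = 0.5     # 1-minute load average below which the host is "idle"

def do_work_unit():
    return sum(i * i for i in range(1_000_000))   # placeholder computation

while True:                                        # runs until the process is stopped
    load_1min, _, _ = os.getloadavg()              # Unix only; not available on Windows
    if load_1min < IDLE_LOAD:
        do_work_unit()
    else:
        time.sleep(30)                             # owner is busy; stay out of the way
```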

Many volunteer computing projects, such as BOINC, use the CPU-scavenging model.

In practice, participating computers also donate some supporting amount of disk storage space, RAM, and
network bandwidth, in addition to raw CPU power. The heat produced by many computers running in one
room can even be used to heat the premises.[3] Since nodes are likely to go "offline" from time to time, as
their owners use their resources for their primary purpose, this model must be designed to handle such
contingencies.

History

The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to
access as an electric power grid, in Ian Foster's and Carl Kesselman's seminal work, "The Grid: Blueprint for a
New Computing Infrastructure" (2004).

CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in
1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive
research problems.[citation needed]
The ideas of the grid (including those from distributed computing, object-oriented programming, and Web
services) were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the
"fathers of the grid".[4] They led the effort to create the Globus Toolkitincorporating not just computation
management but also storage management, security provisioning, data movement, monitoring, and a toolkit for
developing additional services based on the same infrastructure, including agreement negotiation, notification
mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto
standard for building grid solutions, a number of other tools have been built that answer some subset of
services needed to create an enterprise or global grid.

In 2007 the term cloud computing came into popularity, which is conceptually similar to the canonical Foster
definition of grid computing (in terms of computing resources being consumed as electricity is from the power
grid). Indeed, grid computing is often (but not always) associated with the delivery of cloud computing systems
as exemplified by the AppLogic system from 3tera.[citation needed]

Fastest virtual supercomputers

BOINC – 5.128 PFLOPS as of April 24, 2010.[5]

Folding@Home – 5 PFLOPS as of March 17, 2009.[6]

As of April 2010, MilkyWay@Home computes at over 1.6 PFLOPS, with a large amount of this work coming from GPUs.[7]

As of April 2010, SETI@Home computes at an average of more than 730 TFLOPS.[8]

As of April 2010, Einstein@Home is crunching more than 210 TFLOPS.[9]

As of April 2010, GIMPS is sustaining 44 TFLOPS.[10]

Current projects and applications


Main article: List of distributed computing projects

Grid computing offers a way to solve Grand Challenge problems such as protein folding,
financial modeling, earthquake simulation, and climate/weather modeling. Grids offer a way of using
information technology resources optimally inside an organization. They also provide a means for offering
information technology as a utility for commercial and noncommercial clients, with those clients paying only for
what they use, as with electricity or water.

Grid computing is being applied by the National Science Foundation's National Technology Grid, NASA's
Information Power Grid, Pratt & Whitney, Bristol-Myers Squibb Co., and American Express.[citation needed]

One of the most famous cycle-scavenging networks is SETI@home, which was using more than 3 million
computers to achieve 23.37 sustained teraflops (979 lifetime teraflops) as of September 2001.[11]
As of August 2009 Folding@home achieves more than 4 petaflops on over 350,000 machines.

The European Union has been a major proponent of grid computing. Many projects have been funded through
the framework programme of the European Commission. Many of the projects are highlighted below, but two
deserve special mention: BEinGRID and Enabling Grids for E-sciencE.[citation needed]

BEinGRID (Business Experiments in Grid) is a research project partly funded by the European
Commission[citation needed] as an Integrated Project under the Sixth Framework Programme (FP6) sponsorship
program. Started on June 1, 2006, the project ran for 42 months, until November 2009, and was
coordinated by Atos Origin. According to the project fact sheet, its mission was “to establish effective routes to
foster the adoption of grid computing across the EU and to stimulate research into innovative business models
using Grid technologies”. To extract best practice and common themes from the experimental implementations,
two groups of consultants analyzed a series of pilots, one technical, one business. The results of these
cross-analyses are provided by the website IT-Tude.com. The project is significant not only for its long duration,
but also for its budget, which, at 24.8 million euros, is the largest of any FP6 integrated project. Of this, 15.7
million is provided by the European Commission and the remainder by its 98 contributing partner companies.

The Enabling Grids for E-sciencE project, which is based in the European Union and includes sites in Asia and
the United States, is a follow-up project to the European DataGrid (EDG) and is arguably the largest computing
grid on the planet. This, along with the LHC Computing Grid[12] (LCG), has been developed to support the
experiments using the CERN Large Hadron Collider. The LCG project is driven by CERN's need to handle
huge amounts of data, where storage rates of several gigabytes per second (10 petabytes per year) are
required. A list of active sites participating within LCG can be found online[13] as can real time monitoring of the
EGEE infrastructure.[14] The relevant software and documentation is also publicly accessible.[15] There is
speculation that dedicated fiber optic links, such as those installed by CERN to address the LCG's data-
intensive needs, may one day be available to home users thereby providing internet services at speeds up to
10,000 times faster than a traditional broadband connection.[16]

Another well-known project is distributed.net, which was started in 1997 and has run a number of successful
projects in its history.

The NASA Advanced Supercomputing facility (NAS) has run genetic algorithms using the Condor cycle
scavenger running on about 350 Sun and SGI workstations.

Until April 27, 2007, United Devices operated the United Devices Cancer Research Project based on its Grid
MP product, which cycle-scavenges on volunteer PCs connected to the Internet. As of June 2005, the Grid MP
ran on about 3.1 million machines.[17]

Another well-known project is the World Community Grid. The World Community Grid's mission is to create the
largest public computing grid that benefits humanity. This work is built on the belief that technological innovation
combined with visionary scientific research and large-scale volunteerism can change our world for the better.
IBM Corporation has donated the hardware, software, technical services, and expertise to build the
infrastructure for World Community Grid and provides free hosting, maintenance, and support.[citation needed]

Definitions

Today there are many definitions of grid computing:

 In his article “What is the Grid? A Three Point Checklist”,[1] Ian Foster lists these primary attributes:

 Computing resources are not administered centrally.

 Open standards are used.

 Nontrivial quality of service is achieved.

 Plaszczak/Wellner[18] define grid technology as "the technology that enables resource virtualization, on-
demand provisioning, and service (resource) sharing between organizations."

 IBM defines grid computing as “the ability, using a set of open standards and protocols, to gain access
to applications and data, processing power, storage capacity and a vast array of other computing
resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing,
selection, and aggregation of resources distributed across ‘multiple’ administrative domains based on their
(resources) availability, capacity, performance, cost and users' quality-of-service requirements”.[19]

An earlier example of the notion of computing as a utility was given in 1965 by MIT's Fernando Corbató.
Corbató and the other designers of the Multics operating system envisioned a computer facility operating
“like a power company or water company” (http://www.multicians.org/fjcc3.html).

 Buyya/Venugopal[20] define grid as "a type of parallel and distributed system that enables the sharing,
selection, and aggregation of geographically distributed autonomous resources dynamically at runtime
depending on their availability, capability, performance, cost, and users' quality-of-service requirements".

CERN, one of the largest users of grid technology, talks of The Grid: “a service for sharing computer
power and data storage capacity over the Internet.”[21]

Grids can be categorized with a three stage model of departmental grids, enterprise grids and global grids.
These correspond to a firm initially utilising resources within a single group, i.e. an engineering department
connecting desktop machines, clusters and equipment. This progresses to enterprise grids where nontechnical
staff's computing resources can be used for cycle-stealing and storage. A global grid is a connection of
enterprise and departmental grids that can be used in a commercial or collaborative manner.

High-performance computing

The Center for Nanoscale Materials at the Advanced Photon Source

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced
computation problems. Today, computer systems approaching the teraflops region are counted as HPC
computers.

Contents

• 1 Overview

• 2 Top 500

• 3 See also

• 4 External links

Overview

The term is most commonly associated with computing used for scientific research or computational science. A
related term, high-performance technical computing (HPTC), generally refers to the engineering applications of
cluster-based computing (such as computational fluid dynamics and the building and testing of
virtual prototypes). Recently, HPC has come to be applied to business uses of cluster-based supercomputers,
such as data warehouses, line-of-business (LOB) applications, and transaction processing.

High-performance computing (HPC) is a term that arose after the term "supercomputing." HPC is sometimes
used as a synonym for supercomputing; but, in other contexts, "supercomputer" is used to refer to a more
powerful subset of "high-performance computers," and the term "supercomputing" becomes a subset of "high-
performance computing." The potential for confusion over the use of these terms is apparent.
Top 500
A list of the most powerful high-performance computers can be found on the TOP500 list. The TOP500 list
ranks the world's 500 fastest high-performance computers, as measured by the High Performance Linpack
(HPL) benchmark. Not all computers are listed, either because they are ineligible (e.g., they cannot run the
HPL benchmark) or because their owners have not submitted an HPL score (e.g., because they do not wish
the size of their system to become public information for defense reasons). In addition, the use of the single
Linpack benchmark is controversial, in that no single measure can test all aspects of a high-performance
computer. To help overcome the limitations of the Linpack test, the U.S. government commissioned one of its
originators, Dr. Jack Dongarra of the University of Tennessee, to create a suite of benchmark tests that
includes Linpack and others, called the HPC Challenge benchmark suite. This evolving suite has been used
in some HPC procurements, but, because it is not reducible to a single number, it has been unable to
overcome the publicity advantage of the less useful TOP500 Linpack test. The TOP500 list is updated twice a
year, once in June at the ISC European Supercomputing Conference and again at a US Supercomputing
Conference in November.

Many ideas for the new wave of grid computing were originally borrowed from HPC.

Middleware

This article is about integration software. For video game engine software, see Game engine#Middleware.


Middleware is computer software that connects software components or people and their applications.
The software consists of a set of services that allows multiple processes running on one or more machines to
interact. This technology evolved to provide for interoperability in support of the move to coherent distributed
architectures, which are most often used to support and simplify complex distributed applications. It
includes web servers, application servers, and similar tools that support application development and delivery.
Middleware is especially integral to modern information technology based on XML, SOAP, Web services,
and service-oriented architecture.

Middleware sits "in the middle" between application software that may be working on different operating
systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched
across multiple systems or applications. Examples include EAI software, telecommunications
software, transaction monitors, and messaging-and-queueing software.
The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core
kernel functionality can only be provided by the operating system itself, some functionality previously provided
by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack
for telecommunications, nowadays included in virtually every operating system.

In simulation technology, middleware is generally used in the context of the high level architecture (HLA) that
applies to many distributed simulations. It is a layer of software that lies between the application code and
the run-time infrastructure. Middleware generally consists of a library of functions, and enables a number of
applications—simulations or federates in HLA terminology—to call these functions from the common library
rather than re-create them for each application.

Contents

• 1 Definitions

• 2 Origins

• 3 Organizations

• 4 Use of middleware

• 5 Types of middleware

o 5.1 Message-oriented Middleware

 5.1.1 Enterprise messaging system

 5.1.1.1 Message broker

 5.1.2 Enterprise Service Bus

o 5.2 Other

o 5.3 Hurwitz classification system

 5.3.1 Remote Procedure Call

 5.3.2 Message Oriented Middleware

 5.3.3 Object Request Broker

 5.3.4 SQL-oriented Data Access

 5.3.5 Embedded middleware

• 6 See also

• 7 References

• 8 External links

Definitions
Middleware is software that provides a link between separate software applications. It is sometimes called
plumbing because it connects two applications and passes data between them. Middleware allows data
contained in one database to be accessed through another. This definition would fit enterprise application
integration and data integration software.

ObjectWeb defines middleware as: "The software layer that lies between the operating system and applications
on each side of a distributed computing system in a network."[1] Middleware is computer software that connects
software components or applications. The software consists of a set of services that allows multiple processes
running on one or more machines to interact. This technology evolved to provide for interoperability in support
of the move to coherent distributed architectures, which are most often used to support and simplify complex,
distributed applications. It includes web servers, application servers, and similar tools that support application
development and delivery. Middleware is especially integral to modern information technology based on XML,
SOAP, Web services, and service-oriented architecture.


Origins

Middleware is a relatively new addition to the computing landscape. It gained popularity in the 1980s as a
solution to the problem of how to link newer applications to older legacy systems, although the term had been
in use since 1968.[2] It also facilitated distributed processing, the connection of multiple applications to create a
larger application, usually over a network.

Organizations

IBM, Red Hat, and Oracle Corporation are major vendors providing middleware software. Vendors such
as Axway, SAP, TIBCO, Informatica, Pervasive and webMethods were specifically founded to provide Web-
oriented middleware tools. Groups such as the Apache Software Foundation, OpenSAF and the ObjectWeb
Consortium encourage the development of open source middleware. The Microsoft .NET Framework
is essentially middleware, with typical middleware functions distributed between the various products and with
most inter-computer interaction carried out through industry standards, open APIs or RAND software licences.

Use of middleware
Middleware services provide a more functional set of application programming interfaces to allow an application
to:
 Locate transparently across the network, thus providing interaction with another service or application

Filter data to make them usable or publishable, for example via an anonymization process for privacy
protection

 Be independent from network services

 Be reliable and always available

 Add complementary attributes like semantics

when compared to the operating system and network services.

Middleware offers some unique technological advantages for business and industry. For example, traditional
database systems are usually deployed in closed environments where users access the system only via
a restricted network or intranet (e.g., an enterprise’s internal network). With the phenomenal growth of
the World Wide Web, users can access virtually any database for which they have proper access rights from
anywhere in the world. Middleware addresses the problem of varying levels of interoperability among different
database structures. Middleware facilitates transparent access to legacy database management
systems (DBMSs) or applications via a web server without regard to database-specific characteristics [3].

Businesses frequently use middleware applications to link information from departmental databases, such as
payroll, sales, and accounting, or databases housed in multiple geographic locations [4]. In the highly
competitive healthcare community, laboratories make extensive use of middleware applications for data
mining, laboratory information system (LIS) backup, and to combine systems during hospital mergers.
Middleware helps bridge the gap between separate LISs in a newly formed healthcare network following a
hospital buyout [5].

Wireless networking developers can use middleware to meet the challenges associated with wireless sensor
network (WSN) technologies. Implementing a middleware application allows WSN developers to
integrate operating systems and hardware with the wide variety of applications that are currently
available.[6]

Middleware can help software developers avoid having to write application programming interfaces (API) for
every control program, by serving as an independent programming interface for their applications. For Future
Internet network operation through traffic monitoring in multi-domain scenarios, mediator tools
(middleware) are a powerful aid, since they allow operators, researchers and service providers to
supervise quality of service and analyse eventual failures in telecommunication services.[7]

Finally, e-commerce uses middleware to assist in handling rapid and secure transactions over many different
types of computer environments[8]. In short, middleware has become a critical element across a broad range of
industries, thanks to its ability to bring together resources across dissimilar networks or computing platforms.

In 2004, members of the European Broadcasting Union (EBU) carried out a study of middleware with respect to
system integration in broadcast environments. This involved system design engineering experts from 10 major
European broadcasters working over a 12-month period to understand the effect of predominantly software-
based products on media production and broadcasting system design techniques. The resulting reports, Tech
3300 and Tech 3300s, were published and are freely available from the EBU web site.[9][10]

Types of middleware
Message-oriented Middleware
Message-oriented middleware is middleware where transactions or event notifications are delivered between
disparate systems or components by way of messages, often via an enterprise messaging system.
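
The toy sketch below shows only the decoupling idea within one process, with a Python queue standing in for the messaging system: the producer posts messages and moves on while a consumer handles them asynchronously. Real message-oriented middleware does this across machines, with persistence and delivery guarantees.

```python
# Asynchronous message delivery in miniature: a queue stands in for the
# enterprise messaging system, and the producer never waits for handling.
import queue
import threading

broker = queue.Queue()          # stand-in for a message queue/broker

def consumer():
    while True:
        message = broker.get()
        if message is None:     # shutdown signal
            break
        print("handled:", message)

worker = threading.Thread(target=consumer)
worker.start()

# The producer posts event messages and continues with other work.
for event in ("order created", "order paid", "order shipped"):
    broker.put(event)

broker.put(None)                # tell the consumer to stop
worker.join()
```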

Enterprise messaging system

An enterprise messaging system is a type of middleware that facilitates message passing between disparate
systems or components in standard formats, often using XML, SOAP or web services.

Message broker
Part of an enterprise messaging system, message broker software may queue, duplicate, translate and deliver
messages to disparate systems or components in a messaging system.

Enterprise Service Bus

Enterprise Service Bus (ESB) is defined by the Burton Group [11] as "some type of integration middleware
product that supports both MOM and Web services".

Other

Other sources[citation needed] include these additional classifications:

Transaction processing monitors — provide tools and an environment to develop and deploy distributed
applications.[citation needed]

 Application servers — software installed on a computer to facilitate the serving (running) of other
applications.[citation needed]

Hurwitz classification system


Judith Hurwitz created a classification system for middleware in her article "Sorting Out Middleware".[12]

Remote Procedure Call

With Remote Procedure Call middleware, a client makes calls to procedures running on remote systems. Calls
can be asynchronous or synchronous.
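
As a small illustration (not a statement about any particular RPC product), Python's standard xmlrpc modules can play both roles: the client calls add() as if it were local, and the library marshals the call to a procedure running in a server process. The host, port, and function name are placeholders.

```python
# Remote Procedure Call in miniature with the standard xmlrpc modules:
# a procedure registered on a server is invoked through a client proxy.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.add(2, 3))   # synchronous remote call; prints 5
```
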
Message Oriented Middleware

With Message Oriented Middleware, messages sent to the client are collected and stored until they are acted
upon, while the client continues with other processing.

Object Request Broker

With Object Request Broker middleware, it is possible for applications to send objects and request services in
an object-oriented system.

SQL-oriented Data Access

SQL-oriented Data Access is middleware between applications and database servers.
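
A brief sketch of the idea, with the standard-library sqlite3 module standing in for an ODBC/JDBC-style driver: the application issues SQL through a generic DB-API interface rather than a vendor-specific protocol. The table and rows are made up for illustration.

```python
# SQL-oriented data access through a generic DB-API interface; sqlite3
# stands in for middleware that would normally sit in front of a remote
# database server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(19.99,), (5.00,)])

for order_id, amount in conn.execute("SELECT id, amount FROM orders"):
    print(order_id, amount)

conn.close()
```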

Embedded middleware

Embedded middleware provides communication services and integration interface software/firmware that
operates between embedded applications and the real-time operating system.
