C++ Network Programming

Systematic Reuse with ACE and Frameworks

Dr. Douglas C. Schmidt

Stephen D. Huston

Contents

About this Book

1 Object-Oriented Frameworks for Network Programming
  1.1 An Overview of Object-Oriented Frameworks
  1.2 Applying Frameworks to Network Programming
  1.3 An Overview of the ACE Frameworks
  1.4 Comparing Frameworks with Other Reuse Techniques
  1.5 Example: A Networked Logging Service
  1.6 Summary

2 Service and Configuration Design Dimensions
  2.1 Service Design Dimensions
    2.1.1 Short- vs. Long-Duration Services
    2.1.2 Internal vs. External Services
    2.1.3 Stateful vs. Stateless Services
    2.1.4 Layered/Modular vs. Monolithic Services
    2.1.5 Single- vs. Multi-Service Servers
    2.1.6 One-shot vs. Standing Servers
  2.2 Configuration Design Dimensions
    2.2.1 Static vs. Dynamic Naming
    2.2.2 Static vs. Dynamic Linking
    2.2.3 Static vs. Dynamic Configuration
  2.3 Summary

3 The ACE Reactor Framework
  3.1 Overview
  3.2 The ACE_Time_Value Class
  3.3 The ACE_Event_Handler Class
  3.4 The ACE Timer Queue Classes
  3.5 The ACE_Reactor Class
  3.6 The ACE_Select_Reactor Class
  3.7 The ACE_TP_Reactor Class
  3.8 The ACE_WFMO_Reactor Class
  3.9 Summary

4 The ACE Service Configurator Framework
  4.1 Overview
  4.2 The ACE_Service_Object Class
  4.3 The ACE_Service_Repository and ACE_Service_Repository_Iterator Classes
  4.4 The ACE_Service_Config Class
  4.5 Summary

5 The ACE Task Framework
  5.1 Overview
  5.2 The ACE_Message_Queue Class
  5.3 The ACE_Task Class
  5.4 Summary


List of Figures

1.1 Defining Characteristics of a Framework
1.2 Levels of Abstraction for Network Programming
1.3 The Layered Architecture of ACE
1.4 The Frameworks in ACE
1.5 Class Library vs. Framework Architectures
1.6 Applying Class Libraries to Develop and Use ACE Frameworks
1.7 Component Architecture
1.8 Processes and Daemons in the Networked Logging Service

2.1 Internal vs. External Services
2.2 Layered/Modular vs. Monolithic Services
2.3 Single-service vs. Multi-service Servers
2.4 One-shot vs. Standing Servers
2.5 Static Linking vs. Dynamic Linking

3.1 The ACE Reactor Framework Classes
3.2 The ACE_Time_Value Class
3.3 The ACE_Event_Handler Class
3.4 The ACE Timer Queue Classes
3.5 The ACE_Reactor Class
3.6 The ACE_Reactor Class Hierarchy
3.7 Architecture of Reactor-based Logging Server
3.8 The ACE_Select_Reactor Framework Internals
3.9 The ACE_Select_Reactor Notification Mechanism

4.1 The ACE Service Configurator Framework Classes
4.2 The ACE_Service_Object Class
4.3 The ACE_Service_Repository Class
4.4 The ACE_Service_Repository_Iterator Class
4.5 The ACE_Service_Config Class
4.6 BNF for the ACE_Service_Config Scripting Language
4.7 A State Diagram for Configuring the Server Logging Daemon
4.8 A State Diagram for Reconfiguring the Server Logging Daemon

5.1 The ACE Task Framework Classes
5.2 The ACE_Message_Queue Class
5.3 The Structure of an ACE_Message_Queue
5.4 Multi-threaded Client Logging Daemon
5.5 The ACE_Task Class
5.6 Task Activate Behavior
5.7 Passing Messages Between ACE_Task Objects
5.8 Architecture of the Thread Pool Logging Server


About this Book

Writing high-quality networked applications is hard: it's expensive, complicated, and error-prone. The patterns, C++ features, and object-oriented design principles presented in [SH02] help to minimize complexity and mistakes in concurrent networked applications by refactoring [FBB+ 99] common structure and functionality into reusable wrapper facade class libraries. If these class libraries are rewritten for each new project, however, many benefits of reuse will be lost.
Historically, many network software projects began by designing and
implementing demultiplexing and dispatching infrastructure mechanisms
that handle timed events and I/O on multiple socket handles. Next, they
added service instantiation and handling mechanisms atop the demultiplexing and dispatching layer, along with message buffering and queueing
mechanisms. Finally, application-specific behavior was implemented using
this ad hoc host infrastructure middleware.
The development process outlined above has happened many times
in many companies, by many projects in parallel and, worse, by many
projects serially. Regrettably, this continuous rediscovery and reinvention
of core concepts and software has kept costs unnecessarily high throughout the development lifecycle. This problem is exacerbated by the inherent diversity of today's hardware, operating systems, compilers, and communication platforms, which keep shifting the foundations of networked application software.
Object-oriented frameworks [FJS99a, FJS99b] are one of the most flexible and powerful techniques for addressing the problems outlined above. A framework is a reusable, semi-complete application that can be specialized to produce custom applications [JF88]. Frameworks help to reduce the cost and improve the quality of networked applications by reifying proven software designs and patterns into concrete source code. By emphasizing the integration and collaboration of application-specific and application-independent classes, moreover, frameworks enable larger-scale reuse of software than is possible by reusing individual classes or stand-alone functions.
In 1992, Doug Schmidt started the open-source ACE project at the University of California, Irvine and Washington University, St. Louis. Over the
next decade, the ACE toolkit yielded some of the most powerful and widely
used object-oriented frameworks written in C++. By applying reusable software patterns and a lightweight OS portability layer, the ACE toolkit provides frameworks for synchronous and asynchronous event handling, service initialization, concurrency, connection management, and hierarchical
service integration.
ACE has changed the way complex networked applications and middleware are being designed and implemented on the world's most popular operating systems, such as AIX, HP/UX, Linux, MacOS X, Solaris, and Windows, as well as real-time embedded operating systems, such as VxWorks, LynxOS, ChorusOS, QNX, pSOS, and WinCE. ACE is being used by thousands of development teams, ranging from large Fortune 500 companies to small startups. Its open-source development model and self-supporting culture is similar in spirit and enthusiasm to Linus Torvalds' popular Linux operating system.

Intended Audience
This book is intended for hands-on C++ developers or advanced students interested in understanding how to design and apply object-oriented
frameworks to program concurrent networked applications. We show you
how to enhance your design skills and take advantage of C++, frameworks,
and patterns to produce flexible and efficient object-oriented networked
applications quickly and easily. The code examples we use to reinforce the
design discussions illustrate how to use the frameworks in ACE. These examples help you begin to apply key object-oriented design principles and
patterns to your concurrent networked applications right away.


Structure and Content


This book describes a family of object-oriented network programming frameworks provided by the ACE toolkit. These frameworks help reduce the
cost and improve the quality of networked applications by reifying proven
software designs and implementations. ACE's framework-based approach
expands reuse technology far beyond what can be achieved by reusing individual classes or even class libraries. We describe the design of these
frameworks, show how they can be applied to real networked applications,
and summarize the design rules and lessons learned that ensure effective
use of these frameworks.
This book's primary application is a networked logging service that transfers log records from client applications to a logging server, which stores the records in a file or database. We use this service as a running example throughout the book to

- Show concretely how ACE frameworks can help achieve efficient, predictable, and scalable networked applications and
- Demonstrate key design and implementation considerations and solutions that will arise when you develop your own concurrent object-oriented networked applications.

Chapter 1 introduces the concept of an object-oriented framework and shows how frameworks differ from other reuse techniques, such as class libraries, components, and patterns. We also outline the frameworks in the ACE toolkit that are covered in subsequent chapters. The ACE frameworks are based on a pattern language [BMR+ 96, SSRB00] that has been applied to thousands of production and research networked applications and middleware systems world-wide.
Chapter 2 presents a domain analysis of service and configuration design dimensions that address key networked application properties, such
as duration and structure, how networked services are identified, and the
time at which they are bound together to form complete applications.
Chapter 3 describes the design and use of the ACE Reactor framework,
which implements the Reactor pattern [SSRB00] to allow event-driven applications to demultiplex and dispatch service requests that are delivered
to an application from one or more clients.
Chapter 4 describes the design and use of the ACE Service Configurator framework, which implements the Component Configurator pattern [SSRB00] to allow an application to link/unlink its component implementations at run-time without having to modify, recompile, or relink the application statically.
Chapter 5 describes the design and effective use of the ACE Task framework, which can be used to implement key concurrency patterns, such as
Active Object and Half-Sync/Half-Async [SSRB00].
Chapter ?? describes the design and effective use of the ACE AcceptorConnector framework, which implements the Acceptor-Connector pattern
[SSRB00] to decouple the connection and initialization of cooperating peer
services in a networked system from the processing they perform once
connected and initialized.
Chapter ?? describes the design and use of the Proactor framework,
which implements the Proactor pattern [SSRB00] to allow event-driven applications to efficiently demultiplex and dispatch service requests triggered
by the completion of asynchronous operations.
Chapter ?? describes the design and use of the ACE Streams framework, which implements the Pipes and Filters pattern [BMR+ 96] to provide
a structure for systems that process a stream of data.
Chapter ?? describes the design and use of the ACE Logging Service
framework, which uses the ACE Reactor, Service Configurator, Task, and
Acceptor-Connector frameworks to implement and configure the various
processes that constitute the networked logging service.
Appendix ?? describes the design rules to follow when using the ACE
frameworks. Appendix ?? summarizes the lessons we've learned during
the past decade developing the reusable object-oriented networked application software in the ACE toolkit and deploying ACE successfully in a wide
range of commercial applications across many domains. Appendix ?? provides a synopsis of all the ACE classes and frameworks presented in the
two volumes of C++ Network Programming.
The book concludes with a glossary of technical terms, an extensive list
of references for further research, and a general subject index.

Related Material
This book focuses on abstracting commonly recurring object structures,
behaviors, and usage patterns into reusable frameworks to make it easier
to develop networked applications more efficiently and robustly. The first


volume in this series, C++ Network Programming: Mastering Complexity with ACE and Patterns [SH02], is primarily concerned with using object-oriented patterns and language features to abstract away the details of programming low-level APIs. We recommend that you read Volume 1 before reading this book.
This book is based on ACE version 5.2, released in October, 2001. ACE
5.2 and all the sample applications described in our books are open-source
software that can be downloaded at http://ace.ece.uci.edu and http://www.riverace.com. These sites also contain a wealth of other material
on ACE, such as tutorials, technical papers, and an overview of other ACE
wrapper facades and frameworks that aren't covered in this book. We encourage you to obtain a copy of ACE so you can follow along, see the actual
ACE classes and frameworks in complete detail, and run the code examples
interactively as you read through the book. Pre-compiled versions of ACE
can also be purchased at a nominal cost from http://www.riverace.com.
To learn more about ACE, or to report any errors you find in the book,
we recommend you subscribe to the ACE mailing list, ace-users@cs.wustl.edu. You can subscribe by sending email to the Majordomo list server at ace-users-request@cs.wustl.edu. Include the following command in the body of the email (the subject is ignored):
subscribe ace-users [emailaddress@domain]

You must supply emailaddress@domain only if your message's From address is not the address you wish to subscribe.
Archives of postings to the ACE mailing list are available at http://groups.yahoo.com/group/ace-users. Postings to the ACE mailing list are also forwarded to the USENET newsgroup comp.soft-sys.ace.

Acknowledgements
Champion reviewing honors go to ..., who reviewed the entire book and
provided extensive comments that improved its form and content substantially. Naturally, we are responsible for any remaining problems.
Many other ACE users from around the world provided feedback on
drafts of this book, including Vi Thuan Banh, Kevin Bailey, Alain Decamps,
Dave Findlay, Don Hinton, Martin Johnson, Nick Pratt, Eamonn Saunders,
Michael Searles, Kalvinder Singh, Henny Sipma, Leo Stutzmann, Tommy
Svensson, Dominic Williams, Johnny Willemsen, and Vadim Zaliva.


We are deeply indebted to all the members, past and present, of the
DOC groups at Washington University in St. Louis and the University of
California, Irvine, as well as the team members at Object Computing Inc.
and Riverace Corporation, who developed, refined, and optimized many of
the ACE capabilities presented in this book. This group includes...
We also want to thank the thousands of C++ developers from over fifty countries who've contributed to ACE during the past decade. ACE's excellence and success is a testament to the skills and generosity of many talented developers and the forward-looking companies that have had the vision to contribute their work to ACE's open-source code base. Without their support, constant feedback, and encouragement we would never have written this book. In recognition of the efforts of the ACE open-source community, we maintain a list of all contributors that's available at http://ace.ece.uci.edu/ACE-members.html.
We are also grateful for the support from colleagues and sponsors of
our research on patterns and development of the ACE toolkit, notably the
contributions of ...
Very special thanks go to....
Finally, we would also like to express our gratitude and indebtedness to
the late W. Richard Stevens, the father of network programming literature. His books brought a previously unknown level of clarity to the art and science of network programming. We endeavor to stand on his virtual shoulders, and extend the understanding that Richard's books brought into the world of object-oriented design and C++ programming.

Steve's Acknowledgements

Doug's Acknowledgements


Chapter 1

Object-Oriented Frameworks for Network Programming

Chapter Synopsis
Object-oriented frameworks help reduce the cost and improve the quality
of networked applications by reifying software designs and pattern languages that have proven effective in particular application domains. This
chapter illustrates what frameworks are and shows how they compare and
contrast with other popular reuse techniques, such as class libraries, components, and patterns. We then outline the frameworks in ACE that are
the focus of this book. These frameworks are based on a pattern language
that has been applied to thousands of production networked applications
and middleware systems world-wide.

1.1 An Overview of Object-Oriented Frameworks

Although computing power and network bandwidth have increased dramatically in recent years, the design and implementation of networked application software remains expensive, time-consuming, and error-prone.
The cost and effort stems from the increasing demands placed on networked software and the continual rediscovery and reinvention of core
software design and implementation artifacts throughout the software industry. Moreover, the heterogeneity of hardware architectures, diversity of
OS and network platforms, and stiff global competition are making it increasingly hard to build networked application software from scratch while

also ensuring that it has the following qualities:
- Portability, to reduce the effort required to support applications across heterogeneous OS platforms, programming languages, and compilers
- Flexibility, to support a growing range of multimedia data types, traffic patterns, and end-to-end quality of service (QoS) requirements
- Extensibility, to support successions of quick updates and additions to take advantage of new requirements and emerging markets
- Predictability and efficiency, to provide low latency to delay-sensitive real-time applications, high performance to bandwidth-intensive applications, and usability over low-bandwidth networks, such as wireless links
- Reliability, to ensure that applications are robust, fault tolerant, and highly available and
- Affordability, to ensure that the total ownership costs of software acquisition and evolution are not prohibitively high.

Developing application software that achieves these qualities is hard; systematically developing high-quality reusable middleware for networked applications is even harder [GHJV95]. Reusable software is inherently abstract, which makes it hard to engineer its quality and to manage its production. Moreover, the skills required to develop, deploy, and support reusable networked application software have traditionally been a "black art," locked in the heads of expert developers and architects. When these technical reuse impediments are combined with the myriad of non-technical impediments, such as organizational, economic, administrative, political, and psychological factors, it's not surprising that significant levels of software reuse have been slow to materialize in most projects and organizations [Sch00c].
During the past decade, we've written hundreds of thousands of lines of C++ code developing widely reusable middleware for networked applications as part of our research and consulting with dozens of telecommunication, aerospace, medical, and financial service companies. As a result of our experience, we've documented many patterns and pattern languages [SSRB00] that guided the design of the middleware and applications. In addition, we've taught hundreds of tutorials and courses on reuse, middleware, and patterns for thousands of developers and students. Despite formidable technical and non-technical challenges, we've identified a solid body of work that combines design knowledge, hands-on experience,


and software artifacts that can significantly enhance the systematic reuse
of networked application software.
At the heart of this body of work are object-oriented frameworks [FJS99a, FJS99b], which are a powerful technology for achieving systematic reuse of networked application software.[1] Figure 1.1 illustrates the following
[Figure 1.1: Defining Characteristics of a Framework. The diagram shows a domain-specific framework whose capabilities are connected via event loops and callbacks to application-specific functionality and to networking, GUI, and database components.]


framework characteristics:
1. A framework provides an integrated set of domain-specific structures and functionality. Systematic reuse of software depends largely
on how well a framework captures the pressure points of stability and
variability in an application domain, such as business data processing, telecom call processing, graphical user interfaces, or distributed
object computing middleware.
2. A framework exhibits inversion of control at run-time via callbacks. A callback is an object registered with a dispatcher that calls
back to a method on the object when a particular event occurs, such
as a connection request or data arriving on a socket handle. Inversion
of control decouples the canonical detection, demultiplexing, and dispatching steps within a framework from the application-defined event
handlers managed by the framework. When events occur, the framework calls back to hook methods in the registered event handlers,
which then perform application-defined processing on the events.
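The inversion of control just described can be sketched with a tiny self-contained dispatcher. The names here (Dispatcher, EventHandler, handle_event) are illustrative stand-ins, not the actual ACE API: the application registers handler objects, and the dispatcher, not the application, decides when their hook methods run.

```cpp
#include <string>
#include <vector>

// Hook interface the framework calls back into; applications subclass it
// and register instances with the dispatcher.
class EventHandler {
public:
  virtual ~EventHandler() = default;
  virtual void handle_event(const std::string& event) = 0;
};

// Minimal dispatcher: it, not the application, runs the event loop and
// decides when to invoke each registered handler's hook method.
class Dispatcher {
public:
  void register_handler(EventHandler* handler) { handlers_.push_back(handler); }
  void dispatch(const std::string& event) {
    for (EventHandler* handler : handlers_) handler->handle_event(event);
  }
private:
  std::vector<EventHandler*> handlers_;
};

// Application-defined processing lives in the overridden hook method.
class LoggingHandler : public EventHandler {
public:
  void handle_event(const std::string& event) override { log_.push_back(event); }
  const std::vector<std::string>& log() const { return log_; }
private:
  std::vector<std::string> log_;
};
```

After `dispatcher.dispatch("connection request")`, a registered LoggingHandler's log contains the event even though the application never invoked handle_event() itself; that is the inversion of control.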
3. A framework is a semi-complete application that programmers customize to form complete applications by inheriting and instantiating framework classes. Inheritance enables the features of framework base classes to be shared selectively by subclasses. If a base class provides default implementations of its methods, application developers need only override those virtual methods whose default behavior is inadequate.

[1] In the remainder of this book we use the term framework to mean object-oriented framework.
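The selective overriding described in this third characteristic can be sketched as follows; ServiceHandler and its hook methods are hypothetical names for illustration, not classes from ACE:

```cpp
#include <string>

// Hypothetical framework base class with sensible default hook methods.
class ServiceHandler {
public:
  virtual ~ServiceHandler() = default;
  // Default behavior is often adequate and can be inherited as-is.
  virtual std::string open()  { return "default open"; }
  virtual std::string close() { return "default close"; }
};

// The application overrides only the hook whose default is inadequate;
// close() is inherited unchanged from the framework base class.
class EchoHandler : public ServiceHandler {
public:
  std::string open() override { return "echo service open"; }
};
```

Because the framework dispatches through the base-class interface, an EchoHandler passed to the framework behaves as customized for open() and as the default for close().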
These characteristics of a framework yield the following benefits:

- Since a framework reifies the key roles and relationships of classes in an application domain, the amount of reusable code is increased and the amount of code rewritten for each application is decreased.
- Since a framework exhibits inversion of control, it simplifies application design, since the framework, rather than the application, runs the event loop that detects events, demultiplexes events to event handlers, and dispatches hook methods on the handlers to process the events.
- Since a framework is a semi-complete application, it enables larger-scale reuse of software than reusing individual classes or stand-alone functions because it integrates application-defined and application-independent classes. A framework abstracts the canonical control flow of applications in a particular domain into families of related classes, which then collaborate to integrate generic and customizable application-independent code with concrete customized application-defined code.

Appendix ?? evaluates the pros and cons of frameworks in more detail.

1.2 Applying Frameworks to Network Programming


One reason why it's hard to write robust, extensible, and efficient networked applications is that developers must master many complex network programming concepts and mechanisms, including:
- Network addressing and service identification/discovery
- Presentation layer conversions, such as encryption, compression, and marshaling/demarshaling, to handle heterogeneous end-systems with alternative processor byte-orderings
- Local and remote inter-process communication (IPC) mechanisms
- Synchronous and asynchronous event demultiplexing and event handler dispatching and
- Process/thread lifetime management and synchronization.

Application programming interfaces (APIs) and tools have evolved over the
years to simplify the development of networked applications and middleware. Figure 1.2 illustrates the IPC APIs available on many OS platforms,
such as UNIX and many real-time operating systems. This figure shows
[Figure 1.2: Levels of Abstraction for Network Programming. The diagram shows, from kernel space to user space, kernel-level interfaces (TPI, NPI, DLPI), the Sockets and TLI APIs and the STREAMS framework (open()/close()/putmsg()/getmsg()), and host infrastructure middleware at the highest level of abstraction.]


how applications can access networking APIs for local and remote IPC at
several levels of abstraction. Below, we outline each level of abstraction,
starting from the low-level kernel APIs to the native OS user-level networking APIs and host infrastructure middleware.
Kernel-level networking APIs. Lower-level networking APIs are available in an OS kernel's I/O subsystem. For example, the UNIX putmsg() and getmsg() system calls can be used to access the transport provider interface (TPI) [OSI92b] and the data-link provider interface (DLPI) [OSI92a] available in System V STREAMS [Rit84]. It's also possible to develop network services, such as routers [KMC+ 00], network file systems [WLS+ 85], or even Web servers [JKN+ 01], that reside entirely within an OS kernel. Programming directly to kernel-level networking APIs is rarely portable between different OS platforms. It's often not even portable across different versions of the same OS! Since kernel-level programming isn't used in most networked applications, we won't discuss it further in this book. See [Rag93] and [SW93] for coverage of these topics in the context of System V UNIX and BSD UNIX, respectively.
User-level networking APIs. Networking protocol stacks in most commercial operating systems reside within the protected address space of the
OS kernel. Applications running in user-space access the kernel-resident
protocol stacks via OS IPC APIs, such as the Socket or TLI APIs. These


APIs collaborate with an OS kernel to access the capabilities shown in the following table:
Capability                    Description
Local context management      Manage the lifetime of local and remote
                              communication endpoints.
Connection establishment      Enable applications to establish connections
and connection termination    actively or passively with remote peers and to
                              shut down all or part of the connections when
                              transmissions are complete.
Options management            Negotiate and enable/disable certain options.
Data transfer mechanisms      Exchange data between connected peer
                              applications.
Network addressing            Convert humanly-readable names to low-level
                              network addresses and vice versa.

These capabilities are described in Chapter 2 of [SH02] in the context of the Socket API.
Many IPC APIs are modeled loosely on the UNIX file I/O API, which defines the open(), read(), write(), close(), ioctl(), lseek(), and select() functions [Rit84]. Networking APIs provide additional functionality that's not supported directly by the standard UNIX file I/O APIs due to syntactic and semantic differences between file I/O and network I/O. For example, the pathnames used to identify files on a UNIX system aren't globally unique across hosts in a heterogeneous distributed environment. Different naming schemes, such as IP host addresses and TCP/UDP port numbers, have therefore been devised to uniquely identify communication endpoints used by networked applications.
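A minimal sketch of this naming scheme uses the standard getaddrinfo() resolver to turn a host name and service into the IP-address/port pair that identifies an endpoint (POSIX-specific and IPv4-only for brevity; not ACE code):

```cpp
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string>

// Resolve a host name and service to a numeric "IP:port" endpoint string,
// illustrating how network endpoints are named, in contrast to file
// pathnames. Returns an empty string on failure.
std::string resolve_endpoint(const char* host, const char* service) {
  addrinfo hints{};
  hints.ai_family = AF_INET;        // IPv4 for brevity
  hints.ai_socktype = SOCK_STREAM;  // TCP
  addrinfo* res = nullptr;
  if (getaddrinfo(host, service, &hints, &res) != 0 || res == nullptr)
    return "";
  char ip[INET_ADDRSTRLEN];
  auto* sin = reinterpret_cast<sockaddr_in*>(res->ai_addr);
  inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof ip);
  unsigned port = ntohs(sin->sin_port);
  freeaddrinfo(res);
  return std::string(ip) + ":" + std::to_string(port);
}
```

For example, resolve_endpoint("127.0.0.1", "8080") yields "127.0.0.1:8080", a name that is globally meaningful across hosts in a way a local file pathname is not.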
Host infrastructure middleware frameworks. Many networked applications exchange messages with clients using various types of synchronous
and asynchronous request/response protocols in conjunction with host infrastructure middleware frameworks. Host infrastructure middleware encapsulates OS concurrency and IPC mechanisms to automate many tedious and error-prone aspects of networked application development, including:
- Connection management and event handler initialization
- Synchronous and asynchronous event detection, demultiplexing, and event handler dispatching
- Message framing atop bytestream protocols, such as TCP
- Presentation conversion issues involving network byte-ordering and parameter (de)marshaling
- Concurrency models and synchronization of concurrent operations
- Networked application composition from dynamically configured services and
- Management of QoS properties, such as scheduling access to processors, networks, and memory.
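One of the capabilities above, message framing atop a bytestream protocol such as TCP, can be sketched as a length-prefix codec. This is an illustrative example of the technique, not code from ACE:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// TCP delivers an undifferentiated byte stream, so middleware commonly
// prepends each message with a fixed-size header (here a 4-byte big-endian
// length) so the receiver can recover message boundaries.
std::vector<uint8_t> frame(const std::string& msg) {
  uint32_t n = static_cast<uint32_t>(msg.size());
  std::vector<uint8_t> out = {
    static_cast<uint8_t>(n >> 24), static_cast<uint8_t>(n >> 16),
    static_cast<uint8_t>(n >> 8),  static_cast<uint8_t>(n)
  };
  out.insert(out.end(), msg.begin(), msg.end());
  return out;
}

// Extract complete messages from a receive buffer, leaving any partial
// frame in place until the next read delivers the rest of its bytes.
std::vector<std::string> deframe(std::vector<uint8_t>& buf) {
  std::vector<std::string> msgs;
  while (buf.size() >= 4) {
    uint32_t n = (uint32_t(buf[0]) << 24) | (uint32_t(buf[1]) << 16) |
                 (uint32_t(buf[2]) << 8)  |  uint32_t(buf[3]);
    if (buf.size() < 4 + n) break;  // wait for more data
    msgs.emplace_back(buf.begin() + 4, buf.begin() + 4 + n);
    buf.erase(buf.begin(), buf.begin() + 4 + n);
  }
  return msgs;
}
```

Two framed messages concatenated into one buffer, as TCP may deliver them, deframe back into the two original messages; a truncated frame stays buffered until more bytes arrive.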

The increasing availability and popularity of high-quality and affordable host infrastructure middleware is helping to raise the level of abstraction at which networked application developers can work effectively. Even so, it's wise to understand how lower-level IPC mechanisms work to fully comprehend the challenges that will arise when designing your networked applications.

1.3 An Overview of the ACE Frameworks

The ADAPTIVE Communication Environment (ACE) is a highly portable, widely used, open-source host infrastructure middleware that can be downloaded from http://ace.ece.uci.edu/ or http://www.riverace.com. The ACE
library contains 240,000 lines of C++ code and 500 classes. To separate
concerns, reduce complexity, and permit functional subsetting, ACE is designed using a layered architecture [BMR+ 96], shown in Figure 1.3. The
foundation of the ACE toolkit is its combination of an OS adaptation layer
and C++ wrapper facades [SSRB00], which together encapsulate core OS
concurrent network programming mechanisms. The higher layers of ACE
build upon this foundation to provide reusable frameworks, networked service components, and standards-based middleware. Together, these middleware layers simplify the creation, composition, configuration, and porting of networked applications without incurring significant performance
overhead.
The ACE wrapper facades for native OS IPC and concurrency mechanisms, and the standards-based middleware based upon and bundled with ACE, were described in [SH02]. This book focuses on the ACE frameworks that help developers produce portable, scalable, efficient, and robust networked applications. These frameworks have also been used to build higher-level standards-based middleware, such as The ACE ORB (TAO) [SLM98], which is a CORBA-compliant [Obj01] Object Request Broker (ORB) implemented using the frameworks in ACE. Figure 1.4 illustrates the ACE frameworks described in this book, which implement a pattern language for programming concurrent object-oriented networked applications.

[Figure 1.3 depicts ACE's layers from bottom to top: general operating system services and C APIs (the process/thread subsystem; the communication subsystem with sockets/TLI, Win32 named pipes and UNIX stream pipes, UNIX FIFOs, and select()/I/O completion ports; and the virtual memory and file subsystem with dynamic linking, shared memory, and file system APIs); the OS adaptation layer; the C++ wrapper facade layer (process/thread managers, synch wrappers, SPIPE SAP, SOCK SAP/TLI SAP, FIFO SAP, Log Msg, shared malloc, mem map, and file SAP); the framework layer (Reactor/Proactor, Service Configurator, Service Handler, Acceptor, Connector, CORBA Handler, and Streams); and the networked service components layer (token, logging, gateway, name, and time servers). The JAWS adaptive web server and The ACE ORB (TAO) are standards-based middleware built atop these layers.]

Figure 1.3: The Layered Architecture of ACE
ACE Reactor and Proactor frameworks. These frameworks implement the Reactor and Proactor patterns [SSRB00], respectively. Reactor is an architectural pattern that allows event-driven applications to synchronously process requests that are delivered to an application from one or more clients. Proactor is an architectural pattern that allows event-driven applications to process requests triggered by the completion of asynchronous operations, achieving the performance benefits of concurrency without incurring many of its liabilities. The Reactor and Proactor frameworks automate the detection, demultiplexing, and dispatching of application-defined handlers in response to various types of I/O-based, timer-based, signal-based, and synchronization-based events. Chapter 3 describes the Reactor framework and Chapter ?? describes the Proactor framework.


[Figure 1.4 shows the frameworks in ACE and the patterns they implement: Reactor, Proactor, Active Object, Monitor Object, Half-Sync/Half-Async, Acceptor-Connector, Streams, and Service Configurator.]
Figure 1.4: The Frameworks in ACE


ACE Service Configurator framework. This framework implements the
Component Configurator pattern [SSRB00], which is a design pattern that
allows an application to link/unlink its component implementations at
run-time without having to modify, recompile, or relink the application
statically. The ACE Service Configurator framework supports the configuration of applications whose services can be assembled dynamically late
in the design cycle, i.e., at installation-time and/or run-time. Applications
with high availability requirements, such as mission-critical systems that
perform on-line transaction processing or real-time industrial process automation, often require such flexible configuration capabilities. Chapter 4
describes this framework in detail.
ACE Task concurrency framework. This framework implements various concurrency patterns [SSRB00], such as Active Object and Half-Sync/Half-Async:

• Active Object is a design pattern that decouples the thread that executes a method from the thread that invoked it. Its purpose is to enhance concurrency and simplify synchronized access to objects that reside in their own threads of control.

• Half-Sync/Half-Async is an architectural pattern that decouples asynchronous and synchronous processing in concurrent systems, to simplify programming without reducing performance unduly. This pattern introduces two intercommunicating layers, one for asynchronous and one for synchronous service processing. A queueing layer mediates communication between services in the asynchronous and synchronous layers.

Chapter 5 describes the ACE Task framework in detail.


ACE Acceptor-Connector framework. This framework leverages the Reactor and Proactor frameworks by reifying the Acceptor-Connector pattern [SSRB00]. Acceptor-Connector is a design pattern that decouples the
connection and initialization of cooperating peer services in a networked
system from the processing they perform once connected and initialized.
The Acceptor-Connector framework decouples the active and passive initialization roles from application-defined service processing performed by
communicating peer services after initialization is complete. Chapter ??
describes this framework in detail.
ACE Streams framework. This framework implements the Pipes and Filters pattern [BMR+96], which is an architectural pattern that provides a structure for systems that process a stream of data. The ACE Streams framework simplifies the development and composition of hierarchically-layered services, such as user-level protocol stacks and network management agents [SS94]. Chapter ?? describes this framework in detail.
When used together, these ACE frameworks enable the development of networked applications that can be updated and extended without the need to modify, recompile, relink, or restart running applications. ACE achieves this unprecedented flexibility and extensibility by combining:

• C++ language features, such as templates, inheritance, and dynamic binding [Bja00]
• Patterns, such as Strategy [GHJV95] and Component Configurator [SSRB00] and
• OS mechanisms, such as event demultiplexing, IPC, dynamic linking, multithreading, and synchronization [Ste99].

1.4 Comparing Frameworks with Other Reuse Techniques


Object-oriented frameworks don't exist in isolation. Many other reuse techniques are in widespread use, such as class libraries, components, and patterns. In this section, we compare frameworks with these other reuse techniques to illustrate their similarities and differences.


Comparing Frameworks and Class Libraries


Class libraries represent the most common first-generation object-oriented reuse technique [Mey97]. A class provides a general-purpose, reusable building block that can be applied across a wide range of applications. Class libraries support component reuse-in-the-small more effectively than function libraries since classes enhance the cohesion of data and methods that operate on the data. Their scope is somewhat limited, however, since they don't capture the canonical control flow, collaboration, and variability among families of related software artifacts. Developers who apply class library-based reuse must therefore reinvent and reimplement the overall software architecture and much of the control logic for each new application.
Frameworks are second-generation reuse techniques that extend the benefits of class libraries in the following two ways:

1. Frameworks are semi-complete applications that embody domain-specific object structures and functionality. Class libraries are often domain-independent and provide a fairly limited scope of reuse, e.g., the C++ standard library [Bja00] provides classes for strings, vectors, and various abstract data type (ADT) containers. Although these classes can be reused in most application domains, they are relatively low-level. For example, application developers are responsible for (re)writing the glue code that performs the bulk of the application control flow and class integration logic, as shown in Figure 1.5 (1). As a result, the total amount of reuse is relatively small, compared with the amount of application-defined code that must be rewritten for each application.

[Figure 1.5 contrasts (1) a class library architecture, in which application-specific functionality and an event loop glue together math, ADT, database, GUI, and network IPC classes via local invocations, with (2) a framework architecture, in which application-specific event handler functionality is invoked via callbacks from the event loops of the GUI, database, and networking frameworks.]

Figure 1.5: Class Library vs. Framework Architectures
In contrast, classes in a framework collaborate to provide a reusable architecture for a family of related applications. Frameworks can be classified by the techniques used to extend them, which range along a continuum from whitebox frameworks to blackbox frameworks [HJE95], as described below:

• Whitebox frameworks: In this type of framework, extensibility is achieved via object-oriented language features, such as inheritance and dynamic binding. Existing functionality can be reused and customized by inheriting from framework base classes and overriding pre-defined hook methods [Pre94] using patterns such as Template Method [GHJV95], which defines an algorithm with some steps supplied by a derived class. Application developers must have some knowledge of a whitebox framework's internal structure in order to extend it.

• Blackbox frameworks: In this type of framework, extensibility is achieved by defining interfaces that allow objects to be plugged into the framework via composition and delegation. Existing functionality can be reused by defining classes that conform to a particular interface and then integrating these classes into the framework using patterns such as Bridge and Strategy [GHJV95], which provide a blackbox abstraction for selecting one of many algorithms. Blackbox frameworks may be easier to use than whitebox frameworks since application developers needn't have as much knowledge of the framework's internal structure. Blackbox frameworks can be harder to design, however, since framework developers must define crisp interfaces that anticipate a broad range of potential use-cases.
ACE supports both whitebox and blackbox frameworks. For example, its Acceptor-Connector framework described in Chapter ?? defines two types of factories that initialize a new endpoint of communication in response to a connection request from a peer connector:

• The ACE_Acceptor uses the Template Method pattern, which provides a whitebox approach to extensibility, whereas
• The ACE_Strategy_Acceptor uses the Bridge and Strategy patterns, which provide a blackbox approach to extensibility [Sch00b].


Complete networked applications can be composed by customizing the classes and frameworks in ACE.
2. Frameworks are active and exhibit inversion of control at run-time. Classes in a class library are typically passive, i.e., they perform their processing by borrowing the thread of control from self-directed applications that invoke their methods. As a result, developers must continually rewrite much of the control logic needed to glue the reusable classes together to form complete networked applications.

In contrast, frameworks are active, i.e., they direct the flow of control within an application via various callback event handling patterns, such as Reactor, Proactor [SSRB00], and Observer [GHJV95]. These patterns invert the application's flow of control using a design technique known as the Hollywood Principle, i.e., "Don't call us, we'll call you" [Vli98]. Since frameworks are active and manage the application's control flow, they can perform a broader range of activities on behalf of applications than is possible with passive class libraries.

All the ACE frameworks provide inversion of control via callbacks, as shown in the following table:
ACE Framework          Inversion of Control
Reactor and Proactor   Calls back to application-supplied event handlers to perform processing when events occur synchronously and asynchronously, respectively.
Service Configurator   Calls back to application-supplied service objects to initialize, suspend, resume, and finalize them.
Task                   Calls back to an application-supplied hook method to perform processing in one or more threads of control.
Acceptor-Connector     Calls back to service handlers in order to initialize them after they've been connected.
Streams                Calls back to initialize and finalize tasks when they are pushed and popped from a stream.

In practice, frameworks and class libraries are complementary technologies. As shown in Figure 1.6 for instance, the ACE toolkit simplifies the
implementation of its frameworks via its class libraries of containers, such
as queues, hash tables, and other ADTs. Likewise, application-defined
code invoked by event handlers in the ACE Reactor framework can use
the ACE wrapper facades presented in [SH02] and the C++ standard library classes [Jos99] to perform IPC, synchronization, file management,
and string processing operations.


[Figure 1.6 shows application-specific event handler functionality receiving callbacks from the event loops of the ACE Reactor, ACE Task, and ACE Streams frameworks, which are themselves implemented using ADT and IPC classes via local invocations.]

Figure 1.6: Applying Class Libraries to Develop and Use ACE Frameworks
Comparing Frameworks and Components
A component is an encapsulated part of a software system that implements
a specific service or set of services. A component has one or more interfaces
that provide access to its services. Components serve as building blocks for
the structure of a system and can be reused based solely upon knowledge
of their interface protocols. Components can also be plugged in and/or
scripted together to form complete applications, as shown in Figure 1.7.
Common examples of components include ActiveX controls, CORBA object services [Obj98], and JavaBeans [CL97].

[Figure 1.7 shows application-specific functionality and an event loop glued to time, naming, locking, trading, and logging components via remote or local invocations.]

Figure 1.7: Component Architecture
Components are less lexically and spatially coupled than frameworks. For example, applications can reuse components without having to subclass them from existing base classes. In addition, by applying patterns like Proxy [GHJV95] and Broker [BMR+96], components can often be distributed to servers throughout a network and accessed by clients remotely. ACE provides naming, event routing [Sch00a], logging, time synchronization, and network locking components in its networked service components layer, outlined in Chapter 0 of [SH02].
The relationship between frameworks and components is highly synergistic, with neither subordinate to the other [Joh97]. For example, a middleware framework can be used to develop higher-level application components, where component interfaces provide a facade for the internal
class structure of the framework. Likewise, components can be used as
pluggable strategies in blackbox frameworks [HJE95]. Frameworks are
often used to simplify the development of middleware component models [Ann98, BEA99], whereas components are often used to simplify the
development and configuration of networked application software.
Comparing Frameworks and Patterns
Developers of networked applications must address design challenges related to complex topics like connection management, service initialization,
distribution, concurrency control, flow control, error handling, event loop
integration, and dependability. These challenges are often independent
of application-defined requirements. Successful developers resolve these
challenges by applying the following types of patterns [BMR+96]:

• Design patterns, which provide a scheme for refining components of a software system or the relationships between them, and describe a commonly-recurring structure of communicating components that solves a general design problem within a particular context.

• Architectural patterns, which express fundamental structural organization schemas for software systems and provide a set of predefined subsystems, specify their responsibilities, and include rules and guidelines for organizing the relationships between them.

• Pattern languages, which define a vocabulary for talking about software development problems and provide a process for the orderly resolution of these problems.
Traditionally, patterns and pattern languages have been locked in the heads of expert developers or buried deep within complex system source code. Allowing this valuable information to reside only in these locations is risky and expensive, however. Capturing and documenting patterns for networked applications therefore helps to:

• Preserve important design information for programmers who enhance and maintain existing software. If this information isn't documented explicitly, it'll be lost over time. In turn, this increases software entropy and decreases software maintainability and quality, since substantial effort may likewise be necessary to reverse engineer the patterns from existing source code.

• Guide design choices for developers who are building new networked applications. Since patterns document the common traps and pitfalls in their domain, they help developers select suitable architectures, protocols, algorithms, and platform features without wasting time and effort (re)implementing solutions that are known to be inefficient or error-prone.

Knowledge of patterns and pattern languages helps to reduce development effort and maintenance costs. Reuse of patterns alone, however, is not sufficient to create flexible and efficient networked application software. Although patterns enable reuse of abstract design and architecture knowledge, software abstractions documented as patterns don't yield reusable code directly. It's therefore essential to augment the study of patterns with the creation and use of frameworks. Frameworks help developers avoid costly reinvention of standard software artifacts by implementing common pattern languages and refactoring common implementation roles.

1.5 Example: A Networked Logging Service


Throughout this book, we illustrate key points and ACE capabilities by extending and enhancing the networked logging service example presented in [SH02]. This service collects and records diagnostic information sent from one or more client applications. Unlike the versions in [SH02], which were a subset of the actual networked logging service in ACE, this book illustrates the full complement of capabilities, patterns, and daemons provided by ACE. Sidebar 1 defines what a daemon is and explains how a process is daemonized.
Figure 1.8 illustrates the application processes and daemons in our
networked logging service, which are described below.
Client application processes run on client hosts and generate log records
ranging from debugging messages to critical error messages. The logging


Sidebar 1: Daemons and Daemonizing

A daemon is a server process that runs continuously in the background, performing various services on behalf of clients [Ste98]. Daemonizing a UNIX process involves the following steps:
1. Dynamically spawning a new server process
2. Closing all unnecessary I/O handles
3. Changing the current filesystem directory away from the initiating user's
4. Resetting the file access creation mask
5. Disassociating from the controlling process group and the controlling terminal and
6. Ignoring terminal I/O-related events and signals.

An ACE server can convert itself into a daemon on UNIX by invoking the static method ACE::daemonize(). A Win32 Service [Ric97] is a form of daemon and can be programmed in ACE using the ACE_NT_Service class.
information sent by a client application indicates the following:
1. The time the log record was created
2. The process identifier of the application
3. The priority level of the log record and
4. A string containing the logging message text, which can vary in size from 0 to a maximum length, such as 4 Kbytes.

Client logging daemons run on every host machine participating in the networked logging service. Each client logging daemon receives log records from client applications via some form of local IPC mechanism, such as shared memory, pipes, or sockets. The client logging daemon converts header fields from the received log records into network byte order and uses a remote IPC mechanism, such as TCP/IP, to forward each record to a server logging daemon running on a designated host.
Server logging daemons collect and output the incoming log records they receive from client applications via client logging daemons. A server logging daemon can determine which client host sent each message by using addressing information it obtains from the underlying Socket API. There's generally one server logging daemon per system configuration, though they can be replicated to enhance fault tolerance.

[Figure 1.8 shows client application processes (P1, P2, P3) on hosts tango and mambo passing log records via local IPC to their host's client logging daemon, which forwards them over TCP connections to the server logging daemon; the server outputs records to a storage device, console, or printer. The application code shown invokes macros such as ACE_DEBUG ((LM_DEBUG, "sending request to server %s", server_host)) and ACE_ERROR (LM_ERROR, "unable to fork in function spawn"), producing records like:
Oct 31 14:48:13 2000@tango.ece.uci.edu@38491@7@client::unable to fork in function spawn
Oct 31 14:50:28 2000@mambo.cs.wustl.edu@18352@2@drwho::sending request to server tango]

Figure 1.8: Processes and Daemons in the Networked Logging Service

1.6 Summary
The traditional method of continually re-discovering and re-inventing core concepts and capabilities in networked application software has kept the costs of engineering these systems unnecessarily high for too long. Object-oriented frameworks are crucial to improving networked application development processes by reducing engineering cycle time and enhancing software quality and performance. A framework is a reusable, semi-complete application that can be specialized to produce custom applications [JF88].
Frameworks can be applied together with patterns and components to improve the quality of networked applications by capturing successful software development strategies. Patterns systematically capture abstract designs and software architectures in a format that's intended to be comprehensible to developers. Frameworks and components reify concrete patterns, designs, algorithms, and implementations in particular programming languages.


The ACE toolkit provides over a half dozen high-quality frameworks that have been developed and refined over scores of person-years. As shown in later chapters, ACE serves as an excellent case study of how to reap the benefits of framework usage and reduce your exposure to its risks. As you read the remainder of the book, keep in mind the following points:
The ACE frameworks are much more than a class library. The ACE frameworks are a collection of classes that collaborate to provide semi-complete networked applications. Whereas the ACE library container classes described in [?] and the wrapper facades described in [SH02] are passive, the ACE frameworks are active and exhibit inversion of control at run-time. The ACE toolkit provides both frameworks and a library of classes to help programmers address a broad range of challenges that arise when developing networked applications.
The ACE frameworks provide a number of benefits, including improved reusability, modularity, and extensibility. The inversion of control in the ACE frameworks augments these benefits by separating application-defined concerns, such as event processing, from core framework concerns, such as event demultiplexing and event handler dispatching. A less tangible, but no less valuable, benefit of the ACE frameworks is the transfer of decades of accumulated knowledge from ACE framework developers to ACE framework users, in the form of expertise embodied in well-tested, easily reusable C++ software artifacts.
Framework developers must resolve many design challenges. One of the most critical design challenges is determining which classes in a framework should be stable and which should be variable. Insufficient stability makes it hard for users to understand and apply the framework effectively, and may not satisfy the QoS requirements of performance-sensitive applications. Conversely, insufficient variation makes it hard for users to customize framework classes, which results in a framework that can't accommodate the functional requirements of diverse applications.
Framework development and training can be expensive. It took scores of person-years to develop and mature the ACE frameworks. This development effort has been amortized over thousands of users, however, who can take advantage of the expertise available from the core ACE development team and experienced programmers throughout the ACE community. This leveraging of expertise is one of the key benefits of open-source projects. Developers can become more proficient with ACE by:

• Incrementally learning enough to complete the tasks at hand via the many examples and tutorials in ACE and
• Leveraging the goodwill and knowledge of the ACE user community via the ACE mailing lists and USENET newsgroup, which are described at http://ace.ece.uci.edu/ACE/.

Foresight and experience help with integratability. Getting frameworks to play together nicely with other frameworks, class libraries, and legacy systems can be hard. ACE's layered architecture provides a good example to follow, since it enables integration at several levels of abstraction. For example, applications can integrate with ACE at its wrapper facade level, its framework level, or at its standards-based middleware level. Different applications benefit from integrating with ACE at different levels.
Framework maintenance and validation is challenging. A framework's extensibility features, such as hierarchies of abstract classes and template parameterization, can make a priori validation hard. Moreover, framework evolution requires detailed knowledge of the areas being worked on, as well as how various framework and application classes collaborate and interact. Addressing these challenges effectively requires experienced developers and a large set of automated regression tests. ACE's open-source development model is also useful, since the large ACE user base provides extensive field testing, along with a collective body of experienced and skilled developers to suggest fixes for problems that arise.
Micro-level efficiency concerns are usually offset by other macro-level framework benefits. The ACE frameworks' virtual methods and additional levels of indirection may incur some micro-level performance degradation on certain OS/compiler platforms. The expertise applied to the ACE frameworks' design and implementation, however, often makes up for this overhead. For example, it may be possible to substitute completely different concurrency and synchronization strategies without affecting application functionality, thereby providing macro-level optimizations. Many ACE frameworks also contain novel optimizations that may not be common knowledge to application developers. Finally, the added productivity benefits of employing the ACE frameworks usually offset any minor performance overheads.


CHAPTER 2

Service and Configuration Design Dimensions

CHAPTER SYNOPSIS

A service is a set of functionality offered to a client by a service provider or server. A networked application can be created by configuring its constituent services together at various points of time, such as compile-time, static link-time, installation/boot-time, or even at run-time. This chapter presents a domain analysis of service and configuration design dimensions that address key networked application properties, including duration and structure, how networked services are identified, and the time at which they are bound together to form complete applications.

2.1 Service Design Dimensions

This section covers the following service design dimensions:

• Short- vs. long-duration services
• Internal vs. external services
• Stateful vs. stateless services
• Layered/modular vs. monolithic services
• Single- vs. multi-service servers
• One-shot vs. standing servers


2.1.1 Short- vs. Long-Duration Services


The services offered by network servers can be classified loosely as short-duration or long-duration. These two different time durations help to determine which protocols to use, as outlined below. The duration reflects how long the service holds system resources, and should be evaluated in relation to server startup and shutdown requirements.
Short-duration services execute in brief, often fixed, amounts of time and
often handle only a single request. Examples of relatively short-duration
services include computing the current time of day, resolving the Ethernet
number of an IP address, and retrieving a disk block from a network file
server. To minimize the amount of time spent setting up a reliable connection, these services are often implemented using connectionless protocols,
such as UDP/IP.
Long-duration services execute for extended, often variable, lengths of
time and may handle numerous requests during their lifetime. Examples
of long-duration services include transferring a large file via FTP, downloading streaming video over HTTP, accessing host resources remotely via
TELNET, or performing remote file system backups over a network. To improve efficiency and reliability, these services are often implemented with
connection-oriented protocols, such as TCP/IP.
Logging service ⇒ From the standpoint of an individual log record, our server logging daemon appears to be a short-duration service. The size of each log record is bounded by its maximum length (e.g., 4 Kbytes) and most messages are much smaller. The actual time spent handling a log record is relatively short. Since a client may transmit many log records in a session, however, we optimize performance by designing client logging daemons to open connections with their peer server logging daemons and then reuse these connections for many logging requests. It would be wasteful and time-consuming to set up and tear down a socket connection for each logging request, particularly if requests are sent frequently. Thus, we model our server logging daemon as a long-duration service.

2.1.2 Internal vs. External Services


Services can be classified as internal or external. The primary tradeoffs in this dimension are service initialization time, safety of one service from another, and simplicity.


Internal services execute within the same address space as the server that
receives the request, as shown in Figure 2.1 (1). As described in Section ??,

Figure 2.1: Internal vs. External Services


an internal service can run iteratively or concurrently in relation to other
internal services.
External services execute in different process address spaces. For instance, Figure 2.1 (2) illustrates a master service process that monitors a
set of network ports. When a connection request arrives from a client, the
master accepts the connection and then spawns a new process to perform
the requested service externally.
Some server frameworks support both internal and external services.
For example, system administrators can choose between internal and external services in INETD by modifying the inetd.conf configuration file as
follows:

• INETD can be configured to execute short-duration services, such as
ECHO and DAYTIME, internally via calls to functions that are statically
linked into the INETD program and
• INETD can also be configured to run longer-duration services, such as
FTP and TELNET, externally by spawning separate processes.
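For concreteness, a few representative inetd.conf entries are sketched below. The server paths and flags are illustrative and vary by platform; the keyword internal in the server-program field marks a service INETD runs in its own address space, while external services name the program to spawn per request:

```
# service  socket  proto  wait/nowait  user  server-program        arguments
echo       stream  tcp    nowait       root  internal
daytime    stream  tcp    nowait       root  internal
ftp        stream  tcp    nowait       root  /usr/sbin/in.ftpd     in.ftpd -l
telnet     stream  tcp    nowait       root  /usr/sbin/in.telnetd  in.telnetd
```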

Internal services usually have lower initialization latency, but they may also
reduce application robustness, since separate functions within a process
aren't protected from one another. For instance, one faulty service can
corrupt data shared with other internal services in the process, which may
produce incorrect results, crash the process, or cause the process to hang
indefinitely. To increase robustness, therefore, mission-critical application
services are often implemented externally in separate processes.
Logging service ⇒ Most of our logging server examples in this book are
designed as an internal service. As long as only one type of service is
configured into our logging server, we needn't protect it from harmful
side-effects of other services. There are valid reasons to protect the processing of
different client sessions from each other, however, so Chapter ?? illustrates
how to implement our logging server as an external service.

2.1.3 Stateful vs. Stateless Services


Services can be classified as stateful or stateless. The amount of state,
or context, information that a service maintains between requests impacts
both service and client complexity and resource consumption.
Stateful services cache certain information, such as session state, authentication keys, identification numbers, and I/O handles, in a server
to reduce communication and computation overhead. For instance, Web
cookies enable state to be preserved on a Web server across multiple page
requests.
Stateless services retain no volatile state within a server. For example, the
Network File System (NFS) provides distributed data storage and retrieval
services that don't maintain volatile state information within a server's
address space. Each request sent from a client is completely self-contained,
with the information needed to carry it out, such as the file handle, byte
count, starting file offset, and user credentials.
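To make the self-contained nature of such requests concrete, the following sketch models an NFS-style read request in C++. The type and field names are hypothetical, chosen for illustration only; the point is that everything the server needs arrives with each request, so no per-client state must survive between calls:

```cpp
#include <cstddef>

// Hypothetical types standing in for real NFS structures.
struct FileHandle  { unsigned long id; };       // which file
struct Credentials { unsigned int uid, gid; };  // caller identity

// A stateless request is completely self-contained: the file handle,
// offset, byte count, and credentials all travel with the request.
struct ReadRequest {
  FileHandle  handle;   // identifies the target file
  long        offset;   // starting byte offset
  std::size_t count;    // number of bytes to read
  Credentials creds;    // revalidated on every request
};
```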
Stateful and stateless services trade off efficiency and reliability, with
the right choice depending on a variety of factors, such as the probability
and impact of host and network failures. Some common network applications,
such as FTP and TELNET, don't require retention of persistent application
state information between consecutive service invocations. These
stateless services are generally fairly simple to configure and reconfigure
reliably. Conversely, a middleware service like CORBA Naming [Obj98]
manages various bindings whose values must be retained even if the server
containing the service crashes.
Logging service ⇒ Our networked logging service is a stateless service. The
logging server processes each record individually, without respect to any
previous or possible future request. Request ordering is not a factor, since
TCP/IP provides an ordered, reliable communication stream.

2.1.4 Layered/Modular vs. Monolithic Services


Service implementations can be classified as layered/modular or monolithic. The primary tradeoffs in this dimension are service reusability, extensibility, and efficiency.
Layered/modular services decompose into a series of partitioned and hierarchically-related tasks. For instance, application families, such as PBX
network management services [SS94], can be specified and implemented
as layered/modular services, as illustrated in Figure 2.2 (1). Each layer

Figure 2.2: Layered/Modular vs. Monolithic Services


can handle a self-contained portion of the overall service, such as input
and output, event analysis and service processing, and event filtering.
Inter-connected services can collaborate by exchanging control and data
messages for incoming and outgoing communication.
Over the years, many communication frameworks have been developed to
simplify and automate the development and configuration of layered/modular
services [SS93]. Well-known examples include System V
STREAMS [Rit84], the x-kernel [HP91], and the ACE Streams framework
covered in Chapter ??. In general, these frameworks decouple the protocol
and service functionality from the following non-functional service design
aspects:

1. Compositional strategies, such as the time and/or order in which
services and protocols are composed together
2. Concurrency and synchronization strategies, such as task- vs.
message-based architectures (described in Section ??) used to execute
services at run-time [SS95b].
Monolithic services are tightly coupled clumps of functionality that aren't
organized hierarchically. They may contain separate modules of functionality
that vaguely resemble layers, but are most often tightly data-coupled
via shared, global variables, as shown in Figure 2.2 (2). They are also often
tightly functionally coupled, with control flow diagrams that look like
spaghetti. Monolithic services are hard to understand, maintain, and extend.
While they may sometimes be appropriate in short-lived, throwaway
prototypes¹ [FY99], they are rarely suitable for software that must
be maintained and enhanced by multiple developers over time.
Developers can often select either layered or monolithic service architectures
to structure their networked applications. There are several advantages
to designing layered/modular services:

• Layering enhances reuse since multiple higher-layer application
components can share lower-layer services
• Implementing applications via an inter-connected series of layered
services enables transparent, incremental enhancement of their functionality
• A layered/modular architecture facilitates macro-level performance
improvements by allowing the selective omission of unnecessary service
functionality and
• Modular designs generally improve the implementation, testing, and
maintenance of networked applications and services.
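As a minimal illustration of the layered/modular idea (a sketch only, not the ACE Streams API), each module below handles one self-contained concern and passes its result to the next layer, so layers can be added, removed, or reordered without touching their neighbors:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Each module is one layer: it processes a message and hands it on.
struct Module {
  virtual ~Module() {}
  virtual std::string process(const std::string &msg) = 0;
};

// Hypothetical layers: one filters empty records, one frames output.
struct Filter : Module {
  std::string process(const std::string &msg) {
    return msg.empty() ? std::string("<dropped>") : msg;
  }
};
struct Framer : Module {
  std::string process(const std::string &msg) { return "[" + msg + "]"; }
};

// Push a message down through the stack of layers in order.
std::string run_stream(const std::vector<Module *> &layers,
                       std::string msg) {
  for (std::size_t i = 0; i < layers.size(); ++i)
    msg = layers[i]->process(msg);
  return msg;
}
```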

There can also be some disadvantages with using a layered/modular
architecture to develop networked applications:

• The modularity of layered implementations can introduce excessive
overhead; e.g., layering may cause inefficiencies if buffer sizes don't
match in adjacent layers, thereby causing additional segmentation,
reassembly, and transmission delays
• Communication between layers must be designed and implemented
properly, which can introduce another source of errors and
• Information hiding within layers can make it hard to allocate and
manage resources predictably and dependably in applications with
stringent real-time and dependability requirements.

¹ After you become proficient with the ACE toolkit it'll be much faster to build a properly
layered prototype than to hack together a monolithic one.

It may seem odd to consider the tradeoffs between layered/modular and
monolithic services in a book on advanced object-oriented design. Yet many
services are implemented monolithically, without regard to structure or
good design principles, even after years of evangelizing higher-level design
methods based on advances in object-oriented techniques. While monolithic
service designs may occasionally make sense, e.g., when prototyping
new capabilities, they are ultimately like the goto: a powerful mechanism
that can easily hurt you.
Logging service ⇒ By carefully separating design concerns, our server logging
daemon is designed using a layered/modular architecture that consists of the
following layers:

1. Event infrastructure layer: The classes in this layer perform
application-independent strategies for detecting and demultiplexing events
and dispatching them to their associated event handlers. Chapter 3
describes how the Reactor pattern and ACE Reactor framework can
be applied to implement a generic event infrastructure layer.
2. Configuration management layer: The classes in this layer perform
application-independent strategies to install, initialize, control, and
terminate service components. Chapter ?? describes how the Component
Configurator pattern and ACE Service Configurator framework
can be applied to implement a generic configuration management
layer.
3. Connection management layer: The classes in this layer perform
application-independent connection and initialization services that
are independent of application functionality. Chapter ?? describes
how the Acceptor-Connector pattern and the ACE Acceptor-Connector
framework can be applied to implement a generic connection management
layer.
4. Application layer: The classes in this layer customize the
application-independent classes provided by the other layers to create concrete
objects that configure applications, process events, establish connections,
exchange data, and perform logging-specific processing. The
remaining chapters in this book illustrate how to implement these
application-level capabilities using the ACE frameworks.

2.1.5 Single- vs. Multi-Service Servers


Protocols and services rarely operate in isolation, but instead are accessed
by applications within the context of a server. Servers can be designed as
either single-service or multi-service. The tradeoffs in this dimension are
resource consumption vs. robustness.
Single-service servers offer only one service. As shown in Figure 2.3 (1),
a service can be internal or external, but there's only a single service per

Figure 2.3: Single-service vs. Multi-service Servers


process. Examples of single-service servers include:

• The RWHO daemon (RWHOD), which reports the identity and number
of active users, as well as host workloads and host availability.
• Early versions of UNIX, which ran standard network services, such as FTP
and TELNET, as distinct single-service daemons that were
initiated at OS boot-time [Ste98].

Each instance of these single-service servers executes externally in a separate
process. As the number of system servers increased, however, this
statically configured, single-service-per-process approach incurred the following
limitations:

• It consumed excessive amounts of OS resources, such as virtual memory
and process table slots
• It caused redundant initialization and networking code to be written
separately for each service program
• It required running processes to be shut down and restarted manually
to install new service implementations and
• It led to ad hoc and non-uniform administrative mechanisms being
used to control different types of services.

Multi-service servers address the limitations of single-service servers
by integrating a collection of single-service servers into a single administrative
unit, as shown in Figure 2.3 (2). This multi-service design yields
the following benefits:

• It reduces the consumption of OS resources by allowing servers to be
spawned on-demand
• It simplifies server development and reuses common code by automatically
(1) daemonizing a server process (as described in Sidebar 1),
(2) initializing transport endpoints, (3) monitoring ports, and (4)
demultiplexing/dispatching client requests to service handlers
• It allows external services to be updated without modifying existing
source code or terminating running server processes and
• It consolidates network service administration via a uniform set of
configuration management utilities, e.g., the INETD super-server provides
a uniform interface for coordinating and initiating external services, such
as FTP and TELNET, and internal services, such as DAYTIME and ECHO.
Logging service ⇒ Our implementations of the networked logging service
in [SH02] used a pair of single-service servers, i.e., one for the client logging
daemon and one for the server logging daemon. In this book, we'll enhance
our implementation so that the various entities in the networked logging
service can be configured via a multi-service super-server similar to INETD.

2.1.6 One-shot vs. Standing Servers


Servers can also be designed as either one-shot or standing. The primary
tradeoffs in this dimension involve how long the server runs and uses system resources, and should be evaluated in relation to server startup and
shutdown requirements.

One-shot servers are spawned on-demand, e.g., by a super-server like
INETD. They perform service requests in a separate thread or process, as
shown in Figure 2.4 (1). One-shot servers terminate after the request that

Figure 2.4: One-shot vs. Standing Servers


triggered their creation completes. This design strategy can consume fewer
system resources, such as virtual memory and process table slots, since
servers don't remain in system memory when they are idle.
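The one-shot strategy can be sketched with the classic POSIX fork-per-request idiom. This is a simplification: a real super-server would accept() a network connection before forking, whereas here the request handling is a placeholder so that the control flow stands out:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Placeholder for servicing exactly one request; returns 0 on success.
int handle_one_request() {
  return 0;
}

// Spawn a one-shot server: the child handles its single request and
// then terminates, releasing its process-table slot and memory. The
// parent (the super-server) returns immediately to keep listening.
pid_t spawn_one_shot_server() {
  pid_t pid = fork();
  if (pid == 0)
    _exit(handle_one_request());  // child: one request, then gone
  return pid;                     // parent: child's process ID
}
```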
Standing servers continue to run beyond the lifetime of any particular
service request they process. Standing servers are often initiated at boot-time
or by a super-server after the first client request. They may receive
connection and/or service requests via local IPC channels, such as named
pipes or sockets, that are attached to a super-server, as shown in Figure 2.4 (2).
Alternatively, a standing server may take ownership of, or inherit, an IPC
channel from the original service invocation. Standing servers
can improve service response time by amortizing the cost of spawning a
server process or thread over a series of client requests. In addition, they
enable applications to reuse endpoint initialization, connection establishment,
endpoint demultiplexing, and service dispatching code.
Logging service ⇒ We implement the client and server logging daemons
in our networked logging service as standing servers, since they run
long-duration services that process requests from many client applications
before terminating. In general, the choice between one-shot and standing
servers is orthogonal to the choice between short- and long-duration services
described in Section 2.1.1. The former design alternative reflects OS
resource management constraints, whereas the latter design alternative is
a property of a service.

2.2 Configuration Design Dimensions

This section covers the following configuration design dimensions:





• Static vs. dynamic naming
• Static vs. dynamic linking
• Static vs. dynamic configuration

2.2.1 Static vs. Dynamic Naming


Applications can be categorized according to whether their services are
named statically or dynamically. The primary tradeoff in this dimension
involves efficiency vs. flexibility.
Statically named services associate the name of a service with object
code that exists at compile-time and/or static link-time. For example,
INETD's internal services, such as ECHO and DAYTIME, are bound to statically
named built-in functions stored internally within the INETD program.

Dynamically named services defer the association of a service name
with the object code that implements the service. Thus, the code needn't be
identified, nor even be written, compiled, and linked, until an application
begins executing the corresponding service at run-time. A common example
of dynamic naming is demonstrated by INETD's handling of TELNET,
which is an external service. External services can be updated by modifying
the inetd.conf configuration file and sending the SIGHUP signal to
the INETD process. When INETD receives this signal, it re-reads its
configuration file and dynamically rebinds the services it offers to their new
executables.

2.2.2 Static vs. Dynamic Linking


Applications can also be categorized according to whether their services
are linked into a program image statically or dynamically. The primary
tradeoffs in this dimension involve efficiency, security, and extensibility.
Static linking creates a complete executable program by binding together
all its object files at compile-time and/or static link-time, as shown in
Figure 2.5 (1).


Figure 2.5: Static Linking vs. Dynamic Linking


Dynamic linking inserts object files into and removes object files from
the address space of a process when a program is invoked initially or updated
at run-time, as shown in Figure 2.5 (2). Modern operating systems
generally support both implicit and explicit dynamic linking:

• Implicit dynamic linking defers most address resolution and relocation
operations until a function is first referenced. This lazy evaluation
strategy minimizes link editing overhead during server initialization.
Implicit dynamic linking is used to implement shared libraries,
also known as dynamic-link libraries (DLLs) [Sol98]. Ideally, only one
copy of DLL code exists, regardless of the number of processes that
execute library code simultaneously. DLLs can therefore reduce the
memory consumption of both a process in memory and its program
image stored on disk.
• Explicit dynamic linking allows an application to obtain, use, and/or
remove the run-time address bindings of certain function- or data-related
symbols defined in DLLs. Common explicit dynamic linking
mechanisms include
  - The POSIX/UNIX dlopen(), dlsym(), and dlclose() functions and
  - The Win32 LoadLibrary(), GetProcAddress(), and FreeLibrary()
    functions.

Developers must consider tradeoffs between flexibility, time and space
efficiency, security, and robustness carefully when choosing between
dynamic and static linking [SSRB00].
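The POSIX functions can be sketched as follows. This example assumes a UNIX-like system where the standard math library is installed under the name libm.so.6 (the library name is platform-specific); it binds to cos() at run-time rather than at link-time:

```cpp
#include <dlfcn.h>

// Look up cos() in the math DLL at run-time and invoke it. Returns
// -2.0 on any failure, a value cos() itself can never produce.
double dynamic_cos(double x) {
  void *dll = dlopen("libm.so.6", RTLD_NOW);   // obtain the binding
  if (dll == 0)
    return -2.0;
  typedef double (*cos_fn)(double);
  cos_fn fn = (cos_fn) dlsym(dll, "cos");      // resolve the symbol
  double result = (fn != 0) ? fn(x) : -2.0;
  dlclose(dll);                                // remove the binding
  return result;
}
```

Programs using these calls typically link against the dynamic-linking library (e.g., -ldl on older Linux systems).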

2.2.3 Static vs. Dynamic Configuration


As described in Section 2.1, networked applications generally offer or use
communication services. Popular services available on the Internet today
include

• Web browsing and content retrieval services, e.g., Alta Vista, Apache,
and Netscape's HTTP server
• Software distribution services, e.g., Castanet
• Electronic mail and network news transfer services, e.g., sendmail
and nntpd
• File access on remote machines, e.g., ftpd
• Network time protocols, e.g., ntpd
• Payment processing services, e.g., Cybercash and
• Streaming audio/video services, e.g., RealAudio, RealSystem, and
RealPlayer.

By combining the naming and linking dimensions described above, we can
classify these types of networked application services as being either statically
or dynamically configured. The primary tradeoffs in this dimension
involve efficiency, security, and extensibility.
Static configuration refers to the process of initializing an application that
contains statically named services, i.e., developing each service as a separate
function or class and then compiling, linking, and executing them in
a separate OS process. In this case, the services in the application aren't
extensible at run-time. This design may be necessary for secure applications
that contain only trusted services. Statically configured applications
may also be more efficient since code generated by compilers can eliminate
indirections required to support relocatable code.
However, statically configuring services can also yield non-extensible
applications and software architectures. The main problems with static
configuration are:

• It tightly couples the implementation of a particular service with the
configuration of the service with respect to other services in a networked
application and
• It severely limits the ability of system administrators to change the
parameters or configuration of a system to suit local operating conditions
or changing network and hardware configurations.

Dynamic configuration refers to the process of initializing an application
that offers dynamically named services. When combined with explicit
dynamic linking and process/thread creation mechanisms, the services
offered by dynamically configured applications can be extended at
installation/boot-time or even during run-time. This degree of extensibility
helps facilitate the following configuration-related activities:

• Functional subsetting: Dynamic configuration simplifies the steps
necessary to produce subsets of functionality for application families
developed to run on a range of OS platforms. Explicit dynamic linking
enables the fine-grain addition, removal, or modification of services.
In turn, this allows the same framework to be used for space-efficient
embedded applications and for large enterprise distributed applications.
For example, a web browsing application may be able to run on
PDAs, PCs, and/or workstations by dynamically configuring subsets,
such as image rendering, Java capability, printing, or direct phone
number dialing.
• Application workload balancing: It's often hard to determine the
relative processing characteristics of application services a priori since
workloads can vary at run-time. It may therefore be necessary to experiment
with alternative load balancing techniques [OOS01] and system
configurations that locate application services on different host
machines throughout a network. For example, developers may have
the opportunity to place certain services, such as image processing,
on either side of a client/server boundary. Bottlenecks may result if
many services are configured into a server application and too many
active clients access these services simultaneously. Conversely, configuring
many services into clients can result in a bottleneck if clients
execute on cheaper, less powerful machines.
• Dynamic service reconfiguration: Highly available networked applications,
such as mission-critical systems that perform on-line transaction
processing or real-time process control, may require flexible
dynamic reconfiguration management capabilities. For example, it
may be necessary to phase new versions of a service into a server application
without disrupting other services that it's already executing.
Reconfiguration protocols [SSRB00] based on explicit dynamic linking
mechanisms can enhance the functionality and flexibility of networked
applications since they enable services to be inserted, deleted,
or modified at run-time without first terminating and restarting the
underlying process or thread(s) [SS94].
Logging service ⇒ The implementations of our networked logging service
in [SH02] and Chapter 3 are all configured statically. In Chapter ?? in this
book, we describe

• The ACE_DLL class, which is a wrapper facade class that portably
encapsulates the ability to load/unload dynamically linked libraries
(DLLs) and find symbols in them and
• The ACE Service Configurator framework, which can be used to configure
application services dynamically.

Starting with Chapter ??, all our examples are configured dynamically.

2.3 Summary

The way in which application services are structured, instantiated, and
configured has a significant impact on how effectively they use system and
network resources. Efficient resource usage is closely linked to application
response time and overall system performance and scalability. As
important as performance and scalability are, however, a coherent and
modular design is often just as important to maintain and extend applications
over time. Fortunately, performance vs. modularity tradeoffs needn't
be an either/or proposition. By studying the design dimensions in this
chapter carefully and applying the ACE wrapper facades and frameworks
judiciously, you'll be able to create well-designed and highly efficient
networked applications.
Configuration and service location/naming are important topics to understand
and use in the quest to reduce the effects of inherent complexity.
In this chapter, we described the two key design dimensions in this area
as

1. Identifying a particular set of services and
2. Linking these services into the address space of one or more applications.

The remaining chapters in the book describe the ACE frameworks that reify
these design dimensions.

Chapter 3

The ACE Reactor Framework

Chapter Synopsis
This chapter describes the design and use of the ACE Reactor framework. This framework implements the Reactor pattern [SSRB00], which
allows event-driven applications to process events delivered from one or
more clients. In this chapter, we show how to implement a logging server
with a reactor that detects and demultiplexes different types of connection
and data events from various event sources and dispatches the events to
application-defined handlers that process the events.

3.1 Overview

The ACE Reactor framework simplifies the development of event-driven
programs, which characterize many networked applications. Common
sources of events in these applications include I/O operations, signals,
and expiration of timers. In this context, the ACE Reactor framework is
responsible for





• Detecting the occurrence of events from various event sources
• Demultiplexing the events to their pre-registered event handlers and
• Dispatching methods on the handlers, which process the events in an
application-defined manner.
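These three responsibilities can be sketched in a few lines of C++ (illustrative only; the real ACE classes are introduced below). Handlers register for a handle, and the reactor dispatches each ready handle to its pre-registered handler's hook method:

```cpp
#include <map>

// Abstract hook interface, analogous in spirit to ACE_Event_Handler.
class EventHandler {
public:
  virtual ~EventHandler() {}
  virtual int handle_input(int handle) = 0;
};

// A toy reactor: the demultiplexing table maps handles to handlers.
// A real reactor would first detect ready handles, e.g., via select();
// here dispatch() is driven directly so the sketch stays testable.
class Reactor {
public:
  void register_handler(int handle, EventHandler *handler) {
    table_[handle] = handler;
  }
  int dispatch(int ready_handle) {
    std::map<int, EventHandler *>::iterator i = table_.find(ready_handle);
    return (i == table_.end()) ? -1 : i->second->handle_input(ready_handle);
  }
private:
  std::map<int, EventHandler *> table_;  // handle -> handler
};
```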

This chapter describes the following ACE Reactor framework classes
that networked applications can use to detect the occurrence of events
and then demultiplex and dispatch the events to their event handlers:
ACE_Time_Value: Provides a portable representation of time that uses C++ operator overloading to simplify time-related arithmetic and relational operations.
ACE_Event_Handler: Defines an abstract interface for processing various types of I/O, timer, and signal events.
ACE_Timer_Heap, ACE_Timer_List, ACE_Timer_Wheel, ACE_Timer_Hash: Implementations of various ACE timer queues.
ACE_Reactor: The public interface to the ACE Reactor framework.
ACE_Select_Reactor: An ACE_Reactor implementation that uses the select() synchronous event demultiplexer function to detect I/O and timer events.
ACE_TP_Reactor: An ACE_Reactor implementation that uses the select() function to detect events and the Leader/Followers pattern to process the events in a pool of threads.
ACE_WFMO_Reactor: An ACE_Reactor implementation that uses the WaitForMultipleObjects() event demultiplexer function to detect I/O, timer, and synchronization events.

Figure 3.1: The ACE Reactor Framework Classes


The most important relationships between the classes in the ACE Reactor
framework are shown in Figure 3.1. The Reactor pattern described
in [SSRB00] divides its participants into two layers:

• Event infrastructure layer classes that perform application-independent
strategies for demultiplexing indication events to event handlers and
then dispatching the associated event handler hook methods. The
infrastructure layer components in the ACE Reactor framework include
the various implementations of the ACE_Reactor and the ACE timer queue.
• Application layer classes that define concrete event handlers that
perform application-defined processing in their hook methods. In the
ACE Reactor framework, all application layer components are descendants
of ACE_Event_Handler.
The ACE Reactor framework provides the following benefits:

• Broad portability: The framework can be configured to use a wide
range of OS synchronous event demultiplexing mechanisms, such as
select(), which is available on UNIX, Win32, and many real-time
operating systems, and WaitForMultipleObjects(), which is available
only on Win32.
• Automates event detection, demultiplexing, and dispatching:
By eliminating the reliance on non-portable native OS synchronous
event demultiplexing APIs, the ACE Reactor framework provides
applications with a uniform object-oriented event detection, demultiplexing,
and dispatching mechanism. Event handlers can be written
in C++ and registered with the ACE_Reactor to process sets of desired
events.
• Transparent extensibility: The framework employs hook methods
via inheritance and dynamic binding to decouple
  - Lower-level event mechanisms, such as detecting events on multiple
    I/O handles, expiring timers, and demultiplexing and dispatching
    methods of the appropriate event handler to process these events, from
  - Higher-level application event processing policies, such as connection
    establishment strategies, data marshaling and demarshaling, and
    processing of client requests.
This design allows the ACE Reactor framework to be extended transparently
without modifying or recompiling existing application code.
• Increase reuse and minimize errors: Developers who write programs
using native OS synchronous event demultiplexing operations
must reimplement, debug, and optimize the same low-level code for
each application. In contrast, the ACE Reactor framework's event
detection, demultiplexing, and dispatching mechanisms are generic
and can therefore be reused by many networked applications, which
allows developers to focus on higher-level application-defined event
handler policies rather than wrestling repeatedly with low-level mechanisms.
• Efficient demultiplexing: The framework performs its event demultiplexing
and dispatching logic efficiently. For instance, the ACE_Select_Reactor
presented in Section 3.6 uses the ACE_Handle_Set_Iterator class described
in Chapter 7 of [SH02], which uses an optimized implementation of the
Iterator pattern [GHJV95] to avoid examining fd_set bitmasks one bit at a
time. This optimization is based on a sophisticated algorithm that uses the
C++ exclusive-or operator to reduce run-time complexity from O(number of
total bits) to O(number of enabled bits), which can substantially improve
run-time performance for large-scale applications.
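The flavor of that optimization can be sketched with a simpler, well-known bit trick (not necessarily the exact algorithm ACE uses): isolate and clear the lowest enabled bit on each pass, so the loop runs once per enabled bit rather than once per bit position:

```cpp
#include <vector>

// Collect the indices of the enabled bits in a mask. Each iteration
// clears one set bit (mask &= mask - 1), so the loop executes
// O(number of enabled bits) times, not O(width of the mask).
std::vector<int> enabled_bits(unsigned long mask) {
  std::vector<int> bits;
  while (mask != 0) {
    unsigned long lowest = mask & (~mask + 1UL);  // lowest set bit
    int index = 0;
    while ((1UL << index) != lowest)
      ++index;
    bits.push_back(index);
    mask &= mask - 1UL;                           // clear that bit
  }
  return bits;
}
```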
The remainder of this chapter motivates and describes the capabilities
of each class in the ACE Reactor framework. We illustrate how this framework can be used to enhance the design of our networked logging server.
Section ?? describes design rules to follow when applying the ACE Reactor
framework.

3.2 The ACE Time Value Class


Motivation
Different operating systems provide different functions and data structures
to access and manipulate date and time information. For example, UNIX
platforms define the timeval structure as follows:
struct timeval {
  long tv_sec;   // seconds
  long tv_usec;  // microseconds
};

Different date and time representations are used on other OS platforms, such as POSIX, Win32, and proprietary real-time operating systems. Addressing these portability differences in each application is unnecessarily costly, which is why the ACE Reactor framework provides the ACE_Time_Value class.
Class Capabilities
The ACE_Time_Value class applies the Wrapper Facade pattern [SSRB00]
and C++ operator overloading to simplify the use of portable time-related
operations. This class provides the following capabilities:

- It provides a standardized representation of time that's portable across OS platforms.
- It uses operator overloading to simplify time-based comparisons by permitting standard C++ syntax for time-based arithmetic and relational expressions.
- Its constructors and methods normalize time quantities by converting the fields in a timeval structure into a canonical encoding scheme that ensures accurate comparisons.

Figure 3.2: The ACE Time Value Class

The interface for the ACE_Time_Value class is shown in Figure 3.2 and
its key methods are outlined in the following table:
Method            Description
ACE_Time_Value()  Constructors and methods that convert from various
set()             time formats, such as timeval or long, to a
                  normalized ACE_Time_Value.
sec()             Return the number of seconds in an ACE_Time_Value.
usec()            Return the number of microseconds in an
                  ACE_Time_Value.
operator+=        Arithmetic methods that add, subtract, and multiply
operator-=        an ACE_Time_Value.
operator*=

In addition to the methods shown above, the following binary operators are
friends of the ACE_Time_Value class that define arithmetic and relational
operations:
Method        Description
operator+     Arithmetic methods that add and subtract two
operator-     ACE_Time_Values.
operator==    Methods that compare two ACE_Time_Values for
operator!=    equality and inequality.
operator<     Methods that determine ordering relationships
operator>     between two ACE_Time_Values.
operator<=
operator>=

All ACE_Time_Value constructors and methods normalize the time values they operate upon. For example, after normalization, the quantity ACE_Time_Value (1, 1000000) will compare equal to ACE_Time_Value (2). In contrast, a direct bitwise comparison of these non-normalized values won't detect their equality.
Example
The following code creates two ACE_Time_Value objects, which are constructed by adding user-supplied command-line arguments to the current
time. The appropriate ordering relationship between the two ACE_Time_
Values is then displayed:
#include "ace/OS.h"
#include <iostream>
using namespace std;

int main (int argc, char *argv[])
{
  if (argc != 3)
    ACE_ERROR_RETURN ((LM_ERROR,
                       "usage: %s time1 time2\n", argv[0]), 1);

  ACE_Time_Value curtime = ACE_OS::gettimeofday ();
  ACE_Time_Value t1 =
    curtime + ACE_Time_Value (ACE_OS::atoi (argv[1]));
  ACE_Time_Value t2 =
    curtime + ACE_Time_Value (ACE_OS::atoi (argv[2]));

  if (t1 > t2) cout << "timer 1 is greater" << endl;
  else if (t2 > t1) cout << "timer 2 is greater" << endl;
  else cout << "timers are equal" << endl;
  return 0;
}

This program behaves identically on all OS platforms where ACE has been
ported. Sidebar 2 describes how to build the ACE library so that you can
experiment with the examples we present in this book.

3.3 The ACE Event Handler Class


Motivation
Networked applications are often written as reactive servers, which respond to various types of events, such as I/O events, timer events, and
signals. One way to program reactive applications is to define a separate


Sidebar 2: Building ACE and Programs that Use ACE


ACE is open-source software: you can download it from http://ace.ece.uci.edu and build it yourself. Here are some tips to help you understand the source examples we show, and how to build ACE, the examples, and your own applications:

- ACE should be installed into an empty directory. The top-level directory in the distribution is named ACE_wrappers. We refer to this top-level directory as $ACE_ROOT. You should create an environment variable by that name containing the full path to the top-level ACE directory.
- The ACE source and header files reside in $ACE_ROOT/ace.
- This book's networked logging service example source and header files reside in $ACE_ROOT/examples/C++NPv2.
- When compiling your programs, the $ACE_ROOT directory must be added to your compiler's file include path, which is often designated by the -I or /I compiler option.
- The $ACE_ROOT/ACE-INSTALL.html file contains instructions on building and installing ACE and programs that use ACE.

You can also purchase a prebuilt version of ACE from Riverace at a nominal cost. A list of the prebuilt compiler and OS platforms supported by Riverace is available at http://www.riverace.com.

function for each type of event. This approach can become unwieldy, however, since application programmers are responsible for explicitly
1. Keeping track of which functions correspond to which events and
2. Associating data with the functions.
To alleviate both problems, the ACE Reactor framework defines the ACE_
Event_Handler base class.
Class Capabilities
The ACE_Event_Handler base class is the root of all event handlers in
ACE. This class provides the following capabilities:

Figure 3.3: The ACE Event Handler Class






- It defines hook methods for input events, output events, exception events, timer events, and signal events. [1]
- It simplifies the association of data with methods that manipulate the data.
- It centralizes how event handlers can be destroyed when they are no longer needed.
- It holds a pointer to the ACE_Reactor that manages it.

The interface for the ACE_Event_Handler class is shown in Figure 3.3 and its key methods are outlined in the following table:
Method              Description
handle_input()      Hook method called when input events occur, e.g.,
                    connection or data events.
handle_output()     Hook method called when output events are possible,
                    e.g., when flow control abates or a non-blocking
                    connection completes.
handle_exception()  Hook method called when an exceptional event occurs,
                    e.g., a SIGURG signal.
handle_timeout()    Hook method called when a timer expires.
handle_signal()     Hook method called when signaled by the OS, either
                    via POSIX signals or when a Win32 synchronization
                    object transitions to the signaled state.
handle_close()      Hook method that performs user-defined termination
                    activities when one of the handle_*() hook methods
                    outlined above returns -1 or when a remove_handler()
                    method is called explicitly to unregister an event
                    handler.
get_handle()        Returns the underlying I/O handle.
reactor()           Accessors to get/set the ACE_Reactor associated with
                    an ACE_Event_Handler.

Applications can inherit from ACE_Event_Handler to create concrete event handlers, which have the following properties:
[1] On Win32, an ACE_Event_Handler can also handle synchronization events, such as transitioning from the non-signaled to signaled state with Win32 mutexes or semaphores [Sol98].


- They override one or more of the ACE_Event_Handler's handle_*() virtual hook methods to perform application-defined processing in response to the corresponding types of events.
- They are registered with an ACE_Reactor, which then uses the ACE_Event_Handler interface to dispatch hook methods on event handlers that process events.
- Since concrete event handlers are objects rather than functions, it's straightforward to associate data with a handler's hook methods to hold state across multiple callbacks by an ACE_Reactor.
When an application registers a concrete event handler with a reactor, it must indicate what type(s) of event(s) the event handler should process. To designate these event types, ACE defines a typedef called ACE_Reactor_Mask, and ACE_Event_Handler defines the following set of enumeration literals that can be passed as values of the ACE_Reactor_Mask parameter:
Event Type    Description
READ_MASK     Indicates input events, such as data on a socket or file
              handle. A reactor dispatches the handle_input() hook
              method to process input events.
WRITE_MASK    Indicates output events, such as when flow control
              abates. A reactor dispatches the handle_output() hook
              method to process output events.
EXCEPT_MASK   Indicates exceptional events, such as urgent data on a
              socket. A reactor dispatches the handle_exception() hook
              method to process exceptional events.
ACCEPT_MASK   Indicates passive-mode connection events. A reactor
              dispatches the handle_input() hook method to process
              connection events.
CONNECT_MASK  Indicates a non-blocking connection completion. A
              reactor dispatches the handle_output() hook method to
              process non-blocking connection completion events.

All these *_MASK enumeration values are defined as powers of two, so their bits can be OR'd together efficiently to designate a set of events.
Concrete event handlers used for I/O events provide a handle, such as a socket handle, that can be retrieved via the get_handle() hook method. When an application registers a concrete event handler with a reactor, the reactor calls back to the handler's get_handle() method to retrieve the underlying handle. This method can be left as a no-op if a concrete event handler only handles time-driven events. For example, the ACE

timer queue classes described in Section 3.4 use concrete event handlers
to process time-driven events. When a timer managed by this mechanism
expires, the handle_timeout() method of the associated event handler is
invoked by the reactor.
The reactor interprets the return values of the handle_*() hook methods as follows:

- Return value of 0: When a handle_*() method returns 0, it informs the reactor that the event handler wishes to remain registered with the reactor. The reactor will therefore continue to include the handle of this event handler the next time its handle_events() method is invoked. This behavior is common for event handlers whose lifetime extends beyond a single handle_*() method dispatch.
- Return value greater than 0: When a handle_*() method returns a value greater than 0, it informs the reactor that the event handler wishes to be dispatched again before the reactor blocks on its event demultiplexer. This feature is useful for cooperative event handlers to enhance overall system fairness, since it allows one event handler to perform a limited amount of computation, relinquish control, and then allow other event handlers to be dispatched before it regains control.
- Return value less than 0: When a handle_*() method returns a value less than 0, it informs the reactor that the event handler wants to be removed from the reactor's internal tables. In this case, the reactor invokes the event handler's handle_close() hook method and passes it the ACE_Reactor_Mask value corresponding to the handle_*() hook method that returned -1. In addition to the event types shown in the table on page 47, the reactor can pass the following enumeration values defined in ACE_Event_Handler:
Event Type   Description
TIMER_MASK   Indicates time-driven events. A reactor dispatches the
             handle_timeout() hook method to process timeout events.
SIGNAL_MASK  Indicates signal-based events (or synchronization-based
             events on Win32). A reactor dispatches the
             handle_signal() hook method to process signal events.

The handle_close() method can perform user-defined termination activities, such as deleting dynamic memory allocated by the object or closing log files. After the handle_close() method returns, the reactor will remove the associated concrete event handler from its internal tables if the handler is no longer registered for any events.
Example
We'll implement our networked logging server by subclassing from ACE_Event_Handler and driving its processing via the ACE_Reactor's event loop. We have two types of events to handle:

1. Events indicating the arrival of new connections and
2. Events indicating the arrival of log records from connected clients.
We therefore define two types of event handlers:

- Logging_Handler_Adapter: This class processes log records from a connected client. It uses the ACE_SOCK_Stream class from Chapter 3 in [SH02] to receive log records from its client.
- Logging_Acceptor: This class is a factory that dynamically allocates a Logging_Handler_Adapter and initializes it with a newly connected client. It uses the ACE_SOCK_Acceptor class from Chapter 3 in [SH02] to initialize the Logging_Handler_Adapter's ACE_SOCK_Stream.

Both of these classes inherit from ACE_Event_Handler, which enables their handle_input() methods to be dispatched automatically by an ACE_Reactor, as described in Section 3.5.
We start by creating a file called Logging_Acceptor.h that includes
the necessary header files:
#include "ace/Event_Handler.h"
#include "ace/INET_Addr.h"
#include "ace/Log_Record.h"
#include "ace/Reactor.h"
#include "ace/FILE.h"
#include "ace/SOCK_Acceptor.h"
#include "ace/SOCK_Stream.h"
#include "Logging_Handler.h"

The Logging_Handler.h file was defined in Chapter 4 of [SH02] and all other files are defined in the ACE toolkit.
We next show the interface and portions of the implementation of methods in Logging_Acceptor. Although we don't show much error-handling code in this example, a production implementation should take appropriate corrective action if failures occur.
class Logging_Acceptor : public ACE_Event_Handler
{
public:
  Logging_Acceptor (ACE_Reactor *r = ACE_Reactor::instance ())
    : ACE_Event_Handler (r) {}

  // Initialize a passive-mode acceptor socket. The <local_addr>
  // is the address that we're going to listen for connections on.
  virtual int open (const ACE_INET_Addr &local_addr) {
    return peer_acceptor_.open (local_addr);
  }

  // Return the connected socket's I/O handle.
  virtual ACE_HANDLE get_handle () const
  { return peer_acceptor_.get_handle (); }

  // Called by a reactor when there's a new connection to accept.
  virtual int handle_input (ACE_HANDLE h = ACE_INVALID_HANDLE);

  // Called when object is destroyed, e.g., when it's removed
  // from a reactor.
  virtual int handle_close (ACE_HANDLE = ACE_INVALID_HANDLE,
                            ACE_Reactor_Mask = 0) {
    return peer_acceptor_.close ();
  }

  // Returns a reference to the underlying <peer_acceptor_>.
  ACE_SOCK_Acceptor &acceptor () const {
    return const_cast<ACE_SOCK_Acceptor &> (peer_acceptor_);
  }

private:
  // Factory that connects <ACE_SOCK_Stream>s passively.
  ACE_SOCK_Acceptor peer_acceptor_;
};

As shown on page 72, a Logging_Acceptor will be registered with an ACE_Reactor to handle ACCEPT events. Since the passive-mode socket in the ACE_SOCK_Acceptor becomes active when a new connection can be accepted, the reactor dispatches the Logging_Acceptor::handle_input()

method automatically. We'll show this method's implementation on page 52, after defining the following Logging_Handler_Adapter class:
class Logging_Handler_Adapter : public ACE_Event_Handler
{
protected:
  // File where log records are written.
  ACE_FILE_IO log_file_;

  // Connection to remote peer.
  Logging_Handler logging_handler_;

This class inherits from ACE_Event_Handler and adapts the Logging_Handler defined in Chapter 4 of [SH02] for use with the ACE Reactor framework. In addition to a Logging_Handler, each Logging_Handler_Adapter contains an ACE_FILE_IO object to keep a separate log file for each connected client.

The public methods in the Logging_Handler_Adapter class are shown below:
public:
  // Constructor.
  Logging_Handler_Adapter (ACE_Reactor *r)
    : ACE_Event_Handler (r),
      logging_handler_ (log_file_) {}

  // Activate the object.
  virtual int open ();

  // Get the I/O handle of the contained Logging_Handler.
  virtual ACE_HANDLE get_handle () const
  { return peer ().get_handle (); }

  // Called when input events occur, e.g., connection or data.
  virtual int handle_input (ACE_HANDLE h = ACE_INVALID_HANDLE);

  // Called when object is destroyed, e.g., when it's removed
  // from an ACE_Reactor.
  virtual int handle_close (ACE_HANDLE = ACE_INVALID_HANDLE,
                            ACE_Reactor_Mask = 0);

  // Get a reference to the contained ACE_SOCK_Stream.
  ACE_SOCK_Stream &peer () const
  { return logging_handler_.peer (); }
};

Now that we've shown the interface of Logging_Handler_Adapter, we can implement Logging_Acceptor::handle_input(), which is called back automatically by a reactor whenever a new connection can be accepted. This handle_input() method is a factory method that performs the steps necessary to create, connect, and activate a Logging_Handler_Adapter, as shown below:
 1 int Logging_Acceptor::handle_input (ACE_HANDLE)
 2 {
 3   Logging_Handler_Adapter *peer_handler;
 4
 5   ACE_NEW_RETURN (peer_handler,
 6                   Logging_Handler_Adapter (reactor ()),
 7                   -1);
 8
 9   if (peer_acceptor_.accept (peer_handler->peer ()) == -1) {
10     delete peer_handler;
11     return -1;
12   }
13   else if (peer_handler->open () == -1)
14     peer_handler->handle_close ();
15   return 0;
16 }

Lines 3–7: Create a new Logging_Handler_Adapter that will process the new client's logging session.

Lines 9–12: Accept the new connection into the socket handle of the Logging_Handler_Adapter.

Lines 13–14: Activate the connected Logging_Handler_Adapter by invoking its open() method:
 1 int Logging_Handler_Adapter::open ()
 2 {
 3   char filename[MAXHOSTNAMELEN + sizeof (".log")];
 4   ACE_INET_Addr logging_peer_addr;
 5
 6   logging_handler_.peer ().get_remote_addr (logging_peer_addr);
 7   logging_peer_addr.get_host_name (filename, MAXHOSTNAMELEN);
 8   strcat (filename, ".log");
 9
10   ACE_FILE_Connector connector;
11   connector.connect (log_file_,
12                      ACE_FILE_Addr (filename),
13                      0, // No timeout.
14                      ACE_Addr::sap_any, // Ignored.
15                      0, // Don't try to reuse the addr.
16                      O_RDWR|O_CREAT|O_APPEND,
17                      ACE_DEFAULT_FILE_PERMS);
18
19   return reactor ()->register_handler
20     (this, ACE_Event_Handler::READ_MASK);
21 }

Lines 3–8: Determine the host name of the connected client and use this as the filename of the log file.

Lines 10–17: Create or open the file that's used to store log records from a connected client.

Lines 19–20: Use the ACE_Reactor::register_handler() method to register this event handler for READ events with the same reactor used by the Logging_Acceptor. This method is described on page 62 in Section 3.5.
When log records arrive from clients, the reactor will dispatch Logging_Handler_Adapter::handle_input() automatically. This method processes a single log record by calling Logging_Handler::log_record(), which reads the record from the socket and writes it to the log file associated with the client connection, as shown below:
int Logging_Handler_Adapter::handle_input (ACE_HANDLE)
{
  return logging_handler_.log_record ();
}

Since logging_handler_ maintains its own socket handle, the handle_input() method ignores its ACE_HANDLE parameter.

Whenever an error occurs or a client closes a connection to the logging server, the log_record() method returns -1, which the handle_input() method then passes back to the reactor that dispatched it. In turn, this

value causes the reactor to dispatch the Logging_Handler_Adapter::handle_close() hook method, which closes both the socket to the client and the log file and then deletes itself, as follows:
int Logging_Handler_Adapter::handle_close (ACE_HANDLE,
                                           ACE_Reactor_Mask)
{
  logging_handler_.close ();
  log_file_.close ();
  delete this;
  return 0;
}

The use of delete this in handle_close() is valid since this Logging_Handler_Adapter object is allocated dynamically and will no longer be referenced or used by the reactor or any other part of the program. Additional rules for managing the removal of concrete event handlers are described in Section ?? on page ??.

3.4 The ACE Timer Queue Classes


Motivation
Many networked applications require support for time-driven dispatching. For example, web servers require watch-dog timers that release resources if clients don't send an HTTP GET request shortly after they connect. Likewise, the Windows NT Service Control Manager [Sol98] requires services under its control to report their status periodically via heartbeat messages so it can restart services that have terminated abnormally. To relieve application developers of the burden of developing efficient, scalable, and portable time-driven dispatchers in an ad hoc manner, the ACE Reactor framework defines a family of timer queue classes.
Class Capabilities
The ACE timer queue classes allow applications to register time-driven concrete event handlers derived from ACE_Event_Handler. These classes provide the following capabilities:

- They allow applications to schedule event handlers whose handle_timeout() hook method will be dispatched efficiently and scalably at caller-specified times in the future, either once or at periodic intervals.
- They allow applications to cancel a particular timer associated with an event handler, or all timers associated with an event handler.
- They provide a means to configure a timer queue so it can use various time sources, such as ...

Figure 3.4: The ACE Timer Queue Classes


The interfaces and relationships of all the ACE timer queue classes are
shown in Figure 3.4. The key methods in these classes are outlined in the
following table:
Method      Description
schedule()  Schedule an event handler whose handle_timeout() method
            will be dispatched at a caller-specified time in the
            future, either once or at periodic intervals.
cancel()    Cancel a timer associated with a particular event handler
            or all timers associated with an event handler.
expire()    Dispatch the handle_timeout() method of all event handlers
            whose expiration time is less than or equal to the current
            time of day, which is represented as an absolute value,
            e.g., 2001-09-11-08.45.00.

The schedule() method is passed a pointer to an ACE_Event_Handler and a reference to an ACE_Time_Value indicating the absolute point of time in the future when the handle_timeout() hook method should be invoked on the event handler. This method can also be passed the following optional parameters:

- A void pointer that's stored internally by the timer queue and passed back unchanged when the handle_timeout() method is dispatched. This pointer can be used as an asynchronous completion token (ACT) [SSRB00], which allows an application to efficiently demultiplex and process the responses of asynchronous operations it invokes on services. This capability allows the same event handler to be registered with a timer queue at multiple dispatching times in the future.

- An ACE_Time_Value that designates the interval at which the event handler should be dispatched periodically. If this parameter is omitted, the event handler's handle_timeout() method is dispatched just once.
When a timer queue dispatches an event handler's handle_timeout() method, it passes the current time and the void pointer ACT that was passed as a parameter to the schedule() method when the event handler was scheduled originally.

The return value of schedule() uniquely identifies each event handler that's scheduled with a timer queue. Sidebar 3 explains the recycling policy for these timer identifiers. Applications can pass the unique timer identifier to the cancel() method to remove a particular event handler before it expires. Applications can also pass the address of the handler to cancel() to remove all timers associated with a particular event handler. If a non-NULL void pointer is passed to cancel(), it's assigned the ACT passed by the application when the timer was scheduled originally. This makes it possible to allocate ACTs dynamically without incurring memory leaks.

Sidebar 3: Recycling of Timer Identifiers

The ACE timer queue classes combine design patterns, hook methods, and template arguments to provide the following timer queue implementations:

- ACE_Timer_Heap, which is a partially-ordered, almost-complete binary tree implemented in an array [Rob99]. Its average- and worst-case performance for scheduling, canceling, and expiring a concrete event handler is O(lg n). Heap-based timer queues are therefore useful for applications [BL88] and middleware [SLM98] that require predictable and low-latency time-driven dispatching, which is why it's the default timer queue mechanism in the ACE Reactor framework.
- ACE_Timer_Wheel, which uses timing wheels [VL97] that contain a circular buffer designed to schedule, cancel, and dispatch timers in O(1) time in the average case, but O(n) in the worst case.
- ACE_Timer_Hash, which uses a hash table to manage the queue. Like the timing wheel implementation, the average-case time required to schedule, cancel, and expire timers is O(1) and its worst case is O(n).

- ACE_Timer_List, which is implemented as a linked list of absolute timers ordered by increasing deadline. Its average- and worst-case performance for scheduling and canceling timers is O(n), but it uses the least amount of memory of the ACE timer queue implementations.

The ACE Reactor framework allows developers to use any of these timer queue implementations to achieve the functionality needed by their applications, without burdening them with a "one size fits all" implementation. Since the methods in the ACE_Timer_Queue base class are virtual, applications can provide different implementations of other timer queue mechanisms, such as delta lists [Com84]. Like the ACE_Timer_List, a delta list stores event handlers in a list ordered by increasing deadline. Rather than storing absolute times, however, the delay for each event is computed relative to the expiration of the previous event. The first element of the list can therefore be checked and dispatched in O(1) time, though insertion and deletion of a timer may require O(n) time.
Another example of flexibility in the ACE Reactor framework is the time source used by an ACE timer queue. Most ACE timer queues internally use the absolute time of day. While this implementation is fine for many applications, ACE also provides a hook method that allows applications to configure a different time source for a timer queue. For example, the performance counter on Win32 bases its time on system uptime rather than on wall-clock time. System uptime is useful in situations where the system's time of day can be adjusted, but timers must not be affected by changes to the time-of-day clock.
Example
Although the Logging_Acceptor and Logging_Handler_Adapter event
handlers in Section 3.3 implement the logging server functionality correctly, they may consume system resources unnecessarily. For example,
clients can connect to a server and then not send log records for long periods of time. In the example below, we illustrate how to apply the ACE
timer queue mechanisms to reclaim resources from those event handlers
whose clients log records infrequently. Our design is based on the Evictor
pattern [HV99], which describes how and when to release resources, such
as memory and I/O handles, to optimize system resource management.
We use the Evictor pattern in conjunction with the ACE Reactor framework's timer queue mechanisms to check periodically when each registered event handler received its last client log record. If the time since the last log record was received exceeds a designated threshold, the event handler is disconnected from its client, its resources are returned to the OS, and it's removed from the reactor. Clients are responsible for detecting these disconnections and re-establishing them when they need to send more log records.
To implement the Evictor pattern, we extend the Logging_Acceptor and Logging_Handler_Adapter classes shown in Section 3.3 to create the Logging_Acceptor_Ex and Logging_Handler_Adapter_Ex classes. We then register a timer for every instance of Logging_Handler_Adapter_Ex. Since the default ACE timer mechanism (ACE_Timer_Heap) is highly scalable and efficient, its scheduling, cancellation, and dispatching overhead is minimal regardless of the number of registered timers.

We start by creating a new header file called Logging_Acceptor_Ex.h that contains the new subclasses. The changes to the Logging_Acceptor_Ex class are minor. We simply override and modify the handle_input() method to create a Logging_Handler_Adapter_Ex rather than a Logging_Handler_Adapter, as shown below:
#include "Logging_Acceptor.h"

class Logging_Acceptor_Ex : public Logging_Acceptor
{
public:
  int handle_input (ACE_HANDLE) {
    Logging_Handler_Adapter_Ex *peer_handler;

    ACE_NEW_RETURN (peer_handler,
                    Logging_Handler_Adapter_Ex (reactor ()),
                    -1);
    // ... same as Logging_Acceptor::handle_input()
  }
};

In Chapter ??, we'll illustrate how the ACE Acceptor-Connector framework can be used to add new behavior to an event handler without copying or modifying existing code.
We then extend the Logging_Handler_Adapter to create the following
Logging_Handler_Adapter_Ex class:

i
i

i hs_
2001/
page
i

Section 3.4 The ACE Timer Queue Classes

59

class Logging_Handler_Adapter_Ex
  : public Logging_Handler_Adapter
{
private:
  // Time when a client last sent a log record.
  ACE_Time_Value time_of_last_log_record_;

  // Maximum timeout.
  const ACE_Time_Value max_client_timeout_;

  // Max timeout is an hour.
  enum { MAX_CLIENT_TIMEOUT = 3600 };

  // Private destructor ensures dynamic allocation.
  ~Logging_Handler_Adapter_Ex () {}

We implement the Evictor pattern by adding an ACE_Time_Value that keeps track of the time when a client last sent a log record. The methods in the public interface of Logging_Handler_Adapter_Ex are shown below:
public:
  Logging_Handler_Adapter_Ex
    (ACE_Reactor *r,
     const ACE_Time_Value &max_client_timeout
       = ACE_Time_Value (MAX_CLIENT_TIMEOUT))
    : Logging_Handler_Adapter (r),
      time_of_last_log_record_ (0),
      max_client_timeout_ (max_client_timeout) {}

  // Activate the object.
  virtual int open ();

  // Called when input events occur, e.g., connection or data.
  virtual int handle_input (ACE_HANDLE);

  // Called when a timeout expires to check if the client has
  // been idle for an excessive amount of time.
  virtual int handle_timeout (const ACE_Time_Value &tv,
                              const void *act);

  // Called when object is destroyed, e.g., when it's removed
  // from an ACE_Reactor.
  virtual int handle_close (ACE_HANDLE h = ACE_INVALID_HANDLE,
                            ACE_Reactor_Mask close_mask = 0);
};

The handle_input() method notes the time when a log record is received from the connected client and then forwards to its parent's handle_input() method to process the log record, as shown below:

int Logging_Handler_Adapter_Ex::handle_input (ACE_HANDLE h)
{
  time_of_last_log_record_ = ACE_OS::gettimeofday ();
  return Logging_Handler_Adapter::handle_input (h);
}

The open() method is shown next:

int Logging_Handler_Adapter_Ex::open ()
{
  int result = Logging_Handler_Adapter::open ();
  if (result != -1)
    result = reactor ()->schedule_timer
      (this,
       0,                     // No ACT needed.
       max_client_timeout_,   // Initial timeout.
       max_client_timeout_);  // Periodic interval.
  return result;
}

This method first forwards to its parent's open() method, which is defined on page 52. It then calls the ACE_Reactor::schedule_timer() method (described on page 65) to schedule this event handler to be dispatched periodically to check whether its client has sent a log record recently. We schedule the initial timer to expire in max_client_timeout_ seconds and also request that it re-expire every max_client_timeout_ seconds thereafter. When a timer expires, the reactor uses its timer queue mechanism to dispatch the following handle_timeout() hook method automatically:
int Logging_Handler_Adapter_Ex::handle_timeout
  (const ACE_Time_Value &tv, const void *act)
{
  if (ACE_OS::gettimeofday () - time_of_last_log_record_
      > max_client_timeout_)
    reactor ()->remove_handler
      (this, ACE_Event_Handler::READ_MASK);
  return 0;
}

This method checks whether the time elapsed since this event handler last received a log record exceeds the designated max_client_timeout_ threshold. If so, it calls the remove_handler() method, which triggers the reactor to call the following handle_close() hook to remove the event handler from the reactor:
int Logging_Handler_Adapter_Ex::handle_close (ACE_HANDLE,
                                              ACE_Reactor_Mask)
{
  reactor ()->cancel_timer (this);
  return Logging_Handler_Adapter::handle_close ();
}

This method cancels the timer for this event handler and then calls its parent's handle_close() method, which closes the socket to the client and the log file, and then deletes itself.

3.5 The ACE Reactor Class

Motivation
Event-driven networked applications have historically been programmed using native OS mechanisms, such as the Socket API and the select() synchronous event demultiplexer. These implementations are often inflexible, however, since they tightly couple low-level event detection, demultiplexing, and dispatching code with application processing code. Developers must therefore rewrite all this code for each new networked application. This approach is tedious, expensive, error-prone, and unnecessary, however, since event detection, demultiplexing, and dispatching can be performed in an application-independent manner. To decouple this reusable code from the application-defined event processing code, the ACE Reactor framework defines the ACE_Reactor class.
Class Capabilities
The ACE_Reactor class implements the Facade pattern [GHJV95] to define an interface that applications can use to access the various capabilities of the ACE Reactor framework. This class provides the following capabilities:







• It centralizes event loop processing in a reactive application.
• It detects events using a synchronous event demultiplexer, such as select() or WaitForMultipleObjects(), provided by the underlying OS.
• It demultiplexes events to event handlers when the synchronous event demultiplexer indicates the occurrence of designated events.
• It dispatches the appropriate methods on registered event handlers to perform application-defined processing in response to the events.
• It enables other threads in a program to notify a reactor.

By encapsulating low-level OS event demultiplexing mechanisms within an object-oriented C++ interface, the ACE Reactor framework simplifies the development of correct, compact, portable, and efficient event-driven networked applications. Likewise, by separating event detection, demultiplexing, and dispatching mechanisms from application-defined event processing policies, the ACE Reactor framework enhances reuse, improves portability, and enables the transparent extensibility of event handlers.
Figure 3.5: The ACE Reactor Class
The interface for the ACE_Reactor class is shown in Figure 3.5. The
ACE_Reactor has a rich interface that exports all the features in the ACE
Reactor framework. We therefore group the description of its methods into
the six categories described below.
1. Reactor lifecycle management methods. The following methods initialize and terminate an ACE_Reactor:

ACE_Reactor(), open()
    These methods create and initialize instances of a reactor.
~ACE_Reactor(), close()
    These methods clean up the resources allocated when a reactor was initialized.

2. Event handler management methods. The following methods register and remove concrete event handlers from an ACE_Reactor:

register_handler()
    Methods that register concrete event handlers with a reactor.
remove_handler()
    Methods that remove concrete event handlers from a reactor.
mask_ops()
    Performs operations that get, set, add, and clear the event type(s) associated with an event handler and its dispatch mask.
schedule_wakeup()
    Adds the designated masks to an event handler's entry, which must already have been registered via register_handler().
cancel_wakeup()
    Clears the designated masks from an event handler's entry, but doesn't remove the handler from the reactor.
The register_handler() methods can be used with either of the following signatures:

• Two parameters: In this version, the first parameter identifies the event handler and the second indicates the type of event(s) the event handler is registered to process. The method's implementation uses double-dispatching [GHJV95] to obtain a handle by calling back to the event handler's get_handle() method. The advantage of this design is that the wrong handle can't be associated with an event handler accidentally. Most examples in this book therefore use the two-parameter variant of register_handler().
• Three parameters: In this version, an initial parameter is added to pass the handle explicitly. Although this design is potentially more error-prone than the two-parameter version, it allows an application to register the same event handler for multiple handles, which helps conserve memory if an event handler needn't maintain per-handle state. The client logging daemon example in the Example portion of Section 5.2 illustrates the three-parameter variant of register_handler().

Both ACE_Reactor::remove_handler() methods remove an event handler from a reactor so that it's no longer registered for one or more types of events. These methods can be passed either an event handler or a handle, just like the two register_handler() variants described above. When removing a handler, applications also pass a bit-mask consisting of the enumeration literals defined in the table on page 47 to indicate which event types are no longer desired. The event handler's handle_close() method is called soon afterwards to notify it of the removal.² After handle_close() returns and the concrete event handler is no longer registered to handle any events, the ACE_Reactor removes the event handler from its internal data structures.
The mask_ops() method performs operations that get, set, add, and clear the event type(s) associated with an event handler and its dispatch mask. Since mask_ops() assumes that an event handler is already present and doesn't try to remove it, it's more efficient than using register_handler() and remove_handler(). The schedule_wakeup() and cancel_wakeup() methods are simply syntactic sugar for common operations involving mask_ops().

The mask_ops(), schedule_wakeup(), and cancel_wakeup() methods don't cause the reactor to re-examine its set of handlers, i.e., the new masks will only be noticed the next time the reactor's handle_events() method is called. If there's no other activity expected, or you need immediate re-examination of the wait masks, you'll need to call ACE_Reactor::notify() after calling one of these methods, or use the ACE_Reactor's register_handler() or remove_handler() methods instead.

3. Event-loop management methods. After registering its initial concrete event handlers, an application can manage its event loop via the following methods:

² This callback can be prevented by adding the ACE_Event_Handler::DONT_CALL value to the mask passed to remove_handler(), which instructs a reactor not to dispatch the handle_close() method when removing an event handler.

handle_events()
    Wait for events to occur and then dispatch the event handlers associated with these events. A timeout parameter can bound the time spent waiting for events, so that it won't block indefinitely if events never arrive.
run_reactor_event_loop()
    Run the event loop continuously until the handle_events() method returns -1 or the end_reactor_event_loop() method is invoked.
end_reactor_event_loop()
    Instruct a reactor to terminate its event loop so that it can shut down gracefully.
reactor_event_loop_done()
    Returns 1 when the reactor's event loop has been ended, e.g., via a call to end_reactor_event_loop().
The handle_events() method gathers the handles of all registered concrete event handlers, passes them to the reactor's synchronous event demultiplexer, and then blocks for an application-specified time interval awaiting the occurrence of various events, such as the arrival of data on socket handles or the expiration of timer deadlines. When I/O events occur and the handles become active, this method dispatches the appropriate pre-registered concrete event handlers by invoking the handle_*() hook method(s) defined by the application to process the event(s). Sidebar 4 outlines the order in which different types of event handlers are dispatched by handle_events().
4. Timer management methods. By default, the ACE_Reactor uses the ACE_Timer_Heap timer queue mechanism described in Section 3.4 to schedule and dispatch event handlers in accordance with their timeout deadlines. The timer management methods exposed by the ACE_Reactor include:

schedule_timer()
    Register a concrete event handler that will be executed at a user-specified time in the future.
cancel_timer()
    Cancel one or more event handlers that were registered previously.

The ACE_Reactor provides additional capabilities beyond those defined in the ACE timer queue classes. For example, it integrates the dispatching of timers with the dispatching of other types of events, whereas the ACE timer queue classes dispatch only time-driven events. Moreover, unlike the schedule() methods described in Section 3.4, ACE_Reactor::schedule_timer() uses relative time, not absolute time. As a result, we use slightly different names for the ACE_Reactor timer management methods.

Sidebar 4: The Dispatching Order in the ACE Reactor Framework

The ACE Reactor framework can dispatch five different types of events, which are processed in the following order:
1. Time-driven events
2. Notifications
3. Output I/O events
4. Exception I/O events
5. Input I/O events

It's generally a good idea to design applications whose behavior is independent of the order in which the different types of events are dispatched. There are situations, however, where knowledge of the dispatching order of events is necessary.
5. Notification methods. The following methods manage the notification queue that application threads can use to communicate with and control a reactor:

notify()
    Pass an event handler to a reactor and designate which handle_*() method will be dispatched in the context of the reactor. By default, the event handler's handle_except() method is dispatched.
max_notify_iterations()
    Set the maximum number of event handlers that a reactor will dispatch from its notification queue.
purge_pending_notifications()
    Purge a specified event handler, or all event handlers, from the reactor's notification queue.

The ACE_Reactor::notify() method can be used for several purposes:

• It can wake up a reactor whose synchronous event demultiplexer function is waiting for I/O events to occur.
• It can pass event handlers to a reactor without associating the event handlers with I/O or timer events.

By default, the reactor's notification mechanism will dispatch all event handlers in its notification queue. The max_notify_iterations() method can be used to change the number of event handlers dispatched. Setting a low value will improve fairness and prevent starvation, at the expense of increased dispatching overhead.
6. Utility methods. The following methods are also provided by the ACE_Reactor:

instance()
    A static method that returns a pointer to a singleton ACE_Reactor, which is created and managed by the Singleton pattern [GHJV95] combined with the Double-Checked Locking Optimization [SSRB00].
owner()
    Assigns a thread to own a reactor's event loop.

The ACE_Reactor can be used in two ways:

• As a singleton [GHJV95] via the instance() method shown in the table above.
• By instantiating one or more instances. This capability can be used to support multiple reactors within a process. Each reactor is often associated with a thread running at a particular priority [Sch98].

Some reactor implementations, such as the ACE_Select_Reactor described in Section 3.6, only allow one thread to run their handle_events()
method. The owner() method changes the identity of the thread that owns
the reactor to allow this thread to run the reactors event loop.
The ACE Reactor framework uses the Bridge pattern [GHJV95] to decouple the interface of a class from its various implementations. The
ACE_Reactor defines the interface and all the actual event detection, demultiplexing, and dispatching is performed by implementations of this interface. The Bridge pattern allows applications to choose different reactors
without changing their source code, as shown in Figure 3.6.

Figure 3.6: The ACE Reactor Class Hierarchy

The most common implementations of ACE_Reactor (the ACE_Select_Reactor, ACE_TP_Reactor, and ACE_WFMO_Reactor classes) are described in Sections 3.6 through 3.8, respectively. The ACE_Select_Reactor is the default implementation of the ACE_Reactor on all platforms except Win32, which uses the ACE_WFMO_Reactor by default. For situations where the default reactor implementation is inappropriate, developers can choose a different reactor at compile time or at run time. The method names and overall functionality provided by the ACE_Reactor interface remain the same, however. This uniformity stems from the modularity of the ACE Reactor framework's design, which enhances its reuse, portability, and maintainability.
The design of the ACE Reactor framework also enhances extensibility above and below its public interface. For example, its straightforward to extend the logging servers functionality, e.g., to add an authenticated logging feature. Such extensions simply inherit from the ACE_
Event_Handler base class and selectively implement the necessary virtual method(s). Its also straightforward to modify the underlying event
demultiplexing mechanism of an ACE_Reactor without affecting existing
application code. For example, porting the Reactor-based logging server
from a UNIX platform to a Win32 platform requires no visible changes to
application code. In contrast, porting a C implementation of the networked
logging service from select() to WaitForMultipleObjects() would be
tedious and error-prone.
Additional coverage of the ACE Reactor frameworks design appears in
the Reactor patterns Implementation section in Chapter 3 of [SSRB00].
Sidebar 5 describes how you can extend the ACE Reactor framework to
support other event demultiplexers.

Sidebar 5: Extending the ACE Reactor Framework
Example
Now that we've described the classes in the ACE Reactor framework, we show how they can be integrated to implement a complete version of the networked logging server. This server listens on a TCP port number defined in the OS network services file as ace_logger, which is a practice used by many networked servers. For example, the following line might appear in the UNIX /etc/services file:

ace_logger    9700/tcp    # Connection-oriented Logging Service

Client applications can optionally specify the TCP port and the IP address where the client application and logging server should rendezvous to exchange log records. If this information is not specified, however, the port number is looked up in the services database, and the hostname defaults to ACE_DEFAULT_SERVER_HOST, which is defined as localhost on most OS platforms.
The version of the logging server shown below offers the same capabilities as the Reactive_Logging_Server_Ex version in Chapter 7 of [SH02].
Both logging servers run in a single thread of control in a single process,
handling log records from multiple clients reactively. The main difference is
that this version uses the Reactor pattern to factor out the event detection,
demultiplexing, and dispatching code from the logging server application
into the ACE Reactor framework, which dispatches incoming log records
received from clients in a round-robin fashion. This refactoring helps address limitations with the original Reactive_Logging_Server_Ex implementation, which contained the classes and functionality outlined below
that had to be re-written for each new networked application:
Managing various mappings: The Reactive_Logging_Server_Ex server
contained two data structures that performed the following mappings:
 An ACE_Handle_Set mapped socket handles to Logging_Handler
objects, which encapsulate the I/O and processing of log records in
accordance with the logging services message framing protocol.
 An ACE_Hash_Map_Manager mapped socket handles to their associated ACE_FILE_IO objects, which write log records to the appropriate
output file.
Weve removed all the mapping code from this chapters logging server application and refactored it to reuse the capabilities available in the ACE
Reactor framework. Since the framework now provides and maintains this
code, you neednt write it for this or any other networked application.

Event detection, demultiplexing, and dispatching: To detect connection and data events, the Reactive_Logging_Server_Ex server used the ACE::select() synchronous event demultiplexer method. This design has the following drawbacks:

• It works only as long as the OS platform provides select().
• The code for setting up the fd_set structures for the call and interpreting them upon return is error-prone.
• The code that called ACE::select() is hard to reuse for other applications.

The version of the logging server shown below uses the ACE Reactor framework to detect, demultiplex, and dispatch I/O- and time-based events. This framework also supports the integration of signal handling should the need arise in the future. In general, applications use the following steps to integrate themselves into the ACE Reactor framework:

1. Create concrete event handlers by inheriting from the ACE_Event_Handler base class and overriding its virtual methods to handle various types of events, such as I/O events, timer events, and signals.
2. Register the concrete event handlers with an instance of ACE_Reactor.
3. Run an event loop that demultiplexes and dispatches events to the concrete event handlers at run time.
Figure 3.7 illustrates the Reactor-based logging server architecture, which builds upon and enhances our earlier logging server implementations from [SH02].

Figure 3.7: Architecture of Reactor-based Logging Server

To enhance reuse and extensibility, the classes in this figure are designed to decouple the following aspects of the logging server's architecture, described from the bottom to the top of Figure 3.7:

Reactor framework classes. The classes in the Reactor framework encapsulate the lower-level mechanisms that perform event detection and the demultiplexing and dispatching of events to concrete event handler hook methods.

Connection-oriented ACE Socket wrapper facade classes. The ACE_SOCK_Acceptor and ACE_SOCK_Stream classes presented in Chapter 3 of [SH02] are used in this version of the logging server. As in previous versions, the ACE_SOCK_Acceptor accepts network connections from remote clients and initializes ACE_SOCK_Stream objects. An initialized ACE_SOCK_Stream object then processes data exchanged with its connected client.

Concrete logging event handler classes. These classes implement the capabilities specific to the networked logging service. As shown in the Example portion of Section 3.4, the Logging_Acceptor_Ex factory uses an ACE_SOCK_Acceptor to accept client connections. Likewise, the Logging_Handler_Adapter_Ex uses an ACE_SOCK_Stream to receive log records from connected clients. Both classes are descendants of ACE_Event_Handler, so their handle_input() methods receive callbacks from an ACE_Reactor.

Our implementation resides in a header file called Reactor_Logging_Server.h, which includes several header files that provide the various capabilities we'll use in our logging server:
#include "ace/ACE.h"
#include "ace/Reactor.h"
#include "Logging_Acceptor_Ex.h"

We next define the Reactor_Logging_Server class:

template <class ACCEPTOR>
class Reactor_Logging_Server : public ACCEPTOR
{
protected:
  // Pointer to the reactor.
  ACE_Reactor *reactor_;

This class inherits from its ACCEPTOR template parameter. To vary certain aspects of Reactor_Logging_Server's connection establishment and logging behavior, subsequent examples will instantiate it with various types of acceptors, such as the Logging_Acceptor_Ex on page 58. The Reactor_Logging_Server also contains a pointer to the ACE_Reactor that it uses to detect, demultiplex, and dispatch I/O- and time-based events to their event handlers.

The signatures of the public methods in the Reactor_Logging_Server class are shown below:
public:
  // Constructor uses the singleton reactor by default.
  Reactor_Logging_Server
    (int argc,
     char *argv[],
     ACE_Reactor *r = ACE_Reactor::instance ());

  // Destructor.
  virtual ~Reactor_Logging_Server ();
};

This interface differs from the Logging_Server class defined in Chapter 4 of [SH02]. In particular, the Reactor_Logging_Server uses the
ACE_Reactor::handle_events() method to drive application processing
via upcalls to instances of Logging_Acceptor and Logging_Handler_
Adapter. We therefore dont need the wait_for_multiple_events(),
handle_connections(), and handle_data() hook methods that were used
in the reactive logging servers from [SH02].
The Reactor_Logging_Servers constructor performs the steps necessary to initialize the Reactor-based logging server:
1 template <class ACCEPTOR>
2 Reactor_Logging_Server<ACCEPTOR>::Reactor_Logging_Server
3
(int argc, char *argv[], ACE_Reactor *reactor)
4
: reactor_ (reactor) {
5
u_short logger_port = argc > 1 ? atoi (argv[1]) : 0;
6
ACE::set_handle_limit ();
7
8
typename ACCEPTOR::PEER_ADDR server_addr;
9
int result;
10
11
if (logger_port != 0)
12
result = server_addr.set (logger_port, INADDR_ANY);
13
else
14
result = server_addr.set ("ace_logger", INADDR_ANY);
15
if (result == -1) return -1;
16
17
open (server_addr); // Calls ACCEPTOR::open();
18
19
reactor_->register_handler
20
(this, ACE_Event_Handler::ACCEPT_MASK);
21 }

Line 5 Set the port number that'll be used to listen for client connections.

Line 6 Raise the number of available socket handles to the maximum supported by the OS platform.

Lines 8-17 Set the local server address and use it to initialize the passive-mode logging acceptor endpoint.

Lines 19-20 Register this object with the reactor to process ACCEPT events.

The destructor of the Reactor_Logging_Server class removes this object from the reactor, which triggers its inherited ACCEPTOR::handle_close() hook method to close the underlying ACE_SOCK_Acceptor:

template <class ACCEPTOR>
Reactor_Logging_Server<ACCEPTOR>::~Reactor_Logging_Server () {
  reactor_->remove_handler (this,
                            ACE_Event_Handler::ACCEPT_MASK);
}

We conclude with the main() program, which instantiates the Reactor_Logging_Server template with the Logging_Acceptor_Ex class defined on page 58. It then defines an instance of this template class and uses the singleton reactor to drive all subsequent connection and data event processing:

typedef Reactor_Logging_Server<Logging_Acceptor_Ex>
        Server_Logging_Daemon;

int main (int argc, char *argv[])
{
  ACE_Reactor *reactor = ACE_Reactor::instance ();
  Server_Logging_Daemon server (argc, argv, reactor);
  if (reactor->run_reactor_event_loop () == -1)
    ACE_ERROR_RETURN ((LM_ERROR, "%p\n",
                       "run_reactor_event_loop()"), 1);
  return 0;
}

Since all event detection, demultiplexing, and dispatching is handled by the ACE Reactor framework, the reactive logging server implementation shown above is much shorter than the equivalent ones in [SH02].

3.6 The ACE Select Reactor Class

Motivation
At the heart of a reactive server is a synchronous event demultiplexer that continuously detects and reacts to events from clients. The select() function is the most common synchronous event demultiplexer. This system function waits for specified events to occur on a set of I/O handles.³ When one or more of the I/O handles become active, or after a designated period of time elapses, select() returns to its caller. The caller can then process the events indicated by the information returned from select(). Additional coverage of select() is available in Chapter 6 of [SH02] and in [Ste98].
Although select() is available on most OS platforms, programming to the native OS select() C API requires developers to wrestle with many low-level details, such as:

• Setting and clearing bitmasks
• Detecting events and responding to signal interrupts
• Managing internal locks
• Dispatching functions to process I/O, signal, and timer events

To shield programmers from the accidental complexities of these low-level details, the ACE Reactor framework defines the ACE_Select_Reactor class.
Class Capabilities
The ACE_Select_Reactor class is an implementation of the ACE_Reactor interface that uses the select() synchronous event demultiplexer function to detect I/O and timer events. It's the default implementation of ACE_Reactor on all platforms except Win32. In addition to supporting all the features of the ACE_Reactor interface, the ACE_Select_Reactor class provides the following capabilities:

• It supports re-entrant reactor invocations, i.e., applications can call handle_events() from within event handlers that are themselves being dispatched by the reactor.
• It can be configured to be either synchronized or non-synchronized.
• It preserves fairness by dispatching all active handles in the handle sets before calling select() again.

The ACE_Select_Reactor class is a descendant of ACE_Reactor_Impl, as shown in Figure 3.6. It can therefore serve as a concrete implementation of the ACE_Reactor, which plays the role of the interface participant in the Bridge pattern. Internally, the ACE_Select_Reactor is structured as shown in Figure 3.8 and described below:

³ The Win32 version of select() only works on socket handles.

Figure 3.8: The ACE Select Reactor Framework Internals

• It contains three instances of ACE_Handle_Set, one each for read, write, and exception events, which are all passed to select() when an application calls the ACE_Reactor::handle_events() method. As described in Chapter 7 of [SH02], ACE_Handle_Set provides an efficient C++ wrapper facade for the underlying fd_set bitmask data type.
• It also contains three arrays of ACE_Event_Handler pointers, which store pointers to registered concrete event handlers that process the various types of events specified by applications.

The select()-based implementation of the ACE Reactor framework is based upon the ACE_Select_Reactor_T template, which uses the Strategized Locking pattern [SSRB00] to configure the following two types of reactors:

• Non-synchronized: This version is designed to minimize the overhead of event demultiplexing and dispatching for single-threaded applications. It can be configured by parameterizing the ACE_Select_Reactor_T template with an ACE_Null_Mutex.
• Synchronized: This version allows multiple threads to invoke methods on a single ACE_Reactor that's shared by all the threads. It also allows multiple ACE_Reactors to run in separate threads within a process. The ACE_Select_Reactor is an instantiation of ACE_Select_Reactor_T that uses a recursive locking mechanism called an ACE_Token to prevent race conditions and intra-class method deadlock. Sidebar 6 outlines the ACE_Token class.

Only one thread can invoke ACE_Select_Reactor::handle_events() at a time. To enforce this constraint, each ACE_Select_Reactor stores the identity of the thread that owns its event loop. By default, the owner of an ACE_Select_Reactor is the thread that initialized it. The ACE_Select_Reactor::owner() method sets ownership of the ACE_Select_Reactor to the calling thread. This method is useful when the thread running the reactor's event loop differs from the thread that initialized the reactor.

Sidebar 6: The ACE Token Class

ACE_Token is a lock whose interface is compatible with other ACE synchronization wrapper facades, such as ACE_Thread_Mutex or ACE_RW_Mutex from [SH02], but whose implementation differs as follows:

• It implements recursive mutex semantics, whereby a thread that owns the token can reacquire it without deadlocking. Before a token can be acquired by a different thread, however, its release() method must be called the same number of times that acquire() was called.
• Each ACE_Token maintains two FIFO lists that are used to queue up high- and low-priority threads waiting to acquire the token. Threads requesting the token via ACE_Token::acquire_write() are kept in the high-priority list and take precedence over threads that call ACE_Token::acquire_read(), which are kept in the low-priority list. Within a priority list, threads that block while awaiting a token are serviced in FIFO order as other threads release it, which ensures fairness among waiting threads. In contrast, UNIX International and Pthreads mutexes don't strictly enforce thread acquisition order.
• The ACE_Token::sleep_hook() hook method allows a thread to release any resources it's holding before it waits to acquire the token. This capability helps a thread avoid deadlock, starvation, and priority inversion. The sleep_hook() is only invoked if a thread can't acquire a token immediately.

ACE_Select_Reactor and its derived reactors use a subclass of ACE_Token to synchronize multithreaded access to a reactor. The FIFO serving order of ACE_Token ensures that threads waiting on a reactor are served fairly. Requests to change the internal state of a reactor use ACE_Token::acquire_write() to ensure that other waiting threads see the changes as soon as possible. The ACE_Token subclass also overrides the default sleep_hook() method to notify the reactor of pending threads via its notification mechanism.

i
i

i hsbo
2001/
page 7
i

Section 3.6 The ACE Select Reactor Class

Example
The reactive logging server on page 73 is designed to run continuously. There's no way to shut it down gracefully, however, other than to terminate it abruptly, e.g., by sending its process a "kill -9" from a UNIX login console or ending the process via the Windows Task Manager. In this example, we show how to use the ACE_Select_Reactor::notify() mechanism to shut down the logging server gracefully and portably.
The ACE_Select_Reactor implements its notification mechanism via an ACE_Pipe, which is a bi-directional IPC mechanism described in Sidebar 7. The two ends of the pipe play the following roles:
• The writer role: The ACE_Select_Reactor::notify() method exposes this end of the pipe to applications, which use the notify() method to pass event handler pointers to an ACE_Select_Reactor via its notification pipe.
• The reader role: The ACE_Select_Reactor registers this end of the pipe internally with a READ_MASK. When the reactor detects an event in the read-side of its notification pipe, it wakes up and dispatches a user-configurable number of event handlers from the pipe. Unlike other concrete event handlers registered with a reactor, these handlers needn't be associated with I/O-based or timer-based events, which helps improve the ACE_Select_Reactor's dispatching scalability. In particular, there's no requirement that a handler whose pointer you give to ACE_Reactor::notify() has ever been, or ever will be, registered with that reactor.
Figure 3.9 illustrates the structure of the reader and writer roles within
an ACE_Select_Reactor.
We can use the ACE_Select_Reactor's ACE_Pipe-based notification mechanism to shut down our Reactor_Logging_Server gracefully via the following steps:
1. We'll spawn a controller thread that waits for an administrator to pass it commands via its standard input.
2. When the "quit" command is received, the controller thread passes a special concrete event handler to the singleton reactor via its notify() method and then exits the thread.
3. The reactor dispatches this event handler by invoking its handle_exception() method, which calls end_reactor_event_loop() and then deletes itself.

Sidebar 7: The ACE Pipe Class


The ACE_Pipe class provides a portable, bi-directional IPC mechanism that transfers data within an OS kernel. This class is implemented as follows on various OS platforms:
• Via a STREAM pipe on modern UNIX platforms
• Via a socketpair() on legacy UNIX platforms
• Via a connected socket on Win32 platforms
After initializing an ACE_Pipe, applications can obtain its read and write handles via access methods and invoke I/O operations to receive and send data. These handles can also be included in ACE_Handle_Sets passed to ACE::select() or to an ACE_Select_Reactor.
Figure 3.9: The ACE Select Reactor Notification Mechanism
4. The next time the ACE_Reactor::run_reactor_event_loop() method calls handle_events() internally, it will detect that reactor_event_loop_done() is now true, which will cause the main server thread to exit gracefully.
The C++ code below illustrates these four steps. The revised main() function is shown first:
 1  #include "ace/Auto_Ptr.h"
 2
 3  // Forward declarations.
 4  void *controller (void *);
 5  void *event_loop (void *);
 6
 7  typedef Reactor_Logging_Server<Logging_Acceptor_Ex>
 8          Server_Logging_Daemon;
 9  int main (int argc, char *argv[])
10  {
11    auto_ptr <ACE_Select_Reactor> holder (new ACE_Select_Reactor);
12    auto_ptr <ACE_Reactor> reactor (new ACE_Reactor (holder.get ()));
13    ACE_Reactor::instance (reactor.get ());
14
15    Server_Logging_Daemon server (argc, argv, reactor.get ());
16
17    ACE_Thread_Manager::instance ()->spawn
18      (event_loop, ACE_static_cast (void *, reactor.get ()));
19
20    ACE_Thread_Manager::instance ()->spawn
21      (controller, ACE_static_cast (void *, reactor.get ()));
22
23    return ACE_Thread_Manager::instance ()->wait ();
24  }

Lines 1–7 We include the relevant ACE header file, define some forward declarations, and instantiate the Reactor_Logging_Server template with the Logging_Acceptor_Ex class from page 58 to create a Server_Logging_Daemon type definition.
Lines 11–13 We set the implementation of the singleton ACE_Reactor to be an ACE_Select_Reactor.
Lines 15–18 We then create an instance of Server_Logging_Daemon and use the ACE_Thread_Manager singleton described in Chapter 9 of [SH02] to spawn a thread that runs the following event_loop() function:
void *event_loop (void *arg)
{
  ACE_Reactor *reactor = ACE_static_cast (ACE_Reactor *, arg);
  reactor->owner (ACE_OS::thr_self ());
  reactor->run_reactor_event_loop ();
  return 0;
}

Note how we set the owner of the reactor to the ID of the thread that runs the event loop. Section ?? explains the design rule governing the use of thread ownership for the ACE_Select_Reactor.
Lines 20–21 We spawn a thread to run the controller() function, which is shown next.
Line 23 We wait for the other threads to exit before returning from the main() function.
The controller() function is implemented as follows:
 1  void *controller (void *arg)
 2  {
 3    ACE_Reactor *reactor = ACE_static_cast (ACE_Reactor *, arg);
 4
 5    Quit_Handler *quit_handler = new Quit_Handler (reactor);
 6
 7    for (;;) {
 8      std::string user_input;
 9      getline (cin, user_input, '\n');
10      if (user_input == "quit")
11        { reactor->notify (quit_handler); return 0; }
12    }
13    return 0;
14  }

Lines 3–6 After casting the void pointer argument back into an ACE_Reactor pointer, we create a special concrete event handler called Quit_Handler, which is shown shortly below.
Lines 7–12 We then go into a loop that waits for an administrator to type "quit" on the standard input stream. When this occurs, we pass the quit_handler to the reactor via its notify() method and exit the controller thread.
We finally define the Quit_Handler class. Its handle_exception() and handle_close() methods simply shut down the ACE_Select_Reactor's event loop and delete the event handler, respectively, as shown below:

class Quit_Handler : public ACE_Event_Handler {
public:
  Quit_Handler (ACE_Reactor *r): reactor_ (r) {}

  virtual int handle_exception (ACE_HANDLE) {
    reactor_->end_reactor_event_loop ();
    return -1; // Trigger call to handle_close() method.
  }

  virtual int handle_close (ACE_HANDLE, ACE_Reactor_Mask) {
    delete this;
    return 0;
  }

private:
  ACE_Reactor *reactor_;

  // Private destructor ensures dynamic allocation.
  ~Quit_Handler () {}
};

The implementation shown above is portable to all ACE platforms that support threads. Section 3.8 illustrates how to take advantage of Win32-specific features to accomplish the same behavior without needing an additional controller thread.

i
3.7 The ACE TP Reactor Class

Motivation
Although the ACE_Select_Reactor is quite flexible, it's limited because only its owner thread can call its handle_events() event loop method. The ACE_Select_Reactor therefore serializes event processing at the demultiplexing layer, which may be overly restrictive and non-scalable for certain networked applications. One way to solve this problem is to spawn multiple threads and run the event loop of a separate ACE_Select_Reactor instance in each of them. This design can be hard to program, however, since it requires programmers to keep track of tedious bookkeeping information, such as which thread and reactor each event handler is registered with. A more effective way to address the limitations of the ACE_Select_Reactor is to use the ACE_TP_Reactor class provided by the ACE Reactor framework.
Class Capabilities
The ACE_TP_Reactor class is another implementation of the ACE_Reactor interface. This class implements the Leader/Followers architectural pattern [SSRB00], which provides an efficient concurrency model where multiple threads take turns calling select() on sets of I/O handles to detect, demultiplex, dispatch, and process service requests that occur. In addition to supporting all the features of the ACE_Reactor interface, the ACE_TP_Reactor provides the following capabilities:
• It enables a pool of threads to call its handle_events() method, which can improve scalability by handling events on multiple handles concurrently.
• Compared to other thread pool models, such as the Half-Sync/Half-Async model in Chapter 5 of [SH02], the Leader/Followers implementation in ACE_TP_Reactor provides the following performance enhancements:
  – It enhances CPU cache affinity and eliminates the need to allocate memory dynamically and share data buffers between threads,
  – It minimizes locking overhead by not exchanging data between threads,
  – It can minimize priority inversion because no extra queueing is introduced in the server, and
  – It doesn't require a context switch to handle each event, which reduces event dispatching latency.
The ACE_TP_Reactor class is a descendant of ACE_Reactor_Impl, as shown in Figure 3.6. It can therefore serve as a concrete implementation of the ACE_Reactor interface. The ACE_TP_Reactor is structured internally much like the ACE_Select_Reactor. For example, ACE_TP_Reactor uses the same instances of ACE_Handle_Set and the arrays of ACE_Event_Handler pointers described on page 74.
The fundamental difference between the ACE_TP_Reactor and the ACE_Select_Reactor is the way they acquire and release the ACE_Token described in Sidebar 6 on page 76. In the ACE_TP_Reactor, the thread that acquires the token is the leader and the threads waiting to acquire the token are the followers. The leader thread calls select() to wait for events to occur on a set of I/O handles. After select() returns, the leader thread does the following:
• It picks one event and dispatches its associated event handler hook method.
• It suspends the I/O handle so that other threads can't detect events on that handle. Suspension involves removing the handle from the handle set the reactor waits on during select().
• It releases the ACE_Token.
• When the original leader thread returns from its dispatching upcall, it resumes the suspended handle, which adds the handle back to the appropriate handle set.
After the leader thread releases the token to process an event, a follower thread becomes the new leader. The ACE_TP_Reactor therefore allows multiple threads to process events on different handles concurrently. In contrast, the ACE_Select_Reactor holds the token while it dispatches to all handlers whose handles were active in the handle set, which serializes the reactor's dispatching mechanism.
Given the added capabilities of the ACE_TP_Reactor, you may wonder
why anyone would ever use the ACE_Select_Reactor. There are several
reasons:

• Less overhead: Although the ACE_Select_Reactor is less powerful than the ACE_TP_Reactor, it also incurs less overhead. Moreover, single-threaded applications can instantiate the ACE_Select_Reactor_T template with an ACE_Null_Mutex to eliminate the overhead of acquiring and releasing tokens.
• Implicit serialization: The ACE_Select_Reactor is particularly useful when application-level serialization is undesirable. For example, applications that have pieces of data arriving on different handles may prefer handling all the data with the same event handler for simplicity.
Example
To illustrate the power of the ACE_TP_Reactor, we'll revise the main() function from page 78 to use a pool of threads that apply the Leader/Followers pattern to take turns sharing the Reactor_Logging_Server's I/O handles.
 1  #include "ace/Auto_Ptr.h"
 2
 3  // Forward declarations.
 4  void *controller (void *);
 5  void *event_loop (void *);
 6
 7  typedef Reactor_Logging_Server<Logging_Acceptor_Ex>
 8          Server_Logging_Daemon;
 9  const size_t N_THREADS = 4;
10
11  int main (int argc, char *argv[])
12  {
13    size_t n_threads = argc > 1 ? atoi (argv[1]) : N_THREADS;
14
15    auto_ptr <ACE_TP_Reactor> holder (new ACE_TP_Reactor);
16    auto_ptr <ACE_Reactor> reactor (new ACE_Reactor (holder.get ()));
17    ACE_Reactor::instance (reactor.get ());
18
19    Server_Logging_Daemon server (argc, argv, reactor.get ());
20
21    ACE_Thread_Manager::instance ()->spawn_n
22      (n_threads,
23       event_loop,
24       ACE_static_cast (void *, reactor.get ()));
25
26    ACE_Thread_Manager::instance ()->spawn
27      (controller, ACE_static_cast (void *, reactor.get ()));
28
29    return ACE_Thread_Manager::instance ()->wait ();
30  }

Lines 1–7 We include the relevant ACE header file, define some forward declarations, and instantiate the Reactor_Logging_Server template with the Logging_Acceptor_Ex class from page 58 to create a Server_Logging_Daemon type definition.
Line 13 We determine the number of threads to include in the thread pool.
Lines 15–17 We set the implementation of the singleton ACE_Reactor to be an ACE_TP_Reactor.
Lines 19–24 We create an instance of the Server_Logging_Daemon template and then spawn n_threads threads, each of which runs the event_loop() function shown on page 79.
Lines 26–27 We spawn a thread to run the controller() function shown on page 79. Note that the ACE_TP_Reactor ignores the owner() method that's called in this function.
Line 29 We wait for the other threads to exit before exiting the main() function.

3.8 The ACE WFMO Reactor Class


Motivation
Although the select() function is available on most operating systems, it's not always the most efficient or most powerful event demultiplexer on an OS platform. In particular, select() has the following limitations:
• On UNIX platforms, it supports demultiplexing only of I/O handles, such as regular files, terminal and pseudo-terminal devices, STREAMS-based files, FIFOs, and pipes. It does not support demultiplexing of synchronizers, threads, or System V message queues.
• On Win32, select() supports demultiplexing only of socket handles.
• It can be called by only one thread at a time for a particular set of I/O handles, which can degrade potential parallelism.

Sidebar 8: The Win32 WaitForMultipleObjects() Function

The Win32 WaitForMultipleObjects() system function is similar to the select() system function. It blocks on an array of handles until one or more of them become active (which is known as being "signaled" in Win32 terminology) or until the interval in its timeout parameter elapses. It can be programmed to return to its caller either when any one of the handles becomes active or when all the handles become active. In either case, it returns the index of the lowest active handle in the array of handles passed to it as a parameter. Unlike the select() function, which demultiplexes only I/O handles, WaitForMultipleObjects() can wait for many types of Win32 objects, such as a thread, process, synchronizer (e.g., event, semaphore, or mutex), I/O handle, named pipe, socket, directory change notification, console input, or timer.
To alleviate these problems with select(), Win32 defines WaitForMultipleObjects(), which is described in Sidebar 8. Not only can this function work with many types of Win32 handles, multiple threads can also call it concurrently on the same set of handles, thereby enhancing potential parallelism. The WaitForMultipleObjects() function is tricky to use correctly [SS95a], however, for the following reasons:
• Each WaitForMultipleObjects() call returns only a single active handle. Multiple WaitForMultipleObjects() calls must therefore be made to achieve the same behavior as select(), which returns a set of active I/O handles.
• WaitForMultipleObjects() doesn't guarantee a fair distribution of notifications, i.e., the lowest active handle in the array is always returned, regardless of how long other handles further back in the array may have had pending events.
To shield programmers from these low-level details, while preserving fairness and exposing the power of WaitForMultipleObjects() on Win32 platforms, the ACE Reactor framework defines the ACE_WFMO_Reactor class.
Class Capabilities
The ACE_WFMO_Reactor class is yet another implementation of the ACE_Reactor interface. It uses the WaitForMultipleObjects() function to
wait for events to occur on a set of event sources. This class is the default implementation of the ACE_Reactor on Win32 platforms. In addition to supporting all the features of the ACE_Reactor interface, the ACE_WFMO_Reactor class provides the following capabilities:
• It enables a pool of threads to call its handle_events() method concurrently.
• It allows applications to wait for a wide range of events, including I/O events, general Win32 synchronization events (such as mutexes, semaphores, and threads), and timer events.
• It preserves fairness by dispatching all active handles in a handle array before starting again at the beginning of the handle array.
ACE_WFMO_Reactor inherits from ACE_Reactor_Impl, as shown in Figure 3.6. It can therefore serve as a concrete implementation of the ACE_Reactor interface. Just as the internals of the ACE_Select_Reactor are designed to leverage the capabilities of select(), the ACE_WFMO_Reactor is designed to leverage the capabilities of WaitForMultipleObjects(). Its two most significant differences from the ACE_Select_Reactor are:

• Socket event detection: Although socket handles aren't part of the handle space supported by WaitForMultipleObjects(), Win32 provides the following functions to enable the use of socket handles with WaitForMultipleObjects():
  – WSACreateEvent(): Creates a Win32 event object whose handle can be passed to WaitForMultipleObjects().
  – WSAEventSelect(): Associates a set of event types on a given socket handle with an event handle. The occurrence of any of the events causes the event handle to become signaled.
  – WSAEnumNetworkEvents(): Retrieves the set of events that have occurred on a socket handle after its associated event handle is signaled.
The ACE_WFMO_Reactor implementation associates an event handle with each registered socket handle. To maintain the same programming interface and semantics for the ACE_Event_Handler class, the events are demultiplexed internally and mapped to the appropriate callback methods, e.g., the ACCEPT, CLOSE, and READ Win32 events map to the

i
i

i hsbo
2001/
page 8
i

Section 3.8 The ACE WFMO Reactor Class

87

ACE_Event_Handler::handle_input() callback hook method. Since the set of events that WaitForMultipleObjects() can wait for is broader than that for select(), this mapping isolates the Win32-specific mechanisms from the portable interface exported to application designers via the ACE Reactor framework. As a result, the ACE_WFMO_Reactor class requires neither the three arrays of ACE_Event_Handler pointers nor the ACE_Handle_Set class used by the ACE_Select_Reactor. Instead, it allocates a single array of ACE_Event_Handler pointers and an array of handles that it uses to store the concrete event handlers registered by applications.

Multiple event loop threadsIts legal for multiple threads to execute WaitForMultipleObjects() concurrently on the same set of
I/O handles. This feature complicates how ACE_WFMO_Reactor registers and unregisters event handlers since multiple threads accessing
a set of registered handlers may be affected by each change. To execute changes to the registered handler set correctly, the ACE_WFMO_
Reactor therefore defers changes until the internal action stablizes
and the changes can be made safely.
The manner in which the ACE_WFMO_Reactor defers changes makes
one aspect of its behavior different from the ACE_Select_Reactor. In
particular, when an event handlers handle_close() hook method is
invoked (e.g., due to one of the handle_*() methods returning ,1
or by calling ACE_Reactor::remove_handler()), the call to handle_
close() is deferred until the ACE_WFMO_Reactors internal records
are updated. This update may not occur until some time after the
point at which the application requests the event handlers removal.
This means that an application cant delete an event handler immediately after requesting its removal from an ACE_WFMO_Reactor since
the handle_close() method may not have been called on the event
handler yet.

The two areas of difference described above show that two dissimilar event demultiplexing mechanisms can be used effectively and portably by applying patterns and abstraction principles to decouple the common interface from the divergent implementations.


Sidebar 9: WRITE_MASK Semantics on Win32 vs. UNIX

On UNIX platforms, if select() detects a WRITE event and the application doesn't handle the event, select() will renotify the application of the event the next time it's called. In particular, UNIX platforms will generate WRITE events as long as data can be written without the connection becoming flow controlled. In contrast, Win32 platforms don't generate multiple WRITE events while writing remains possible. Instead, they generate only a single WRITE event. On Win32, therefore, you must continue writing until you are flow controlled or the link drops.

Example
This example illustrates how to use the I/O handling capabilities of the ACE_WFMO_Reactor to shut down the Reactor_Logging_Server without the need for an additional controller thread. To accomplish this, we define a different Quit_Handler class than the one shown on page 80.
class Quit_Handler : public ACE_Event_Handler {
private:
  // Must be implemented by the <ACE_WFMO_Reactor>.
  ACE_Reactor *reactor_;

  // Private destructor ensures dynamic allocation.
  Quit_Handler () {}
public:

Like the earlier Quit_Handler, this class inherits from ACE_Event_Handler. It's used in an entirely different manner, however. For example, the constructor of our new Quit_Handler registers itself with a reactor to receive a notification when an event occurs on the standard input:

  Quit_Handler (ACE_Reactor *r): reactor_ (r) {
    reactor_->register_handler (this, ACE_STDIN);
  }

When an event occurs on standard input, the ACE_WFMO_Reactor dispatches the following handle_signal() method, which checks whether an administrator wants to shut down the reactor's event loop:

  virtual int handle_signal (int, siginfo_t *, ucontext_t *) {
    std::string user_input;
    getline (cin, user_input, '\n');
    if (user_input == "quit") {
      reactor_->end_reactor_event_loop ();
      return -1; // Trigger removal of this handler from the reactor.
    }
    return 0;
  }
};

The main() function is similar to the one shown on page 83, with the main differences being:
• The ACE_WFMO_Reactor is used instead of the ACE_TP_Reactor,
• The controller thread is replaced by an instance of Quit_Handler registered with the reactor,
• The calls to ACE_WFMO_Reactor::handle_events() in different threads can actually run concurrently, rather than being serialized using the Leader/Followers pattern as is the case with the ACE_TP_Reactor, and
• The program will run only on Win32 platforms, instead of all ACE platforms.

The complete main() function is shown below:

#include "ace/Auto_Ptr.h"

void *event_loop (void *); // Forward declaration.

typedef Reactor_Logging_Server<Logging_Acceptor_Ex>
        Server_Logging_Daemon;

const size_t N_THREADS = 4;

int main (int argc, char *argv[])
{
  size_t n_threads = argc > 1 ? atoi (argv[1]) : N_THREADS;

  auto_ptr <ACE_WFMO_Reactor> holder (new ACE_WFMO_Reactor);
  auto_ptr <ACE_Reactor> reactor (new ACE_Reactor (holder.get ()));
  ACE_Reactor::instance (reactor.get ());

  Server_Logging_Daemon server (argc, argv, reactor.get ());
  Quit_Handler quit_handler (reactor.get ());

  ACE_Thread_Manager::instance ()->spawn_n
    (n_threads,
     event_loop,
     ACE_static_cast (void *, reactor.get ()));
  return ACE_Thread_Manager::instance ()->wait ();
}

3.9 Summary
This chapter shows how the ACE Reactor framework helps to simplify the development of event-driven networked applications by applying:
• Patterns, such as Wrapper Facade, Facade, Bridge, and Iterator, with
• C++ features, such as classes, inheritance, and dynamic binding.
The ACE Reactor framework provides reusable classes that perform all the lower-level event detection, demultiplexing, and event handler dispatching. Only a small amount of application-defined code is therefore required to implement event-driven applications, such as the logging server shown in Sections 3.3 through 3.5. For example, the code in Sections 3.3 and 3.4 is concerned with application-defined processing activities, such as receiving log records from clients. All applications that reuse the ACE Reactor framework can leverage the knowledge and experience of its skilled middleware developers, as well as its future enhancements and optimizations.
The ACE Reactor framework uses dynamic binding extensively, since the dramatic improvements in clarity, extensibility, and modularity it provides compensate for any decrease in efficiency resulting from indirect virtual table dispatching [HLS97]. The Reactor framework is often used to develop networked applications where the major sources of overhead result from caching, latency, network/host interface hardware, presentation-level formatting, dynamic memory allocation and copying, synchronization, and concurrency management. The additional indirection caused by dynamic binding is therefore usually insignificant by comparison [Koe92]. In addition, good C++ compilers can optimize virtual method overhead away completely via the use of adjustor thunks [Sta96].

CHAPTER 4
The ACE Service Configurator Framework
CHAPTER SYNOPSIS
This chapter describes the design and use of the ACE Service Configurator framework. This framework implements the Component Configurator pattern [SSRB00], which increases system extensibility by decoupling the behavior of services from the point in time when implementations of these services are configured into application processes. We illustrate how the ACE Service Configurator framework can help to improve the extensibility of our networked logging server.

4.1 Overview

Section 2.2 described the naming and linking design dimensions that developers need to consider when configuring networked applications. An extensible strategy for addressing these design dimensions is to apply the Component Configurator design pattern [SSRB00]. This pattern allows an application to link and unlink its services at run time without having to modify, recompile, or statically relink the application. This pattern also supports the reconfiguration of services in an application process without having to shut down and restart the running process.
The ACE Service Configurator framework is a portable implementation of the Component Configurator pattern that allows applications to defer configuration and/or implementation decisions about their services until

late in the design cycle, i.e., at installation time or even during run time. In this chapter, we examine the following classes in the ACE Service Configurator framework:
• ACE_Service_Object: Defines a uniform interface that the ACE Service Configurator framework uses to configure and control the type of application service or functionality provided by a service implementation. Common control operations include initializing, suspending, resuming, and terminating a service.
• ACE_Service_Repository: Manages all the services offered by a Service Configurator-based application and allows an administrator to control the behavior of application services.
• ACE_Service_Repository_Iterator: Provides a portable mechanism for iterating through all the services in a repository.
• ACE_Service_Config: Coordinates the (re)configuration of services by implementing a mechanism that interprets and executes scripts specifying which services to (re)configure into the application, e.g., by linking and unlinking dynamically linked libraries (DLLs), and which services to suspend and resume.

Figure 4.1: The ACE Service Configurator Framework Classes


The most important relationships between the classes in the ACE Service Configurator framework are shown in Figure 4.1. The Component Configurator pattern described in [SSRB00] divides its participants into two layers:
• Configuration management layer classes, which perform application-independent strategies to install, initialize, control, and terminate service objects. The classes in this layer of the ACE Service Configurator framework include ACE_Service_Config, ACE_Service_Repository, and ACE_Service_Repository_Iterator.
• Application layer classes, which implement concrete services to perform application-defined processing. In the ACE Service Configurator framework, all application layer classes are descendants of ACE_Service_Object.

The ACE Service Configurator framework provides the following benefits:

• Uniformity: The framework imposes a uniform interface for managing the (re)configuration of networked services. This uniformity allows services to be treated as building blocks that can be composed to form complete applications. Enforcing a common interface across all services also ensures that they support the same management operations, such as initializing, suspending, resuming, and terminating a service.
• Centralized administration: The framework groups one or more services in an application into a single administrative unit. By default, the framework configures an application process by reading commands from a file called svc.conf. Alternative configuration files can be specified by command-line options or by supplying commands to ACE_Service_Config directly.
• Modularity, testability, and reusability: The framework improves application modularity and reusability by decoupling the implementation of services from the manner in which the services are configured into application processes. This flexibility allows service implementations the freedom to evolve over time largely independent of configuration issues, such as which services should be collocated or which concurrency model will be used to execute the services. In addition, each service can be developed and tested independently, which simplifies service composition.
• Enhanced dynamism and control: By decoupling the application-defined portions of an object from the underlying platform configuration mechanisms, the framework enables a service to be reconfigured dynamically without modifying, recompiling, or statically relinking existing code. It may also be possible to reconfigure a service without restarting the service itself or other services with which it's collocated. Such dynamic reconfiguration capabilities are often required for applications with high availability requirements, such as mission-critical systems that perform on-line transaction processing or real-time industrial process automation.
• Tuning and optimization: The framework increases the range of service configuration alternatives available to developers by decoupling service functionality from service execution mechanisms, which enables the optimization of certain service implementation or configuration parameters. For instance, depending on the parallelism available on the hardware and operating system, it may be either more or less efficient to run multiple services in separate threads or separate processes. The Service Configurator framework enables applications to select and tune these behaviors flexibly at run time, when more information is available to help match client demands and available system processing resources.

The remainder of this chapter motivates and describes the capabilities of each class in the ACE Service Configurator framework. We also illustrate how this framework can be used to enhance the extensibility of our networked logging server.

4.2 The ACE Service Object Class


Motivation
Enforcing a uniform interface across all networked services enables them to be configured and managed consistently. In turn, this consistency simplifies application development and deployment by enabling the creation of reusable administrative configuration tools and promoting the principle of least surprise. To provide a uniform interface between the
ACE Service Configurator framework and the application-supplied services, each service must be a descendant of a common base class called
ACE_Service_Object.
Class Capabilities
The ACE_Service_Object class provides a uniform interface that allows
all service implementations to be configured and managed by the ACE Service Configurator framework. This class provides the following capabilities:

Figure 4.2: The ACE Service Object Class





 It provides hook methods that initialize a service and shut down a service and clean up its resources.
 It provides hook methods to suspend service execution temporarily and to resume execution of a suspended service.
 It provides a hook method that reports key information about the service, such as its purpose and the port number where it listens for client connections.

The interface for the ACE_Service_Object class is shown in Figure 4.2. By inheriting from ACE_Event_Handler and ACE_Shared_Object, subclasses of ACE_Service_Object can be dispatched by the ACE Reactor framework and can be linked/unlinked dynamically from a DLL, respectively. The key methods of ACE_Service_Object are outlined in the following table:
Method      Description
init()      A hook method used by the Service Configurator framework to
            instruct a service to initialize itself. A pair of argc/argv-style
            parameters can be passed to init() and used to control the
            initialization of a service.
fini()      A hook method used by the Service Configurator framework to
            instruct a service to terminate itself. This method typically
            performs termination operations that release dynamically allocated
            resources, such as memory, synchronization locks, or I/O
            descriptors.
suspend()   Hook methods used by the Service Configurator framework to
resume()    instruct a service to suspend and resume its execution.
info()      A hook method used by the Service Configurator framework to
            query a service for certain information about itself, such as its
            name, purpose, and network address. Clients can query a server to
            retrieve this information and use it to contact a particular
            service running in a server.

These hook methods collectively impose a uniform contract between the ACE Service Configurator framework and the services that it manages. Application services that inherit from ACE_Service_Object can selectively override its hook methods, which are called back at the appropriate time by the ACE Service Configurator framework in accordance with various events. This highly extensible technique allows applications to defer the selection of a particular service implementation until late in the design cycle, i.e., at installation-time or even during run-time. Developers can therefore concentrate on a service's functionality and other design dimensions without committing themselves prematurely to a particular service configuration. Developers can also configure complete applications by composing multiple services that are developed independently and therefore don't require a priori global knowledge of each other, yet can still collaborate effectively.
Example
To illustrate the ACE_Service_Object class, we reimplement our Reactor-based logging server from the Example portion of Section 3.5. This revision can be configured dynamically by the ACE Service Configurator framework, rather than configured statically into the main() program shown there. To accomplish this, we'll create the following Reactor_Logging_Server_Adapter template:
template <class ACCEPTOR>
class Reactor_Logging_Server_Adapter : public ACE_Service_Object
{
public:
  // Hook methods inherited from <ACE_Service_Object>.
  virtual int init (int argc, char *argv[]);
  virtual int fini ();
  virtual int info (ACE_TCHAR **, size_t) const;
  virtual int suspend ();
  virtual int resume ();

private:
  Reactor_Logging_Server<ACCEPTOR> *server_;
};

This template inherits from ACE_Service_Object and contains a pointer to the Reactor_Logging_Server template defined on page 71. We've parameterized this template with ACCEPTOR to defer our choice of acceptor until later in the design cycle.
When an instance of Reactor_Logging_Server_Adapter is configured dynamically, the ACE Service Configurator framework invokes its init() method:
template <class ACCEPTOR> int
Reactor_Logging_Server_Adapter<ACCEPTOR>::init (int argc, char *argv[])
{
  server_ = new Reactor_Logging_Server<ACCEPTOR>
              (argc, argv, ACE_Reactor::instance ());
  return 0;
}

This method creates an instance of Reactor_Logging_Server and keeps track of it with the server_ pointer.
When instructed to remove the Reactor_Logging_Server_Adapter, the ACE Service Configurator framework invokes its fini() method:
template <class ACCEPTOR> int
Reactor_Logging_Server_Adapter<ACCEPTOR>::fini ()
{
  delete server_;
  return 0;
}

This method deletes the instance of Reactor_Logging_Server created in init().
The info() method is shown next:
1  template <class ACCEPTOR> int
2  Reactor_Logging_Server_Adapter<ACCEPTOR>::info
3      (ACE_TCHAR **bufferp, size_t length) const {
4    ACE_INET_Addr sa;
5    server_->acceptor ().get_local_addr (sa);
6
7    ACE_TCHAR buf[BUFSIZ];
8    sprintf (buf,
9             "%d/%s %s",
10            sa.get_port_number (),
11            "tcp",
12            "# Reactor-based logging server\n");
13   *bufferp = ACE_OS::strnew (buf);
14   ACE_OS::strcpy (*bufferp, buf);
15   return ACE_OS::strlen (buf);
16 }

Lines 1-5 Obtain the network address object from the instance of ACE_SOCK_Acceptor that's used by the Reactor_Logging_Server.
Lines 7-12 Format an informative message that explains what the service does and how to contact it.
Lines 13-15 Allocate a new memory buffer dynamically, store the formatted description string into this buffer, and return the buffer's length. The caller is responsible for deleting the buffer.
The suspend() and resume() methods are similar to each other, as shown below:
template <class ACCEPTOR> int
Reactor_Logging_Server_Adapter<ACCEPTOR>::suspend ()
{
  return ACE_Reactor::instance ()->suspend_handler (server_);
}

template <class ACCEPTOR> int
Reactor_Logging_Server_Adapter<ACCEPTOR>::resume ()
{
  return ACE_Reactor::instance ()->resume_handler (server_);
}

Since Reactor_Logging_Server is a descendant of ACE_Event_Handler, the server_ object can be passed to the singleton reactor's suspend_handler() and resume_handler() methods, which double-dispatch to Reactor_Logging_Server::get_handle() to extract the underlying passive-mode socket handle. These methods then use this socket handle to temporarily remove the Reactor_Logging_Server from, or restore it to, the list of socket handles handled by the singleton reactor.
The Example portion of Section 4.4 shows how the Reactor_Logging_Server_Adapter can be configured into and out of a generic server application dynamically.

4.3 The ACE Service Repository and ACE Service Repository Iterator Classes

Figure 4.3: The ACE Service Repository Class

Motivation
The ACE Service Configurator framework supports the configuration of both single-service and multi-service servers. To simplify run-time administration of these servers, it's often necessary to individually and/or collectively access and control the service objects that comprise a server's currently active services. Rather than expecting application developers to provide these capabilities in an ad hoc way, the ACE Service Configurator framework defines the ACE_Service_Repository and ACE_Service_Repository_Iterator classes.

Class Capabilities
The ACE_Service_Repository implements the Manager pattern [Som97] to control the life-cycle of, and the access to, service objects configured by the ACE Service Configurator framework. This class provides the following capabilities:
 It keeps track of all service implementations that are configured into an application and maintains each service's status, such as whether it is active or suspended.
 It allows the ACE Service Configurator framework to control when a service is configured into or out of an application.

The interface for the ACE_Service_Repository class is shown in Figure 4.3 and its key methods are outlined in the following table:


Method                       Description
ACE_Service_Repository()     Initialize the repository and allocate its
open()                       dynamic resources.
~ACE_Service_Repository()    Close down the repository and release its
close()                      dynamically allocated resources.
insert()                     Add a new service into the repository.
find()                       Locate an entry in the repository.
remove()                     Remove an existing service from the repository.
suspend()                    Suspend a service in the repository.
resume()                     Resume a suspended service in the repository.
instance()                   A static method that returns a pointer to a
                             singleton ACE_Service_Repository.

A search structure within the ACE_Service_Repository binds together:
 The name of a service, which is represented as an ASCII string, and
 An instance of ACE_Service_Type, which is the class used by the ACE Service Configurator framework to link, initialize, suspend, resume, remove, and unlink ACE_Service_Objects from a server statically or dynamically.
Each ACE_Service_Type object contains:
 Accessors that set/get the type of service that resides in a repository. There are three types of services:
  1. ACE_Service_Object_Type (ACE_SVC_OBJ_T): This type provides accessors that set/get a pointer to the associated ACE_Service_Object.
  2. ACE_Module_Type (ACE_MODULE_T): TBD
  3. ACE_Stream_Type (ACE_STREAM_T): TBD
 For dynamically linked service objects, an ACE_Service_Type stores the handle of the underlying DLL where the object code resides. The ACE Service Configurator framework uses this handle to unlink and unload a service object from a running server when the service it offers is no longer required.

Sidebar 10 shows how an application can use the ACE_Dynamic_Service class to retrieve services from the ACE_Service_Repository programmatically.

Sidebar 10: The ACE Dynamic Service Template

The ACE_Dynamic_Service template can be used to retrieve services registered with the ACE_Service_Repository. We use the C++ template parameter to ensure that a pointer to the appropriate type of service is returned, as shown below:
template <class TYPE>
class ACE_Dynamic_Service
{
public:
  // Use <name> to search the <ACE_Service_Repository>.
  static TYPE *instance (const ACE_TCHAR *name) {
    const ACE_Service_Type *svc_rec;
    if (ACE_Service_Repository::instance ()->find
          (name, &svc_rec) == -1) return 0;
    const ACE_Service_Type_Impl *type = svc_rec->type ();
    if (type == 0) return 0;
    void *obj = type->object ();
    return ACE_dynamic_cast (TYPE *, obj);
  }
};

An application can use the ACE_Dynamic_Service template as follows:
typedef Reactor_Logging_Server_Adapter<Logging_Acceptor>
        Server_Logging_Daemon;

Server_Logging_Daemon *logging_server =
  ACE_Dynamic_Service<Server_Logging_Daemon>::instance
    ("Server_Logging_Daemon");
ACE_TCHAR *service_info;
logging_server->info (&service_info, BUFSIZ);
cout << service_info << endl;
delete [] service_info;

The ACE_Service_Repository_Iterator implements the Iterator pattern [GHJV95] to provide a way to access the ACE_Service_Type items in an ACE_Service_Repository sequentially without exposing its internal representation. The interface for the ACE_Service_Repository_Iterator class is shown in Figure 4.4 and its key methods are outlined in the following table:

i
Figure 4.4: The ACE Service Repository Iterator Class

Method                               Description
ACE_Service_Repository_Iterator()    Initialize the iterator.
next()                               Passes back the next unseen
                                     ACE_Service_Type in the repository.
done()                               Returns 1 when all items have been seen,
                                     else 0.
advance()                            Move forward by one item in the
                                     repository.

It's important not to delete entries from an ACE_Service_Repository that's being iterated upon, since the ACE_Service_Repository_Iterator is not designed as a robust iterator [Kof93].
Example
The following example illustrates how ACE_Service_Repository and ACE_Service_Repository_Iterator can be used to implement a Service_Reporter class. This class provides a meta-service that clients can use to obtain information on all the services that the ACE Service Configurator framework has configured into an application statically or dynamically. A client interacts with a Service_Reporter as follows:





 A client establishes a connection to the TCP port number where the Service_Reporter object is listening.
 The Service_Reporter returns a list of all the server's services to the client.
 The client and Service_Reporter close their respective ends of the TCP/IP connection.

Sidebar 11 describes how the Service_Reporter class differs from the ACE_Service_Manager class that's bundled with the ACE toolkit.
The interface and implementation of the Service_Reporter class are described below. We first create a file called Service_Reporter.h that contains the following class definition:
class Service_Reporter : public ACE_Service_Object
{
public:
  // Hook methods inherited from <ACE_Service_Object>.
  virtual int init (int argc, ACE_TCHAR *argv[]);
  virtual int fini ();
  virtual int info (ACE_TCHAR **, size_t) const;
  virtual int suspend ();
  virtual int resume ();

private:
  // Acceptor instance.
  ACE_SOCK_Acceptor acceptor_;

  enum { DEFAULT_PORT = 9411 };
};

Service_Reporter can be configured by the ACE Service Configurator framework since it inherits from ACE_Service_Object.

Sidebar 11: The ACE Service Manager Class

ACE_Service_Manager provides clients with access to administrative commands that publish and manage the services currently offered by a network server. These commands externalize certain internal attributes of the services configured into a server. During server configuration, an ACE_Service_Manager is typically registered at a well-known communication port, e.g., port 9411, accessible by clients. Clients can connect to an ACE_Service_Manager and send it the following commands:
 Report services: If the command "help" is sent, a list of all services configured into an application via the ACE Service Configurator framework is returned to the client.
 Reconfigure: If the command "reconfigure" is sent, a reconfiguration is triggered that will reread the local service configuration file.
 Process directive: If neither "help" nor "reconfigure" is sent, the client's command is passed to the ACE_Service_Config::process_directive() method, which is described on page 112. This feature enables remote configuration of servers via command-line instructions like

  % echo "suspend My_Service" | telnet hostname 9411


The implementations of the Service_Reporter hook methods are placed into the file Service_Reporter.cpp. The ACE Service Configurator framework calls the following Service_Reporter::init() hook method when an instance of Service_Reporter is configured into an application:
1  int Service_Reporter::init (int argc, ACE_TCHAR *argv[]) {
2    ACE_INET_Addr local_addr (Service_Reporter::DEFAULT_PORT);
3
4    // Start at argv[0].
5    ACE_Get_Opt get_opt (argc, argv, "p:", 0);
6
7    for (int c; (c = get_opt ()) != -1;)
8      switch (c) {
9      case 'p':
10       local_addr.set ((u_short) ACE_OS::atoi (get_opt.optarg));
11       break;
12     }
13
14   acceptor_.open (local_addr);
15   return ACE_Reactor::instance ()->register_handler
16            (acceptor_.get_handle (),
17             this,
18             ACE_Event_Handler::ACCEPT_MASK);
19 }

Line 2 Initialize the local_addr to the default TCP port number used by Service_Reporter.
Lines 5-12 Parse the service configuration options using the ACE_Get_Opt class described in Sidebar 12. If the -p option is passed into init(), the local_addr port number is reset to that value.
Lines 14-18 Initialize the ACE_SOCK_Acceptor to listen on the local_addr port number and then register the instance of Service_Reporter with the singleton reactor.
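Sidebar 12 notes that each ACE_Get_Opt instance carries its own parsing state, unlike the C library's getopt(). To make that claim concrete, here is a toy, hypothetical Mini_Get_Opt (not ACE's actual implementation): because the cursor is a data member rather than a global, two parsers can run independently, which is the essence of the reentrancy argument.

```cpp
#include <string>
#include <vector>

// Toy illustration of a reentrant option iterator. Each object keeps its
// own position (pos_), so no global state is shared between instances,
// unlike the C library's getopt(), which uses global optind/optarg.
class Mini_Get_Opt {
public:
  Mini_Get_Opt (std::vector<std::string> args, std::string optstring)
    : args_ (args), opts_ (optstring), pos_ (1) {}

  // Returns the next option character, or -1 when arguments are exhausted.
  // If the option takes an argument (a ':' follows it in the optstring),
  // the argument is passed back through <optarg>.
  int next (std::string &optarg) {
    while (pos_ < args_.size ()) {
      std::string a = args_[pos_++];
      if (a.size () >= 2 && a[0] == '-') {
        char c = a[1];
        std::string::size_type i = opts_.find (c);
        if (i == std::string::npos) continue;  // unknown option: skip
        if (i + 1 < opts_.size () && opts_[i + 1] == ':'
            && pos_ < args_.size ())
          optarg = args_[pos_++];
        return c;
      }
    }
    return -1;
  }

private:
  std::vector<std::string> args_;
  std::string opts_;
  std::size_t pos_;  // per-instance cursor: this is what makes it reentrant
};
```

The real ACE_Get_Opt offers much more (long options, argument reordering), but the per-instance cursor shown here is the design point the sidebar emphasizes.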
Sidebar 12: The ACE Get Opt Class

The ACE_Get_Opt class is an iterator for parsing command-line arguments that provides the following features:
 It provides a C++ wrapper facade for the standard C library getopt() function. Unlike the standard getopt() function, however, each instance of ACE_Get_Opt contains its own state, so it can be used reentrantly.
 It also supports so-called long option formats, which ...

When a connection request arrives from a client, the singleton reactor dispatches the following Service_Reporter::handle_input() hook method:
1  int Service_Reporter::handle_input (ACE_HANDLE) {
2    // Connection to the client (we only support
3    // one client connection at a time).
4    ACE_SOCK_Stream peer_stream;
5    acceptor_.accept (peer_stream);
6
7    ACE_Service_Repository_Iterator iterator
8      (*ACE_Service_Repository::instance (), 0);
9
10   for (const ACE_Service_Type *st;
11        iterator.next (st) != 0;
12        iterator.advance ()) {
13     iovec iov[3];
14     iov[0].iov_base = st->name ();
15     iov[0].iov_len = strlen (iov[0].iov_base);
16     iov[1].iov_base =
17       st->active () ? " (active) " : " (paused) ";
18     iov[1].iov_len = strlen (" (active) ");
19     iov[2].iov_len = st->type ()->info (&iov[2].iov_base);
20     peer_stream.sendv_n (iov, 3);
21     delete [] iov[2].iov_base;
22   }
23
24   peer_stream.close ();
25   return 0;
26 }

Lines 4-5 Accept a new client connection.
Lines 7-8 Initialize an ACE_Service_Repository_Iterator, which we'll use to report all the active and suspended services offered by the server. Passing a 0 as the second argument to the constructor instructs it to return information on suspended services, which are ignored by default.


Lines 10-22 Iterate through each service, invoke its info() method to obtain a descriptive synopsis of the service, and send this information back to the client via the connected socket.
Line 24 Close down the connection to the client and release the socket handle.
The Service_Reporter::info() hook method passes back a string that explains what the service does and how to connect to it:
int Service_Reporter::info (ACE_TCHAR **bufferp,
                            size_t length) const {
  ACE_INET_Addr sa;
  acceptor_.get_local_addr (sa);

  ACE_TCHAR buf[BUFSIZ];
  sprintf (buf,
           "%d/%s %s",
           sa.get_port_number (),
           "tcp",
           "# lists all services in the daemon\n");
  *bufferp = ACE_OS::strnew (buf);
  ACE_OS::strcpy (*bufferp, buf);
  return ACE_OS::strlen (buf);
}

As with the info() method on page 97, a caller must delete the dynamically allocated buffer.
The Service_Reporter's suspend() and resume() hook methods forward to the corresponding methods in the reactor singleton, as follows:
int Service_Reporter::suspend () {
  return ACE_Reactor::instance ()->suspend_handler (this);
}

int Service_Reporter::resume () {
  return ACE_Reactor::instance ()->resume_handler (this);
}

The Service_Reporter::fini() method is shown below:
int Service_Reporter::fini () {
  acceptor_.close ();
  return ACE_Reactor::instance ()->remove_handler
           (this, ACE_Event_Handler::ACCEPT_MASK);
}

This method closes the ACE_SOCK_Acceptor endpoint and removes the Service_Reporter from the singleton reactor. We needn't delete this object in handle_close() since the ACE Service Configurator framework is responsible for deleting a service object after calling its fini() hook method.
Finally, we add the necessary ACE service macros to the Service_Reporter implementation file. These macros create an instance of Service_Reporter and register it with the ACE_Service_Repository, as described in Sidebar 13 on page 108.
1  ACE_SVC_FACTORY_DEFINE (Service_Reporter);
2
3  ACE_STATIC_SVC_DEFINE (
4    Service_Reporter,
5    "Service_Reporter",
6    ACE_SVC_OBJ_T,
7    &ACE_SVC_NAME (Service_Reporter),
8    ACE_Service_Types::DELETE_THIS |
9    ACE_Service_Types::DELETE_OBJ,
10   0 // This object is not initially active.
11 );
12
13 ACE_STATIC_SVC_REQUIRE (Service_Reporter);

Line 1 Use the ACE_SVC_FACTORY_DEFINE macro to automatically generate the following factory function:
extern "C" ACE_Service_Object *make_Service_Reporter ()
{ return new Service_Reporter; }

The macro that generates make_Service_Reporter() defines the function with extern "C" linkage. This C++ feature simplifies the design and improves the portability of the ACE_Service_Config implementation since it can fetch the make_Service_Reporter() function from a DLL's symbol table without requiring knowledge of the C++ compiler's name mangling scheme.
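To see why this matters, here is a hedged sketch of the factory-function pattern the framework relies on. The ACE_Service_Object and Service_Reporter definitions below are minimal hypothetical stand-ins, not the real ACE classes; the point is that the extern "C" symbol name is stable across compilers, so a loader can find it by string:

```cpp
// Hypothetical minimal stand-ins for illustration, not the real ACE classes.
class ACE_Service_Object {
public:
  virtual ~ACE_Service_Object () {}
};

class Service_Reporter : public ACE_Service_Object {};

// extern "C" gives the factory an unmangled symbol name, so a loader can
// locate "make_Service_Reporter" in a DLL's symbol table (e.g., via
// dlsym()) without knowing the compiler's name mangling scheme.
extern "C" ACE_Service_Object *make_Service_Reporter ()
{ return new Service_Reporter; }

// The framework needs only this C-style signature to call any such factory.
typedef ACE_Service_Object *(*Factory_Func) ();
```

In a real server the Factory_Func pointer would come from a dynamic symbol lookup; because the name is unmangled, the lookup string "make_Service_Reporter" is the same regardless of which C++ compiler built the DLL.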


Sidebar 13: The ACE Service Factory Macros

The ACE Service Configurator framework creates new service objects by invoking factory functions. ACE defines the following macros in the ace/OS.h header file to simplify the creation and use of factory functions:

ACE Macro                     Description
ACE_SVC_FACTORY_DECLARE(T)    Used in a header file to declare a factory
                              method for creating service objects of type T.
                              In this case, T is the name of a descendant of
                              ACE_Service_Object.
ACE_SVC_FACTORY_DEFINE(T)     Used in an implementation file to define a
                              factory function that will create a new object
                              of type T. This function will match the
                              declaration created by
                              ACE_SVC_FACTORY_DECLARE() and will also define
                              an exterminator function for deleting the
                              object created by the factory.
ACE_STATIC_SVC_DECLARE(T)     Used in a header file to declare that type T
                              will be used as a statically linked service.
ACE_STATIC_SVC_DEFINE
(T, NAME, TYPE, FUNC,
 FLAGS, ACTIVE)               Used in an implementation file to define an
                              object of type ACE_Static_Svc_Descriptor.
                              Parameter T is the class being defined as a
                              static service and the remaining parameters
                              match the attributes of the
                              ACE_Static_Svc_Descriptor class. In order to
                              use the function defined by
                              ACE_SVC_FACTORY_DEFINE() as the allocator
                              function, the value assigned to FUNC should be
                              &ACE_SVC_NAME(T).
ACE_STATIC_SVC_REQUIRE(T)     Used in an implementation file to add the
                              ACE_Static_Svc_Descriptor defined by the
                              ACE_STATIC_SVC_DEFINE macro to the
                              ACE_Service_Config's list of static services.

Lines 3-11 The ACE_STATIC_SVC_DEFINE macro initializes an instance of ACE_Static_Svc_Descriptor, which is a class that stores the information needed to describe a statically configured service. Service_Reporter is the service object's class name and "Service_Reporter" is the name used to identify the service. ACE_SVC_OBJ_T is the type of service object container and &ACE_SVC_NAME(Service_Reporter) is the address of the make_Service_Reporter() factory function that creates an instance of Service_Reporter. DELETE_THIS and DELETE_OBJ are enumerated literals defined in the ACE_Service_Types class that ensure the container and the service object are deleted after the Service_Reporter is destroyed.
Line 13 The ACE_STATIC_SVC_REQUIRE macro automatically registers the instance of the Service_Reporter's ACE_Static_Svc_Descriptor object with the ACE_Service_Repository.
The Example portion of Section 4.4 shows how a Service_Reporter can be configured statically into a server application.

4.4 The ACE Service Config Class

Motivation
Before a service can execute, it must be configured into an application's address space. One way to configure the services that comprise a networked application is to statically link the functionality provided by its various classes and functions into separate OS processes. We used this approach in the logging server examples in Chapter 3 and throughout [SH02], where the logging server program runs in a process that handles log records from client applications. Although our use of the ACE Reactor framework in the previous chapter improved the modularity and portability of the networked logging server, the following drawbacks arose from statically configuring the Reactor_Logging_Server class with its main() program:

 Service configuration decisions are made prematurely in the development cycle, which is undesirable if developers don't know the best way to collocate or distribute services a priori. Moreover, the best configuration may change as the computing context changes. For example, an application may write log records to a local file when it's running on a disconnected laptop computer. When the laptop is connected to a LAN, however, it may forward log records to a centralized logging server.1 Forcing networked applications to commit prematurely to a particular service configuration impedes their flexibility and can reduce performance and functionality, as well as incur costly redesign and reimplementation later in a project's lifecycle.

1. If "wait a minute, you don't need a logging service for local disk file logging!" comes to mind, you're correct. ACE makes the switch easy via the ACE_Logging_Strategy class that's covered in Appendix ??.
 Modifying a service may adversely affect other services if the implementation of a service is coupled tightly with its initial configuration. To enhance reuse, for example, a logging server may initially reside in the same program as other services, such as a name service. If the name service lookup algorithm is changed, however, all existing code in the server would require modification, recompilation, and static relinking. Moreover, terminating a running process to change its name service code would also terminate the collocated logging service. This disruption in service may not be acceptable for highly available systems, such as telecommunication switches or customer care call centers [SS94].
 System performance may scale poorly since associating a separate process with each service ties up OS resources, such as I/O descriptors, virtual memory, and process table slots. This design is particularly wasteful if services are often idle. Moreover, processes can be inefficient for many short-lived communication tasks, such as asking a time service for the current time or resolving a host address request via the Domain Name Service (DNS).

To address these drawbacks, the ACE Service Configurator framework defines the ACE_Service_Config class.
Class Capabilities
The ACE_Service_Config class implements the Facade pattern [GHJV95] to integrate the other ACE Service Configurator classes and coordinate the activities necessary to manage the services in an application process. This class provides the following capabilities:
 It provides mechanisms to dynamically link and unlink service implementations into and out of an application process.
 It interprets a simple scripting language that allows applications or administrators to provide the ACE Service Configurator framework with commands, called directives, to locate and initialize a service's implementation at run-time, as well as to suspend, resume, re-initialize, and/or terminate a component after it's been initialized. This interpretation can (re)configure an application process either
   In batch-mode, i.e., via a series of directives in a configuration file, known as the svc.conf file, or
   Interactively, i.e., by passing directives via strings.

Figure 4.5: The ACE Service Config Class
The interface for the ACE_Service_Config class is shown in Figure 4.5. The ACE_Service_Config has a rich interface since it exports all the features in the ACE Service Configurator framework. We therefore group the description of its methods into the three categories described below.

1. Service Configurator lifecycle management methods. The following methods initialize and terminate the ACE_Service_Config:

Method                   Description
ACE_Service_Config()     These methods create and initialize the
open()                   ACE_Service_Config.
~ACE_Service_Config()    These methods shut down and finalize all the
close()                  configured services and delete the resources
                         allocated when the ACE_Service_Config was
                         initialized.

There is only one instance of the ACE_Service_Config in a process since this class is a variant of the Monostate pattern [CB97], which ensures a unique state for all objects by declaring all data members to be static. Moreover, the ACE_Service_Config methods are also declared as static.
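The Monostate idea can be sketched in a few lines. The hypothetical Config class below is not ACE's actual code; it just shows how static data members give every instance the same shared state, so the class behaves like a singleton while still allowing many objects to be created:

```cpp
#include <string>

// Monostate sketch: all data members are static, so every Config object
// observes and mutates the same single state.
class Config {
public:
  void set_file (const std::string &f) { file_ = f; }
  std::string file () const { return file_; }
private:
  static std::string file_;  // one copy shared by all Config instances
};

// The shared state, defined once with a default value.
std::string Config::file_ = "svc.conf";
```

A change made through one instance is immediately visible through every other instance, which is why it's safe to construct ACE_Service_Config objects freely even though there is logically only one configuration per process.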
The open() method is the most common way to initialize the ACE_Service_Config. It parses arguments passed in the argc and argv parameters, which are illustrated in the following table:

Option  Description
-b      Turn the application process into a daemon.
-d      Turn on debugging mode, which displays diagnostic information as
        directives are processed.
-f      Supply a file containing directives other than the default svc.conf
        file. This argument can be repeated to process multiple configuration
        files.
-n      Don't process any static directives, which eliminates the need to
        initialize the ACE_Service_Repository statically.
-s      Designate the signal to be used to cause the ACE_Service_Config to
        reprocess its configuration file. By default, SIGHUP is used.
-S      Supply a directive to the ACE_Service_Config directly. This argument
        can be repeated to process multiple directives.
-y      Process static directives, which requires the static initialization
        of the ACE_Service_Repository.

2. Service configuration methods. After parsing all its argc/argv arguments, the ACE_Service_Config::open() method calls one or both of the following methods to configure the server:

Method                  Description
process_directives()    Process a sequence of directives that are stored in
                        a designated script file, which defaults to
                        svc.conf. This method allows multiple directives to
                        be stored persistently and processed iteratively in
                        batch-mode. It executes each service configuration
                        directive in a svc.conf file in the order in which
                        they are specified.
process_directive()     Process a single directive passed as a string
                        parameter. This method allows directives to be
                        created dynamically and/or processed interactively.

The following table summarizes the service configuration directives that can be interpreted by these two ACE_Service_Config methods:

Directive  Description
dynamic    Dynamically link and initialize a service.
static     Initialize a service that was linked statically.
remove     Remove a service completely, e.g., unlink it from the application
           process.
suspend    Suspend a service without completely removing it.
resume     Resume a service that was suspended earlier.
stream     Initialize an ordered list of hierarchically-related modules.


We describe the syntax and semantics for the tokens in each of these directives below.

Dynamically link and initialize a service: dynamic svc_name Service_Object * DLL:factory_func() ["argc/argv options"]

The dynamic directive is used to dynamically link and initialize a service object. The svc_name is the name assigned to the service. DLL is the name of the dynamic link library that contains factory_func(), which is an extern "C" function that ACE_Service_Config invokes to create an instance of a service. The factory_func() must return a pointer to an object derived from ACE_Service_Object.

The DLL can either be a full pathname or a filename without a suffix. If it's a full pathname, the ACE_DLL::open() method described in Sidebar 14 is used to dynamically link the designated file into the application process. If it's a filename, however, ACE will use OS-dependent mechanisms to locate the file, as follows:

- DLL filename expansion: It determines the name of the DLL by adding the appropriate prefix and suffix, such as the lib prefix and .so suffix for Solaris shared libraries or the .dll suffix for Win32 DLLs, and
- DLL search path: It searches for the expanded DLL filename using the platform's DLL search path environment variable, e.g., $LD_LIBRARY_PATH on many UNIX systems or $PATH on Win32.

After ACE_Service_Config locates the file, it dynamically links it into the address space of the process via ACE_DLL::open(). The dynamic directive can be used portably across a range of OS platforms since ACE encapsulates these platform-dependent DLL details.

The argc/argv options are a list of parameters that can be supplied to initialize a service object via its init() method. The ACE_Service_Config substitutes the values of environment variables that are included in an options string.
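For example, a dynamic directive for a hypothetical time service (the service name, DLL name, factory function, and option below are illustrative, not part of ACE) could be written as:

```
dynamic Time_Server Service_Object * TimeS:make_Time_Server() "-p $TIME_SERVER_PORT"
```

Here ACE would expand TimeS into libTimeS.so or TimeS.dll as appropriate, locate make_Time_Server() in its symbol table, and pass the expanded option string to the new service's init() method.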
Initialize a statically linked service: static svc_name ["argc/argv options"]

Although the ACE_Service_Config is commonly used to configure services dynamically, it also supports the static configuration of services via the static directive. The svc_name and argc/argv options


Sidebar 14: The ACE DLL Class


Dynamically linking and unlinking DLLs presents the following problems that are similar to those with the native OS Sockets API in Chapter 2 of [SH02]:

- Non-uniform DLL programming interface that's much less portable than the Socket API. Whereas the BSD Socket API is nearly ubiquitous, the API for linking and unlinking DLLs varies greatly between platforms.
- Unsafe types that invite errors and misuse because various DLL mechanisms return a weakly-typed handle that's passed to DLL functions, such as looking up symbols and unlinking the DLL. Since there's nothing that distinguishes these handles from other handles, however, their use is error-prone.

To address these problems, the ACE_DLL wrapper facade class abstracts the functionality necessary to use a DLL as an object itself, rather than dealing with procedural concepts and types. The ACE_DLL class eliminates the need for applications to use weakly-typed handles and also ensures resources are released properly on object destruction.
The interface of ACE_DLL is shown in the figure above and its key methods are outlined in the following table:

Method                Description
ACE_DLL(), open()     Opens and dynamically links a designated DLL.
~ACE_DLL(), close()   Closes the DLL.
symbol()              Return a pointer to the designated symbol in the symbol table of the DLL.
error()               Return a string explaining which failure occurred.

are the same as those in the dynamic directive. The syntax is simpler, however, since the service object must be linked into the program
statically, rather than linked dynamically. Static configuration trades
flexibility for increased security, which may be useful for certain types
of servers that must contain only trusted, statically linked services.
Remove a service completely: remove svc_name
The remove directive causes the ACE_Service_Config to query the


ACE_Service_Repository for the designated svc_name service. If it locates this service, it invokes its fini() hook method, which performs the activities needed to clean up resources when the service terminates. If a service destruction function pointer is associated with the service object, it's called to destroy the service object itself. The ACE_SVC_FACTORY_DEFINE macro defines this function automatically. Finally, if the service was linked dynamically from a DLL, it's unlinked via the ACE_DLL::close() method.
Suspend a service without removing it: suspend svc_name
The suspend directive causes the ACE_Service_Config to query the
ACE_Service_Repository for the designated svc_name service. If
this service is located, its suspend() hook method is invoked. A service can override this method to implement the appropriate actions
needed to suspend its processing.
Resume a service suspended previously: resume svc_name
The resume directive causes the ACE_Service_Config to query the
ACE_Service_Repository for the designated svc_name service. If
this service is located, its resume() hook method is invoked. A service
can override this method to implement the appropriate actions needed
to resume its processing, which typically reverse the effects of the
suspend() method.
Initialize an ordered list of hierarchically-related modules: stream svc_name { module-list }
The stream directive causes the ACE_Service_Config to initialize
an ordered list of hierarchically-related modules. Each module consists of a pair of services that are interconnected and communicate
by passing ACE_Message_Blocks. The implementation of the stream
directive uses the ACE Streams framework described in Chapter ??.
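As an illustration, a stream directive conforming to this syntax could look like the following (the stream name, module names, DLL name, and factory functions are hypothetical):

```
stream Logging_Stream {
  dynamic Filter Module * LSM:make_filter()
  dynamic Writer Module * LSM:make_writer()
}
```

Each nested directive configures one module of the stream, in order.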

The complete Backus-Naur Form (BNF) syntax for svc.conf files parsed
by the ACE_Service_Config is shown in Figure 4.6. Sidebar 15 describes
how to specify svc.conf files using XML syntax.
3. Utility methods. The ACE_Service_Config class also defines a number of utility methods that complement the configuration methods described above.


<svc-config-entries> ::= <svc-config-entries> <svc-config-entry> | NULL
<svc-config-entry> ::= <dynamic> | <static> | <suspend> |
                       <resume> | <remove> | <stream>
<dynamic> ::= dynamic <svc-location> <parameters-opt>
<static> ::= static <svc-name> <parameters-opt>
<suspend> ::= suspend <svc-name>
<resume> ::= resume <svc-name>
<remove> ::= remove <svc-name>
<stream> ::= stream <svc-name> { <module-list> }
<module-list> ::= <module-list> <module> | NULL
<module> ::= <dynamic> | <static> | <suspend> |
             <resume> | <remove>
<svc-location> ::= <svc-name> <type> <svc-initializer> <status>
<type> ::= Service_Object * | Module * | Stream * | NULL
<svc-initializer> ::= PATHNAME : FUNCTION ( )
<svc-name> ::= STRING
<status> ::= active | inactive | NULL
<parameters-opt> ::= STRING | NULL

Figure 4.6: BNF for the ACE Service Config Scripting Language

Sidebar 15: Using XML to Configure Services

Example
This example illustrates how to apply ACE_Service_Config and the other classes in the ACE Service Configurator framework to configure a server that behaves as follows:

- It statically configures an instance of Service_Reporter.
- It dynamically configures the Reactor_Logging_Server_Adapter template from the Example portion of Section 4.2 into the address space of a server.

We then show how to dynamically reconfigure the server to support a different implementation of a reactor-based logging service.

Initial Configuration. The main() program below configures the Service_Reporter and Reactor_Logging_Server_Adapter services into an application process and then runs the reactor's event loop.


#include "ace/Service_Config.h"
#include "ace/Reactor.h"
int main (int argc, char *argv[])
{
ACE_Service_Config::open (argc, argv);
ACE_Reactor::instance ()->run_reactor_event_loop ();
return 0;
}

There are no service-specific header files or code in the main() program. The genericity of this server illustrates the power of the ACE Service Configurator framework.

When ACE_Service_Config::open() is called, it uses the ACE_Service_Config::process_directives() method to interpret the following svc.conf file:
1 static Service_Reporter "-p $SERVICE_REPORTER_PORT"
2
3 dynamic Server_Logging_Daemon Service_Object *
4 SLD:make_Server_Logging_Daemon() "$SERVER_LOGGING_DAEMON_PORT"

Line 1: Initialize the Service_Reporter instance that was linked statically together with the main() program. The ACE_STATIC_SVC_REQUIRE macro used in the Service_Reporter.cpp file on page 107 ensures the Service_Reporter object is registered with the ACE_Service_Repository before the ACE_Service_Config::open() method is called.

Lines 3-4: Dynamically link the SLD DLL into the address space of the process and use ACE_DLL to extract the make_Server_Logging_Daemon() factory function from the SLD symbol table. This function is called to obtain a pointer to a dynamically allocated Server_Logging_Daemon. The framework then calls the Server_Logging_Daemon::init() hook method on this pointer, passing in the "$SERVER_LOGGING_DAEMON_PORT" string as its argc/argv argument. This string designates the port number where the server logging daemon listens for client connection requests. If init() succeeds, the Server_Logging_Daemon pointer is stored in the ACE_Service_Repository under the name "Server_Logging_Daemon".
The SLD DLL is generated from the following SLD.cpp file:


typedef Reactor_Logging_Server_Adapter<Logging_Acceptor>
Server_Logging_Daemon;
ACE_SVC_FACTORY_DEFINE (Server_Logging_Daemon);

This file defines a typedef called Server_Logging_Daemon that instantiates the Reactor_Logging_Server_Adapter template with the Logging_Acceptor shown on page 50 of Section 3.3. The ACE_SVC_FACTORY_DEFINE macro is then used to generate the make_Server_Logging_Daemon() factory function automatically.
The UML state diagram in Figure 4.7 illustrates the steps involved in configuring the server logging daemon based on the svc.conf file shown above.

Figure 4.7: A State Diagram for Configuring the Server Logging Daemon

When the OPEN event occurs at run-time, the ACE_Service_Config class calls process_directives(), which consults the svc.conf file. When all the configuration activities have been completed, the main() program invokes the ACE_Reactor::run_reactor_event_loop() method, which in turn calls the ACE_Reactor::handle_events() method continuously. As shown in Figure 4.7, this method blocks awaiting the occurrence of events, such as connection requests or data from clients. As these events occur, the reactor dispatches the handle_input() method of concrete event handlers automatically to perform the designated services.
Reconfigured Server. The ACE Service Configurator framework can be
programmed to reconfigure a server at run-time in response to external
events, such as the SIGHUP or SIGINT signal. At this point, the framework rereads its configuration file(s) and performs the designated directives, such as inserting or removing service objects into or from a server,
and suspending or resuming existing service objects. We now illustrate
how to use these features to dynamically reconfigure our server logging
daemon.
The initial configuration of the logging server shown above used the Logging_Acceptor implementation from page 50 of Section 3.3. This implementation didn't time out logging handlers that remained idle for long periods of time. To add this capability without affecting existing code or the Service_Reporter service in the process, we can simply define a new
i
svc.conf file and instruct the server to reconfigure itself by sending it the
appropriate signal.
1 remove Server_Logging_Daemon
2
3 dynamic Server_Logging_Daemon Service_Object *
4 SLDex:make_Server_Logging_Daemon_Ex() "$SERVER_LOGGING_DAEMON_PORT"

Line 1: Remove the existing server logging daemon from the ACE_Service_Repository and unlink it from the application's address space.

Lines 3-4: Configure a different instantiation of the Reactor_Logging_Server_Adapter template into the address space of the server logging daemon. In particular, the make_Server_Logging_Daemon_Ex() factory function shown in the SLDex.cpp file below instantiates the Reactor_Logging_Server_Adapter template with the Logging_Acceptor_Ex shown on page 58 of Section 3.4.
typedef Reactor_Logging_Server_Adapter<Logging_Acceptor_Ex>
Server_Logging_Daemon_Ex;
ACE_SVC_FACTORY_DEFINE (Server_Logging_Daemon_Ex);

The UML state diagram in Figure 4.8 illustrates the steps involved in
reconfiguring the server logging daemon based on the svc.conf file shown
above.
Figure 4.8: A State Diagram for Reconfiguring the Server Logging Daemon
The dynamic reconfiguration mechanism in the ACE Service Configurator framework enables developers to modify server functionality or fine-tune performance without extensive redevelopment and reinstallation effort. For example, debugging a faulty implementation of the logging service can simply involve the dynamic reconfiguration of a functionally equivalent service that contains additional instrumentation to help identify the erroneous behavior. This reconfiguration process may be performed without modifying, recompiling, relinking, or restarting the currently executing server logging daemon. In particular, this reconfiguration needn't affect the Service_Reporter that was configured statically.


4.5 Summary
This chapter described the ACE Service Configurator framework, which allows services to be initiated, suspended, resumed, and terminated dynamically and/or statically. This framework helps to improve the extensibility and performance of networked application software by deferring service configuration decisions until late in the design cycle, i.e., at installation time and/or at run time, without changing the application or server implementations.

We applied the ACE Service Configurator pattern to enhance the networked logging service example described in previous chapters. The result is a networked logging service that can be configured and deployed in various ways via the ACE Service Configurator framework. The logging service provides a good example of why it's useful to defer configuration decisions until run time. The extensibility afforded by the ACE Service Configurator framework allows operators and administrators to dynamically select the features and alternative implementation strategies that make the most sense in a particular context, as well as make localized decisions on how best to initialize them.


Chapter 5

The ACE Task Framework

Chapter Synopsis
This chapter describes the design and use of the ACE Task framework,
which can be used to implement common concurrency patterns [SSRB00],
such as Active Object and Half-Sync/Half-Async. We show how to apply
the ACE Task framework to enhance the concurrency of various parts of
our networked logging service.

5.1 Overview

The ACE_Thread_Manager wrapper facade class described in Chapter 9 of [SH02] implements portable multithreading capabilities. This class only offers a function-oriented interface, however, since programmers pass an entry point function to its spawn() and spawn_n() methods. To provide a more powerful and extensible object-oriented concurrency mechanism, ACE defines the ACE Task concurrency framework. This framework provides a facility for spawning threads in the context of an object and a flexible queueing mechanism for transferring messages between tasks efficiently.
The ACE Task framework can be applied to implement common concurrency patterns [SSRB00], such as:

- The Active Object pattern, which decouples the thread used to invoke a method from the thread used to execute a method. Its purpose is to enhance concurrency and simplify synchronized access to objects that reside in their own threads.

- The Half-Sync/Half-Async pattern, which decouples asynchronous and synchronous processing in concurrent systems to simplify programming without reducing performance unduly. This pattern introduces three layers: one for asynchronous processing, one for synchronous service processing, and a queueing layer that mediates communication between the asynchronous and synchronous layers.

This chapter explains how these patterns, and the ACE Task framework that reifies them, can be applied to develop concurrent object-oriented applications at a level higher than existing C APIs and the ACE_Thread_Manager C++ wrapper facade. We focus on the following ACE Task framework classes that networked applications can use to spawn, manage, and communicate between one or more threads within a process:
ACE Class          Description
ACE_Message_Queue  Provides a powerful message queueing facility that enables applications to pass and queue messages between threads in a process.
ACE_Task           Allows applications to create active objects that can queue and process messages concurrently.

Figure 5.1: The ACE Task Framework Classes


The most important relationships between the classes in the ACE Task framework are shown in Figure 5.1. This framework provides the following benefits:

- Improve the consistency of programming style by enabling developers to use C++ and object-oriented techniques throughout their


concurrent networked applications. For example, the ACE_Task class provides an object-oriented programming abstraction that associates OS-level threads with C++ objects.

- Manage a group of threads as a cohesive collection. Networked applications that use multithreading often require multiple threads to start and end as a group. The ACE_Task class therefore provides a thread group capability that allows other threads to wait for an entire group of threads to exit before proceeding with their processing.

This chapter motivates and describes the capabilities of each class in the ACE Task concurrency framework. We illustrate how this framework can be used to enhance the concurrency of our client and server logging daemons.

5.2 The ACE Message Queue Class

Motivation

Although some operating systems supply intra-process message queues natively, this capability isn't available on all OS platforms. When it is offered, moreover, it's either:

- Highly non-portable, e.g., VxWorks message queues, and/or
- Inefficient, tedious, and error-prone to use, e.g., System V IPC message queues.

It may be possible to encapsulate the spectrum of available message queue mechanisms within a wrapper facade that emulates missing capabilities where needed. Since ACE already supplies the convenient, efficient, and portable ACE_Message_Block class described in Chapter 4 of [SH02], however, it's easier and more portable to create a lightweight message queueing mechanism that can be adapted easily to an OS platform's threading capabilities. The result is the ACE_Message_Queue class.

Class Capabilities

The ACE_Message_Queue class is a platform-independent, lightweight intra-process message queueing mechanism that provides the following capabilities:


- It allows messages (which are instances of ACE_Message_Block) to be enqueued at the front of the queue, the rear of the queue, or in priority order. Messages are dequeued from the front of the queue.
- Its use of ACE_Message_Block provides an efficient message buffering mechanism that minimizes dynamic memory allocation and data copying.
- It can be instantiated for either multi-threaded or single-threaded configurations, which allows programmers to trade off strict synchronization for less overhead when concurrent access to a queue isn't required.
- In multi-threaded configurations it supports configurable flow control, which prevents fast message producer threads from swamping the resources of slower message consumer threads.
- It allows timeouts to be specified on both enqueue and dequeue operations to avoid indefinite blocking.
- It provides allocators that can be strategized so the memory used by messages can be obtained from various sources, such as shared memory, heap memory, static memory, or thread-specific memory.

The interface for the ACE_Message_Queue class is shown in Figure 5.2 and its key methods are outlined in the following table:

Method                          Description
ACE_Message_Queue(), open()     Initialize the queue.
~ACE_Message_Queue(), close()   Shutdown the queue and release its resources.
is_empty(), is_full()           Checks if the queue is empty/full.
enqueue_tail()                  Insert a message at the back of the queue.
enqueue_head()                  Insert a message at the head of the queue.
enqueue_prio()                  Insert a message according to its priority.
dequeue_head()                  Remove and return the message at the front of the queue.

The ACE_Message_Queue has powerful semantics. We therefore group the description of its capabilities into the three categories described below.

1. Message buffering and flow control. The semantics of the ACE_Message_Queue class are based on the design of the message buffering and


ACE_Message_Queue<SYNCH_STRATEGY>
# head_ : ACE_Message_Block *
# tail_ : ACE_Message_Block *
# high_water_mark_ : size_t
# low_water_mark_ : size_t
+ open (high : size_t, low : size_t) : int
+ is_empty () : int
+ is_full () : int
+ enqueue_tail (item : ACE_Message_Block *, timeout : ACE_Time_Value *) : int
+ enqueue_head (item : ACE_Message_Block *, timeout : ACE_Time_Value *) : int
+ enqueue_prio (item : ACE_Message_Block *, timeout : ACE_Time_Value *) : int
+ dequeue_head (item : ACE_Message_Block *&, timeout : ACE_Time_Value *) : int
+ close () : int

Figure 5.2: The ACE Message Queue Class


queueing facilities in System V STREAMS [Rag93]. Since messages passed between threads using a message queue are instances of ACE_Message_Block, two kinds of messages can be placed in an ACE_Message_Queue:

- Simple messages, which contain a single ACE_Message_Block.
- Composite messages, which contain multiple ACE_Message_Block objects that are linked together in accordance with the Composite pattern [GHJV95], which provides a structure for building recursive aggregations. Composite messages generally consist of a control message followed by one or more data messages, which play the following roles:
  - A control message contains bookkeeping information, such as destination addresses and length fields, and
  - The data message(s) contain the actual contents of a composite message.

Figure 5.3 illustrates how simple and composite messages can be linked together to form an ACE_Message_Queue. To optimize insertion and deletion in a queue, ACE_Message_Block messages are linked bi-directionally


Figure 5.3: The Structure of an ACE Message Queue


via a pair of pointers that can be obtained via the ACE_Message_Block next() and prev() accessor methods. Composite messages are chained together uni-directionally via the continuation pointer in each ACE_Message_Block, which can be obtained via its cont() accessor method. Sidebar 16 describes the ACE_Message_Queue_Ex class, which is a variant of ACE_Message_Queue that's more strongly typed.

Sidebar 16: The ACE Message Queue Ex Class

The ACE_Message_Queue enqueues and dequeues ACE_Message_Block objects since they provide a highly flexible and dynamically extensible means of representing messages. For situations where strongly-typed messaging is required, ACE provides the ACE_Message_Queue_Ex class, which enqueues and dequeues messages that are instances of a MESSAGE_TYPE template parameter, rather than an ACE_Message_Block.
An ACE_Message_Queue contains a pair of high and low water mark variables to implement flow control, which prevents a fast sender thread from overrunning the buffering and computing resources of a slower receiver thread. The high water mark indicates the total number of message bytes an ACE_Message_Queue is willing to buffer before it becomes flow controlled, at which point enqueue_*() methods will block until a sufficient number of bytes of messages are dequeued. The low water mark indicates the number of message bytes at which a previously flow controlled ACE_Message_Queue is no longer considered full. The Example portion of Section 5.3 illustrates the use of high and low water marks to exert flow control within a multithreaded logging server.
2. Parameterized synchronization strategies. If you look carefully at the ACE_Message_Queue template in Figure 5.2 you'll see that it's parameterized by a SYNCH_STRATEGY class. This design is based on the Strategized Locking pattern [SSRB00], which parameterizes the synchronization mechanisms that a class uses to protect its critical sections from concurrent access. Internally, the ACE_Message_Queue class uses the following traits from its SYNCH_STRATEGY template parameter:
template <class SYNCH_STRATEGY>
class ACE_Message_Queue
{
// ...
protected:
// C++ traits that coordinate concurrent access.
typename SYNCH_STRATEGY::MUTEX lock_;
typename SYNCH_STRATEGY::CONDITION notempty_;
typename SYNCH_STRATEGY::CONDITION notfull_;
};

These traits enable the ACE_Message_Queue synchronization strategy to be customized to suit particular needs. Sidebar 17 describes the C++ traits idiom.

Sidebar 17: The C++ Traits Idiom

A trait is a type that conveys information used by another class or algorithm to determine policies or implementation details at compile time. A traits class [Jos99] is a useful way to collect a set of characteristics that should be applied in a given situation to alter another class's behavior appropriately. The C++ traits idiom and traits classes are widely used throughout the C++ standard library [Aus98]. For example, ...
Sets of ACE's synchronization wrapper facades can be combined to form traits classes that define customized synchronization strategies. ACE provides the following two traits classes that pre-package the most common synchronization traits:


- ACE_NULL_SYNCH: The traits in this class are implemented in terms of null locking mechanisms, as shown below.
class ACE_NULL_SYNCH
{
public:
typedef ACE_Null_Mutex MUTEX;
typedef ACE_Null_Mutex NULL_MUTEX;
typedef ACE_Null_Mutex PROCESS_MUTEX;
typedef ACE_Null_Mutex RECURSIVE_MUTEX;
typedef ACE_Null_Mutex RW_MUTEX;
typedef ACE_Null_Condition CONDITION;
typedef ACE_Null_Semaphore SEMAPHORE;
typedef ACE_Null_Semaphore NULL_SEMAPHORE;
};

The ACE_NULL_SYNCH class is an example of the Null Object pattern [Woo97], which simplifies applications by defining a no-op placeholder that removes conditional statements in a class implementation. ACE_NULL_SYNCH is often used in single-threaded applications, or in applications where the need for inter-thread synchronization has been either eliminated via careful design or implemented via some other mechanism.

- ACE_MT_SYNCH: The traits in this pre-defined class are implemented in terms of actual locking mechanisms, as shown below:
class ACE_MT_SYNCH
{
public:
typedef ACE_Thread_Mutex MUTEX;
typedef ACE_Null_Mutex NULL_MUTEX;
typedef ACE_Process_Mutex PROCESS_MUTEX;
typedef ACE_Recursive_Thread_Mutex RECURSIVE_MUTEX;
typedef ACE_RW_Thread_Mutex RW_MUTEX;
typedef ACE_Condition_Thread_Mutex CONDITION;
typedef ACE_Thread_Semaphore SEMAPHORE;
typedef ACE_Null_Semaphore NULL_SEMAPHORE;
};

The ACE_MT_SYNCH traits class defines a strategy with portable, efficient synchronization classes suitable for multi-threaded applications.

Parameterizing the ACE_Message_Queue template with a traits class provides the following benefits:

- It allows ACE_Message_Queues to work correctly and efficiently in both single-threaded and multi-threaded configurations, without requiring changes to the class implementation, and
- It allows the synchronization aspects of a particular instantiation of the ACE_Message_Queue template to be changed wholesale via the Strategized Locking pattern.

For example, if the ACE_NULL_SYNCH strategy is used, the ACE_Message_Queue's MUTEX and CONDITION traits resolve to ACE_Null_Mutex and ACE_Null_Condition, respectively. In this case, the resulting message queue class behaves like a non-synchronized message queue and incurs no synchronization overhead.

In contrast, if an ACE_Message_Queue is parameterized with the ACE_MT_SYNCH strategy, its MUTEX and CONDITION traits resolve to ACE_Thread_Mutex and ACE_Condition_Thread_Mutex, respectively. In this case, the resulting message queue class behaves in accordance with the Monitor Object design pattern [SSRB00], which:

- Synchronizes concurrent method execution to ensure that only one method at a time runs within an object, and
- Allows an object's methods to schedule their execution sequences cooperatively.

3. Blocking and timeout semantics. When an ACE_Message_Queue template is instantiated with ACE_MT_SYNCH its synchronized enqueue and dequeue methods support blocking, non-blocking, and timed operations. If a synchronized queue is empty, then calls to its dequeue_head() method will block until a message is enqueued and the queue is no longer empty. Likewise, if a synchronized queue is full, calls to its enqueue_head() or enqueue_tail() method will block until a message is dequeued and the queue is no longer full. The default blocking behavior can be modified by passing the following types of ACE_Time_Value values to these methods:


- A NULL ACE_Time_Value pointer indicates that the enqueue or dequeue method should wait indefinitely, i.e., it will block until the method completes, the queue is closed, or a signal occurs.
- A non-NULL ACE_Time_Value pointer whose sec() and usec() methods return 0 indicates that enqueue and dequeue methods should perform a "peek," i.e., if the method doesn't succeed immediately, return -1 and set errno to EWOULDBLOCK.
- A non-NULL ACE_Time_Value pointer whose sec() or usec() method returns > 0 indicates that the enqueue or dequeue method should wait until the absolute time elapses, returning -1 with errno set to EWOULDBLOCK if the method does not complete by this time.

The Client_Logging_Daemon example on page 132 illustrates the use of the ACE_MT_SYNCH traits class.
In summary, the ACE_Message_Queue implementation applies the following patterns and idioms from POSA2 [SSRB00]:
• Strategized Locking: C++ traits are used to strategize the synchronization mechanism in accordance with the Strategized Locking pattern [SSRB00].
• Monitor Object: When ACE_Message_Queue is parameterized by ACE_MT_SYNCH, its methods behave as synchronized methods in accordance with the Monitor Object pattern [SSRB00].
• Thread-Safe Interface: The public methods acquire locks and delegate to the private implementation methods, which assume locks are held and actually enqueue/dequeue messages.
• Scoped Locking: The ACE_GUARD* macros from Chapter 10 of [SH02] ensure that any synchronization wrapper facade whose signature conforms to the ACE_LOCK* pseudo-class is acquired and released automatically in accordance with the Scoped Locking idiom [SSRB00].

These and other patterns from the POSA2 book are used to implement key
portions of the ACE Task framework. A message queue implementation
similar to the ACE_Message_Queue is also shown in Chapter 10 of [SH02].
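The Strategized Locking idea is easy to see in miniature. The sketch below parameterizes a class by a synchronization trait, in the spirit of ACE_NULL_SYNCH versus ACE_MT_SYNCH; the Null_Synch and MT_Synch traits here are hypothetical stand-ins for the ACE traits classes, not the real ones:

```cpp
#include <mutex>

// Strategized Locking via C++ traits: a no-op mutex for
// single-threaded configurations, a real mutex otherwise.
struct Null_Synch {               // Locking compiles away.
  struct MUTEX {
    void lock () {}
    void unlock () {}
  };
};

struct MT_Synch {                 // Multithreaded use: a real mutex.
  using MUTEX = std::mutex;
};

template <class SYNCH_STRATEGY>
class Counter {
public:
  // Scoped Locking idiom: the guard acquires the strategized mutex
  // in its constructor and releases it in its destructor.
  void increment () {
    std::lock_guard<typename SYNCH_STRATEGY::MUTEX> guard (mutex_);
    ++count_;
  }
  int value () const { return count_; }

private:
  typename SYNCH_STRATEGY::MUTEX mutex_;
  int count_ = 0;
};

int demo_strategized () {
  Counter<Null_Synch> single;     // No synchronization overhead.
  single.increment ();
  Counter<MT_Synch> shared;       // Thread-safe variant, same code.
  shared.increment ();
  shared.increment ();
  return single.value () + shared.value ();
}
```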
Example
The following example shows how to use the ACE_Message_Queue to implement a client logging daemon. As described in Section 1.5 on page 17,


a client logging daemon runs on every host participating in the networked logging service and performs the following tasks:
• It receives log records from client applications running on the same host via some type of local IPC mechanism, such as shared memory, pipes, or loopback sockets.
• It uses a remote IPC mechanism, such as TCP/IP, to forward log records to a server logging daemon running on a designated host.
Our example uses two threads to implement a bounded buffer [BA90] using
a synchronized ACE_Message_Queue, i.e., one whose SYNCH_STRATEGY is
instantiated using the ACE_MT_SYNCH traits class shown on page 128.
In our client logging daemon implementation, a receiver thread uses
the ACE Reactor framework to read log records from sockets connected
to client applications via the network loopback device. It queues each log
record in a synchronized ACE_Message_Queue. The forwarder thread runs
concurrently, performing the following steps continuously:
1. Dequeueing messages from the message queue,
2. Buffering the messages into larger chunks and then
3. Forwarding the chunks to the server logging daemon over a TCP connection.
The relationships between the receiver thread, the forwarder thread,
and the ACE_Message_Queue that connects them are shown in Figure 5.4.
By using a synchronized message queue, the receiver thread can continue
to read requests from client applications as long as the message queue
isn't full. Overall server concurrency is therefore enhanced, even if the
forwarder thread blocks on send operations when the connection to the
logging server is flow controlled.
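Stripped of the ACE machinery, the receiver/forwarder split boils down to a producer thread and a consumer thread sharing a synchronized queue. The following sketch uses std::thread in place of the ACE Reactor and Thread_Manager; all class and function names are illustrative. It shows that every record the receiver enqueues eventually reaches the forwarder:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Minimal synchronized queue connecting a receiver thread (producer)
// to a forwarder thread (consumer), as in the client logging daemon.
class Record_Queue {
public:
  void enqueue (std::string rec) {
    { std::lock_guard<std::mutex> guard (mutex_);
      queue_.push (std::move (rec)); }
    cond_.notify_one ();
  }
  // Returns false once the queue is closed and fully drained.
  bool dequeue (std::string &rec) {
    std::unique_lock<std::mutex> lock (mutex_);
    cond_.wait (lock, [this] { return closed_ || !queue_.empty (); });
    if (queue_.empty ()) return false;
    rec = std::move (queue_.front ());
    queue_.pop ();
    return true;
  }
  void close () {
    { std::lock_guard<std::mutex> guard (mutex_); closed_ = true; }
    cond_.notify_all ();
  }
private:
  std::mutex mutex_;
  std::condition_variable cond_;
  std::queue<std::string> queue_;
  bool closed_ = false;
};

size_t demo_pipeline () {
  Record_Queue queue;
  std::vector<std::string> forwarded;

  // Forwarder thread: drains the queue, "forwarding" each record.
  std::thread forwarder ([&] {
    std::string rec;
    while (queue.dequeue (rec))
      forwarded.push_back (rec);
  });

  // Receiver: enqueues records as they "arrive" from clients.
  for (int i = 0; i < 100; ++i)
    queue.enqueue ("log record");
  queue.close ();

  forwarder.join ();
  return forwarded.size ();   // Every enqueued record was forwarded.
}
```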
We start our implementation by including the necessary ACE header
files.
#include "ace/OS.h"
#include "ace/Message_Queue.h"
#include "ace/Synch.h"
#include "ace/Thread_Manager.h"
#include "ace/SOCK_Connector.h"
#include "ace/Reactor.h"
#include "Logging_Acceptor.h"

Since the client logging daemon design uses different logic than the logging server, we can't reuse our Reactor_Logging_Server class from the


[Figure 5.4: Multi-threaded Client Logging Daemon. Client applications (P1, P2, P3) send log records via local IPC to the client logging daemon, where a Logging Acceptor and Logging Handlers enqueue them on a Message Queue; a Logging Handler then forwards them over a TCP connection through the network to the logging server.]


Example portion of Section 3.5. Instead, we define the following Client_
Logging_Daemon class:
class Client_Logging_Daemon : public ACE_Service_Object
{
public:
  // Reactor hook methods.
  virtual int handle_input (ACE_HANDLE handle);
  virtual int handle_close (ACE_HANDLE = ACE_INVALID_HANDLE,
                            ACE_Reactor_Mask = 0);
  virtual ACE_HANDLE get_handle () const;

  // Service Configurator hook methods.
  virtual int fini ();
  virtual int init (int argc, char *argv[]);
  virtual int info (ACE_TCHAR **bufferp, size_t length = 0) const;
  virtual int suspend ();
  virtual int resume ();

  // Forward log records to the server logging daemon.
  virtual int forward ();

protected:
  // Entry point into the forwarder thread of control.
  static void *run_svc (void *arg);

  // A synchronized <ACE_Message_Queue> that queues messages.
  ACE_Message_Queue<ACE_MT_SYNCH> msg_queue_;

  // Factory that passively connects <ACE_SOCK_Stream>s.
  ACE_SOCK_Acceptor acceptor_;
};

ACE_SVC_FACTORY_DECLARE (Client_Logging_Daemon);

The Client_Logging_Daemon class inherits from ACE_Service_Object. It can therefore be linked dynamically by the ACE Service Configurator framework. It can also use the ACE Reactor framework to accept connections and wait for log records to arrive from any client applications connected to the client logging daemon via loopback TCP sockets.
The methods are defined in the Client_Logging_Daemon.cpp file. When a connection request or a log record arrives at the client logging daemon, the singleton reactor dispatches the following Client_Logging_Daemon::handle_input() hook method:
 1 int Client_Logging_Daemon::handle_input (ACE_HANDLE handle) {
 2   if (handle == acceptor_.get_handle ()) {
 3     ACE_SOCK_Stream peer_handler;
 4     acceptor_.accept (peer_handler);
 5     return ACE_Reactor::instance ()->register_handler
 6       (peer_handler.get_handle (),
 7        this,
 8        ACE_Event_Handler::READ_MASK);
 9   } else {
10     ACE_Message_Block *mblk;
11     Logging_Handler logging_handler (handle);
12
13     if (logging_handler.recv_log_record (mblk) != -1)
14       // Just enqueue the log record data, not the hostname.
15       if (msg_queue_.enqueue_tail (mblk->cont ()) != -1)
16         return 0;
17       else mblk->release ();
18
19     return -1;
20   }
21 }

Lines 2–8 If the handle argument matches the ACE_SOCK_Acceptor factory's handle, we accept the connection and then use the three-parameter variant of register_handler() to register the Client_Logging_Daemon


object with the singleton reactor for READ events. This reactor method enables the client logging daemon to reuse a single C++ object for its acceptor
factory and all of its logging handlers.
Lines 10–19 If the handle argument is not the ACE_SOCK_Acceptor factory's handle, it must be a handle to a connected client application socket. In this case, we read a log record out of the socket handle parameter, store the record in an ACE_Message_Block, and insert this message into the synchronized queue serviced by the forwarder thread.
If a client application disconnects or a communication error occurs, the handle_input() hook method returns -1. This value triggers the reactor to call the following handle_close() hook method, which cleans up all the Client_Logging_Daemon's resources:
int Client_Logging_Daemon::handle_close (ACE_HANDLE,
                                         ACE_Reactor_Mask) {
  if (acceptor_.get_handle () != ACE_INVALID_HANDLE) {
    msg_queue_.close ();
    acceptor_.close ();
    // The close() method sets the handle to ACE_INVALID_HANDLE.
  }
  return 0;
}

We first check to see if the ACE_SOCK_Acceptor's handle has the value ACE_INVALID_HANDLE, since the handle_close() method can be called multiple times. For example, it will be called when handle_input() returns -1, as well as when the following Client_Logging_Daemon::fini() hook method is invoked by the ACE Service Configurator framework:
int Client_Logging_Daemon::fini () {
  return handle_close ();
}

Note that we needn't delete this object in handle_close(), since the ACE Service Configurator framework is responsible for deleting a service object after calling its fini() hook method.
We next show the Client_Logging_Daemon::forward() method, which establishes a connection with the server logging daemon and forwards log records to it:
 1 int Client_Logging_Daemon::forward () {
 2   // Connection establishment and data transfer objects.
 3   ACE_SOCK_Stream logging_peer;
 4   ACE_SOCK_Connector connector;
 5   ACE_INET_Addr server_addr;
 6
 7   server_addr.set ("ace_logger", LOGGING_SERVER_HOST);
 8   connector.connect (logging_peer, server_addr);
 9
10   int bufsiz = ACE_DEFAULT_MAX_SOCKET_BUFSIZ;
11   logging_peer.set_option (SOL_SOCKET,
12                            SO_SNDBUF,
13                            (void *) &bufsiz,
14                            sizeof (bufsiz));
15
16   // Max # of <iov>s OS can send in a gather-write operation.
17   iovec iov[ACE_IOV_MAX];
18   int i = 0;
19
20   for (ACE_Message_Block *mblk = 0;
21        msg_queue_.dequeue_head (mblk) != -1;
22        ) {
23     iov[i].iov_base = mblk->rd_ptr ();
24     iov[i].iov_len = mblk->length ();
25
26     // Don't delete the data in the message.
27     mblk->set_flags (ACE_Message_Block::DONT_DELETE);
28     mblk->release ();
29
30     if (++i == ACE_IOV_MAX) {
31       // Send all buffered log records in one operation.
32       logging_peer.sendv_n (iov, i);
33
34       // Clean up the buffers.
35       for (--i; ; --i) {
36         delete [] iov[i].iov_base;
37         if (i == 0) break;
38       }
39     }
40   }
41
42   if (i > 0) {
43     // Send all remaining log records in one operation.
44     logging_peer.sendv_n (iov, i);
45
46     for (--i; i >= 0; --i) delete [] iov[i].iov_base;
47   }
48   return 0;
49 }


Lines 3–8 We use the ACE Socket wrapper facades from Chapter 4 of [SH02] to establish a TCP connection with the server logging daemon.
Lines 10–14 We increase the socket send buffer to its largest size to maximize throughput over high-speed networks.
Lines 17–48 We then run an event loop that dequeues a pointer to the next ACE_Message_Block from the msg_queue_ and stores the log record data in a buffer of ACE_IOV_MAX entries. Whenever this buffer fills up, we use the ACE_SOCK_Stream::sendv_n() call to transfer all log record data to the server logging daemon in one gather-write operation. Since an ACE_Message_Block is reference counted, its release() method deletes all the resources associated with mblk except the log record data, which is deleted after the entire buffer of ACE_IOV_MAX log records is transmitted.
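The gather-write technique itself can be demonstrated with the underlying POSIX writev() call that sendv_n() wraps, assuming a POSIX platform; in this sketch a pipe stands in for the TCP connection to the server logging daemon:

```cpp
#include <cstring>
#include <string>
#include <sys/uio.h>   // writev()
#include <unistd.h>    // pipe(), read(), close()

// Several buffered "log records" go out in a single writev() call
// instead of one write() per record, which is the point of batching
// the iovec array in forward().
std::string demo_gather_write () {
  int fds[2];
  if (pipe (fds) == -1) return "";

  const char *records[] = { "rec1|", "rec2|", "rec3|" };
  iovec iov[3];
  for (int i = 0; i < 3; ++i) {
    iov[i].iov_base = const_cast<char *> (records[i]);
    iov[i].iov_len = std::strlen (records[i]);
  }

  // One system call transmits all three buffers back to back.
  ssize_t sent = writev (fds[1], iov, 3);
  close (fds[1]);

  char buf[64] = {};
  ssize_t got = read (fds[0], buf, sizeof buf - 1);
  close (fds[0]);
  if (sent != got) return "";
  return std::string (buf, static_cast<size_t> (got));
}
```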
The forward() method runs in a separate thread of control that's spawned in the Client_Logging_Daemon::init() hook method shown below:
 1 int Client_Logging_Daemon::init (int argc, char *argv[]) {
 2   ACE_INET_Addr local_addr (ACE_CLIENT_LOGGING_DAEMON_PORT);
 3
 4   // Start at argv[0].
 5   ACE_Get_Opt getopt (argc, argv, "p:", 0);
 6
 7   for (int c; (c = getopt ()) != -1;)
 8     switch (c) {
 9     case 'p':
10       local_addr.set ((u_short) ACE_OS::atoi (getopt.optarg));
11       break;
12     }
13
14   acceptor_.open (local_addr);
15   ACE_Reactor::instance ()->register_handler
16     (this, ACE_Event_Handler::ACCEPT_MASK);
17
18   ACE_thread_t thread_id;
19   return ACE_Thread_Manager::instance ()->spawn
20     (&Client_Logging_Daemon::run_svc,
21      ACE_static_cast (void *, this),
22      THR_SCOPE_SYSTEM,
23      &thread_id);
24 }

Line 2 Initialize the local_addr to the default TCP port number used by
the client logging daemon.


Lines 5–12 Parse the service configuration options using the ACE_Get_Opt class described in Sidebar 12 on page 105. If the -p option is passed into init(), the local_addr port number is reset to that value.
Lines 14–16 Initialize the ACE_SOCK_Acceptor to listen at the local_addr port number and then register this object with the singleton reactor to accept new connections. The reactor will call back to the following Client_Logging_Daemon::get_handle() method to obtain the acceptor's socket handle:
ACE_HANDLE Client_Logging_Daemon::get_handle () const {
  return acceptor_.get_handle ();
}

When a connect request arrives, the reactor will dispatch the Client_Logging_Daemon::handle_input() method shown on page 133.
Lines 18–23 We finally use the ACE_Thread_Manager from Chapter 9 of [SH02] to spawn a system-scoped thread that executes the Client_Logging_Daemon::run_svc() static method concurrently with the main thread of control. The run_svc() static method casts its void* argument to a Client_Logging_Daemon pointer and then delegates its processing to the forward() method, as shown below:
void *Client_Logging_Daemon::run_svc (void *arg) {
  Client_Logging_Daemon *client_logging_daemon =
    ACE_static_cast (Client_Logging_Daemon *, arg);
  client_logging_daemon->forward ();
  return 0;
}
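run_svc() is an instance of the static-trampoline idiom that any C-style thread API forces on C++ code: the API accepts only a plain function pointer, so a static method casts the opaque void* back to the object and delegates to a member function. The sketch below shows the same idiom directly with pthread_create(); the Forwarder class is a hypothetical stand-in, not ACE code:

```cpp
#include <pthread.h>

// Static-trampoline idiom: pthread_create() needs a
// void *(*)(void *) entry point, which cannot be a non-static
// member function.
class Forwarder {
public:
  int start () {
    pthread_t tid;
    if (pthread_create (&tid, nullptr, &Forwarder::run_svc, this) != 0)
      return -1;
    return pthread_join (tid, nullptr);
  }
  int forwarded () const { return forwarded_; }

private:
  static void *run_svc (void *arg) {     // C-compatible entry point.
    static_cast<Forwarder *> (arg)->forward ();
    return nullptr;
  }
  void forward () { forwarded_ = 1; }    // Member with full object access.
  int forwarded_ = 0;
};

int demo_trampoline () {
  Forwarder f;
  if (f.start () != 0) return -1;
  return f.forwarded ();
}
```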

For completeness, the suspend(), resume(), and info() hook methods are shown below:
int Client_Logging_Daemon::info (ACE_TCHAR **bufferp,
                                 size_t length) const {
  ACE_INET_Addr sa;
  acceptor_.get_local_addr (sa);

  ACE_TCHAR buf[BUFSIZ];
  sprintf (buf,
           "%d/%s %s",
           sa.get_port_number (),
           "tcp",
           "# client logging daemon\n");

  *bufferp = ACE_OS::strnew (buf);
  return ACE_OS::strlen (buf);
}

int Client_Logging_Daemon::suspend () {
  return ACE_Reactor::instance ()->suspend_handler (this);
}

int Client_Logging_Daemon::resume () {
  return ACE_Reactor::instance ()->resume_handler (this);
}

Note that the ACE_Reactor's suspend_handler() and resume_handler() will call back to the get_handle() method to obtain the socket handle of the ACE_SOCK_Acceptor factory.
We can now place the ACE_SVC_FACTORY_DEFINE macro into the implementation file.
ACE_SVC_FACTORY_DEFINE (Client_Logging_Daemon);

This macro automatically defines the make_Client_Logging_Daemon() factory function, which is used in the following svc.conf file:
dynamic Client_Logging_Daemon
Service_Object *
CLD:make_Client_Logging_Daemon() "-p $CLIENT_LOGGING_DAEMON_PORT"

This file configures the client logging daemon by dynamically linking the CLD DLL into the address space of the process and using ACE_DLL to extract the make_Client_Logging_Daemon() factory function from the CLD symbol table. This function is called to obtain a pointer to a dynamically allocated Client_Logging_Daemon. The framework then calls the Client_Logging_Daemon::init() hook method on this pointer, passing in the "-p $CLIENT_LOGGING_DAEMON_PORT" string as its argc/argv argument. This string designates the port number where the client logging daemon listens for client application connection requests. If init() succeeds, the Client_Logging_Daemon pointer is stored in the ACE_Service_Repository under the name "Client_Logging_Daemon".
We're now ready to show the main() function, which is identical to the one on page 116 in Section 4.4.


#include "ace/Service_Config.h"
#include "ace/Reactor.h"

int main (int argc, char *argv[])
{
  ACE_Service_Config::open (argc, argv);
  ACE_Reactor::instance ()->run_reactor_event_loop ();
  return 0;
}

5.3 The ACE Task Class

Motivation
Although the ACE_Thread_Manager class provides a portable threading abstraction, the threads it spawns and manages are not object-oriented, i.e.,
they are C-style functions rather than objects. C-style functions make it
hard to associate data members and methods with a thread. To resolve
these issues, ACE provides the ACE_Task class.
Class Capabilities
The ACE_Task class is the basis of ACE's object-oriented concurrency framework. It provides the following capabilities:
• It uses an instance of ACE_Message_Queue from Section 5.2 to queue messages that are passed between tasks.
• It can be used in conjunction with the ACE_Thread_Manager to become an active object [SSRB00] and process its queued messages in one or more threads of control.
• Since it inherits from ACE_Service_Object, instances of ACE_Task can be linked/unlinked dynamically via the ACE Service Configurator framework from Chapter 4, which implements the Component Configurator pattern [SSRB00].
• Since ACE_Service_Object inherits from ACE_Event_Handler, instances of ACE_Task can serve as concrete event handlers via the ACE Reactor framework from Chapter 3, which implements the Reactor pattern [SSRB00].
• It can be subclassed to create application-defined methods that queue and/or process messages.

Our focus in this section is on the ACE_Task capabilities for message processing. It obtains the event handling and dynamic linking/unlinking capabilities by inheriting from the ACE_Service_Object class described in the previous two chapters.
The interface for the ACE_Task class is shown in Figure 5.5 and its key methods are shown in the following table:

• open(), close(): Hook methods that perform application-defined initialization and termination activities.
• put(): A hook method that can be used to pass a message to a task, where it can be processed immediately or queued for subsequent processing in the svc() hook method.
• svc(): A hook method run by one or more threads to process messages that are placed on the queue via put().
• getq(), putq(): Insert and remove messages from the task's message queue (only visible to subclasses of ACE_Task).
• thr_mgr(): Get and set a pointer to the task's ACE_Thread_Manager.
• activate(): Uses the ACE_Thread_Manager to convert the task into an active object that runs the svc() method in one or more threads.

ACE_Task must be customized by subclasses to provide application-defined functionality by overriding its hook methods. For example, ACE_Task subclasses can override its open() and close() hook methods to perform application-defined ACE_Task initialization and termination activities. These activities include spawning and canceling threads and allocating and freeing resources, such as connection control blocks, I/O handles, and synchronization locks. ACE_Task subclasses can perform application-defined processing on messages by overriding its put() and svc() hook methods to implement the following two message processing models:
1. Synchronous processing. The put() method is the entry point into an ACE_Task, i.e., it's used to pass messages to a task. To minimize overhead, pointers to ACE_Message_Block objects are passed between tasks to avoid copying their data. Task processing can be performed synchronously in the context of the put() method if it executes solely as a passive object, i.e., if its caller's thread is borrowed for the duration of its processing.


ACE_Task
+ thr_mgr_ : ACE_Thread_Manager *
+ thr_count_ : size_t
+ msg_queue_ : ACE_Message_Queue *
+ open (args : void *) : int
+ close (flags : u_long) : int
+ put (mb : ACE_Message_Block *, timeout : ACE_Time_Value *) : int
+ svc () : int
+ getq (mb : ACE_Message_Block *&, timeout : ACE_Time_Value *) : int
+ putq (mb : ACE_Message_Block *, timeout : ACE_Time_Value *) : int
+ activate (flags : long, threads : int) : int
+ thr_mgr () : ACE_Thread_Manager *
+ thr_mgr (mgr : ACE_Thread_Manager *)

Figure 5.5: The ACE Task Class


2. Asynchronous processing. The svc() method can be overridden
by a subclass and used to perform application-defined processing asynchronously with respect to other activities in an application. Unlike put(),
the svc() method is not invoked by a client of a task directly. Instead,
its invoked by one or more threads when a task becomes an active object, i.e., after its activate() method is called. This method uses the
ACE_Thread_Manager associated with an ACE_Task to spawn one or more
threads, as follows:
template <class SYNCH_STRATEGY> int
ACE_Task<SYNCH_STRATEGY>::activate (long flags,
                                    int n_threads,
                                    /* Other params omitted */)
{
  // ...
  thr_mgr ()->spawn_n (n_threads,
                       &ACE_Task<SYNCH_STRATEGY>::svc_run,
                       ACE_static_cast (void *, this),
                       flags,
                       /* Other params omitted */);
  // ...
}

[Figure 5.6: Task Activate Behavior. The figure traces the activation sequence: (1) ACE_Task::activate() calls (2) ACE_Thread_Manager::spawn (svc_run, this), which calls (3) the Win32 CreateThread (0, 0, svc_run, this, 0, &thread_id) to run (4) the static ACE_Task<SYNCH_STRATEGY>::svc_run() adapter in the newly spawned thread, which invokes void *status = t->svc () and returns status as the thread's exit value.]

The ACE_Task::svc_run() method is a static method used as an adapter function. It runs in the newly spawned thread(s) of control, which provide an execution context for the svc() hook method. Figure 5.6 illustrates the steps associated with activating an ACE_Task using the Win32 CreateThread() function to spawn the thread. Naturally, the ACE_Task class shields applications from any Win32-specific details.
When an ACE_Task subclass executes as an active object, its svc() method often runs an event loop that waits for messages to arrive on the task's ACE_Message_Queue. This queue can be used to buffer a sequence of data messages and control messages for subsequent processing by a task's svc() method. As messages arrive and are enqueued by a task's put() method, the svc() method runs in separate thread(s), dequeueing the messages and performing application-defined processing concurrently, as shown in Figure 5.7. Sidebar 18 compares and contrasts the ACE_Task capabilities with the Java Runnable interface and Thread class.

[Figure 5.7: Passing Messages Between ACE Task Objects. A put(msg) call on a task enqueues the message on its ACE_Message_Queue via putq(msg); the task's svc() thread dequeues it via getq(msg), performs do_work(msg), and passes it to the next task in the chain via put(msg).]


Sidebar 18: ACE_Task vs. Java Runnable and Thread

If you've used Java's Runnable interface and Thread class [Lea99], the ACE_Task design should look familiar. The following are the similarities and differences between the two designs:
• ACE_Task::activate() is similar to the Java Thread::start(), i.e., they both spawn internal threads. Java Thread::start() only spawns one thread, however, whereas activate() can spawn multiple threads within the same ACE_Task, which makes it easy to implement thread pools, as shown in Section 5.3.
• ACE_Task::svc() is similar to the Java Runnable::run() method, i.e., both methods are hooks that run in newly spawned thread(s) of control.
• ACE_Task also contains a message queue, which allows applications to exchange and buffer messages. In contrast, this queueing capability must be added by Java developers explicitly.
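The activate()/svc() structure can be sketched in standard C++ to make the control flow concrete. This Task class is an illustrative miniature, not the real ACE_Task, which also owns a message queue and the Service Configurator and Reactor hooks:

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Miniature of the ACE_Task active-object structure: activate()
// spawns N threads that all run the svc() hook.
class Task {
public:
  virtual ~Task () { wait (); }

  int activate (int n_threads) {          // Become an "active object".
    for (int i = 0; i < n_threads; ++i)
      threads_.emplace_back ([this] { svc (); });
    return 0;
  }
  void wait () {                          // Join all svc() threads.
    for (auto &t : threads_)
      if (t.joinable ()) t.join ();
  }
  virtual int svc () = 0;                 // Hook run in each pool thread.

private:
  std::vector<std::thread> threads_;
};

class Counting_Task : public Task {
public:
  std::atomic<int> runs {0};
  int svc () override { ++runs; return 0; }
};

int demo_activate () {
  Counting_Task task;
  task.activate (4);                      // A MAX_POOL_THREADS-style pool.
  task.wait ();
  return task.runs.load ();               // svc() ran once per thread.
}
```

Unlike Java's Thread::start(), a single activate() call here spawns the whole pool, which is the point the sidebar makes.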

Example
This example shows how to combine the ACE_Task and ACE_Message_Queue classes with the ACE_Reactor from Chapter 3 and the ACE_Service_Config from Chapter 4 to implement a concurrent logging server. This server design is based on the Half-Sync/Half-Async pattern [SSRB00] and the eager spawning thread pool strategy described in Chapter 5 of [SH02].
As shown in Figure 5.8, a pool of worker threads is pre-spawned when the
logging server is launched. Log records can be processed concurrently until
the number of simultaneous client requests exceeds the number of worker
threads in the pool. At this point, additional requests are buffered in a synchronized ACE_Message_Queue until a worker thread becomes available.
The ACE_Message_Queue plays several roles in our thread pool logging server's half-sync/half-async concurrency design:
• It decouples the main thread from the pool of worker threads. This design allows multiple worker threads to be active simultaneously. It also offloads the responsibility for maintaining the queue from kernel-space to user-space.
• It helps to enforce flow control between clients and the server. When the number of bytes in the queue reaches its high-water mark,

[Figure 5.8: Architecture of the Thread Pool Logging Server. Client logging daemons (P1, P2, P3) send log records over TCP connections to the server logging daemon, where a Logging Acceptor and Logging Handlers enqueue them on a Message Queue; a pool of worker threads, each running svc(), dequeues and processes the records.]


its flow control protocol blocks the main thread. As the underlying
TCP socket buffers fill up, this flow control propagates back to the
servers clients, thereby preventing them from establishing new connections or sending log records. New connections and log records will
not be accepted until after the worker threads have a chance to catch
up and unblock the main thread.
Pre-spawning and queueing help to amortize the cost of thread creation
and bound the use of OS resources, which can improve server scalability
significantly.
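High-water-mark flow control can be sketched with a byte-counting bounded queue: enqueue() blocks once the queued bytes reach the mark and resumes when a consumer drains the queue below it. The watermark value and class names below are illustrative, not the ACE implementation:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Bounded queue with a byte-count high-water mark, mirroring how a
// full ACE_Message_Queue blocks the server's main thread.
class Watermark_Queue {
public:
  explicit Watermark_Queue (size_t high_water) : high_water_ (high_water) {}

  void enqueue (std::string msg) {
    std::unique_lock<std::mutex> lock (mutex_);
    // Producer blocks while adding msg would exceed the mark.
    not_full_.wait (lock, [&] { return bytes_ + msg.size () <= high_water_; });
    bytes_ += msg.size ();
    queue_.push (std::move (msg));
    not_empty_.notify_one ();
  }

  std::string dequeue () {
    std::unique_lock<std::mutex> lock (mutex_);
    not_empty_.wait (lock, [this] { return !queue_.empty (); });
    std::string msg = std::move (queue_.front ());
    queue_.pop ();
    bytes_ -= msg.size ();
    not_full_.notify_one ();   // Wake a producer blocked at the mark.
    return msg;
  }

private:
  std::mutex mutex_;
  std::condition_variable not_full_, not_empty_;
  std::queue<std::string> queue_;
  size_t bytes_ = 0;
  size_t high_water_;
};

int demo_watermark () {
  Watermark_Queue queue (16);          // 16-byte high-water mark.
  int consumed = 0;

  std::thread consumer ([&] {
    for (int i = 0; i < 8; ++i) { queue.dequeue (); ++consumed; }
  });

  // 8 x 8 bytes = 64 bytes total: the producer must block repeatedly
  // until the consumer drains below the high-water mark.
  for (int i = 0; i < 8; ++i)
    queue.enqueue ("8 bytes.");

  consumer.join ();
  return consumed;
}
```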
We start by including the necessary ACE header files:
#include "ace/Synch.h"
#include "ace/Task.h"
#include "Reactor_Logging_Server.h"

The following table outlines the classes that we'll cover in the example below:


• TP_Logging_Task: Runs as an active object, processing and printing log records.
• TP_Logging_Acceptor: A factory that accepts connections and creates TP_Logging_Handler objects.
• TP_Logging_Handler: Target of upcalls from the ACE_Reactor that receives log records from clients.
• TP_Logging_Server: A facade class that integrates the other three classes together.

TP_Logging_Task. This class inherits from ACE_Task and is configured to use a synchronized ACE_Message_Queue and a pool of worker threads that all run the same svc() method.
class TP_Logging_Task
  : public ACE_Task<ACE_MT_SYNCH> // Instantiated with an MT synchronization trait.
{
public:
  enum { MAX_POOL_THREADS = 4 };

  // ... Methods defined below ...
};

The TP_Logging_Task::open() hook method calls the ACE_Task::activate() method to convert this task into an active object.
virtual int open (void *) {
  return activate (THR_NEW_LWP | THR_DETACHED,
                   MAX_POOL_THREADS);
}

When activate() returns, the TP_Logging_Task::svc() method will be running in MAX_POOL_THREADS separate threads. We show the TP_Logging_Task::svc() method implementation on page 147, after we describe the TP_Logging_Acceptor and TP_Logging_Handler classes.
The TP_Logging_Task::put() method inserts a message block containing a log record into the queue.
virtual int put (ACE_Message_Block *mblk, ACE_Time_Value *) {
  return putq (mblk);
}

The TP_Logging_Task::close() hook method closes the message queue, which will cause the threads in the pool to exit.
virtual int close (u_long = 0) {
  msg_queue ()->close ();
  return 0;
}


TP_Logging_Acceptor. This class inherits from the Logging_Acceptor on page 50 of Section 3.3 and overrides its handle_input() method to create instances of TP_Logging_Handler.
class TP_Logging_Acceptor : public Logging_Acceptor
{
public:
  TP_Logging_Acceptor (ACE_Reactor *r = ACE_Reactor::instance ())
    : Logging_Acceptor (r) {}

  virtual int handle_input (ACE_HANDLE) {
    TP_Logging_Handler *peer_handler;
    ACE_NEW_RETURN (peer_handler,
                    TP_Logging_Handler (reactor ()),
                    -1);
    if (peer_acceptor_.accept (peer_handler->peer ()) == -1) {
      delete peer_handler;
      return -1;
    }
    else if (peer_handler->open () == -1)
      peer_handler->close ();
    return 0;
  }
};

TP_Logging_Handler. This class inherits from Logging_Handler_Adapter on page 53 in Section 3.3 and is used to receive log records from a connected client.
class TP_Logging_Handler : public Logging_Handler_Adapter
{
public:
  TP_Logging_Handler (ACE_Reactor *r)
    : Logging_Handler_Adapter (r) {}

  // Called when input events occur (e.g., connection or data).
  virtual int handle_input (ACE_HANDLE h);
};

ACE_SVC_FACTORY_DECLARE (TP_Logging_Handler);

The difference between Logging_Handler_Adapter::handle_input() and TP_Logging_Handler::handle_input() is that the latter doesn't process a log record immediately after receiving it. Instead, it combines the log record with a message block containing the client's log file and inserts the resulting composite message at the end of the task's message queue, as shown below:


 1 int TP_Logging_Handler::handle_input (ACE_HANDLE) {
 2   ACE_Message_Block *mblk;
 3   if (logging_handler_.recv_log_record (mblk) != -1) {
 4     ACE_Message_Block *log_blk =
 5       new ACE_Message_Block
 6         (ACE_reinterpret_cast (char *,
 7                                &log_file_));
 8
 9     log_blk->cont (mblk);
10     logging_task_.put (log_blk);
11     return 0;
12   } else
13     return -1;
14 }

Lines 2–3 First read a log record from a socket into an ACE_Message_Block.
Lines 4–9 Create another ACE_Message_Block, called log_blk, that contains a pointer to the log_file_. Attach mblk to log_blk's continuation chain to form a composite message.
Line 10 Insert the composite message into the TP_Logging_Task's message queue.
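The continuation-chain technique used to build the composite message can be illustrated with a bare-bones stand-in for ACE_Message_Block; only the cont() linkage is modeled here, not reference counting or data blocks:

```cpp
#include <string>

// Simplified stand-in for ACE_Message_Block: each block's cont()
// points at the next fragment of a composite message.
struct Message_Block {
  std::string data;
  Message_Block *cont_ = nullptr;

  explicit Message_Block (std::string d) : data (std::move (d)) {}
  void cont (Message_Block *next) { cont_ = next; }
  Message_Block *cont () const { return cont_; }
};

// Walk a composite message, concatenating every fragment.
std::string flatten (const Message_Block *head) {
  std::string result;
  for (; head != nullptr; head = head->cont ())
    result += head->data;
  return result;
}

std::string demo_composite () {
  Message_Block log_file ("logfile:");   // First fragment (cf. log_blk).
  Message_Block record ("record");       // Chained fragment (cf. mblk).
  log_file.cont (&record);
  return flatten (&log_file);
}
```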
Now that we've shown the TP_Logging_Handler class, we can show the TP_Logging_Task::svc() method, which runs concurrently in each of the worker threads. This method runs its own event loop that blocks on the synchronized message queue. After a message is enqueued by the TP_Logging_Handler::handle_input() method, it'll be dequeued by an available worker thread, demarshaled, and written to the log file corresponding to its client.
 1 int TP_Logging_Task::svc () {
 2   for (ACE_Message_Block *log_blk; getq (log_blk) != -1; ) {
 3     ACE_FILE_IO *log_file =
 4       ACE_reinterpret_cast (ACE_FILE_IO *, log_blk->rd_ptr ());
 5
 6     Logging_Handler logging_handler (log_file);
 7     logging_handler.write_log_record (log_blk->cont ());
 8
 9     log_blk->cont (0);
10     log_blk->release ();
11   }
12   return 0;
13 }


Lines 2–4 Call the getq() method, which blocks until a message block is available. Each message block is actually a composite message that contains the following three message blocks chained together via their continuation pointers:
1. The ACE_FILE_IO object that's used to write the log record,
2. The marshaled log record contents, and
3. The hostname of the connected client.
Lines 6–7 Initialize a Logging_Handler with the log_file and then call its write_log_record() method, which writes the log record to the log file. The write_log_record() method is responsible for releasing its message blocks, as shown in Chapter 4 of [SH02].
Lines 9–10 After the log record is written, we release the log_blk message block to reclaim the message block allocated to store the ACE_FILE_IO pointer. We first set the continuation field to NULL so that only log_blk itself is released. Note that we don't delete the ACE_FILE_IO memory, since it's borrowed from the TP_Logging_Handler rather than allocated dynamically.
TP_Logging_Server. This facade class contains an instance of TP_Logging_Task and Reactor_Logging_Server.
class TP_Logging_Server : public ACE_Service_Object
{
protected:
  // Contains the reactor, acceptor, and handlers.
  typedef Reactor_Logging_Server<TP_Logging_Acceptor>
          LOGGING_DISPATCHER;
  LOGGING_DISPATCHER *logging_dispatcher_;

  // Contains the pool of worker threads.
  TP_Logging_Task *logging_task_;

public:
  // Other methods defined below...
};

The TP_Logging_Server::init() hook method enhances the reactor-based logging server implementation in Chapter 3 by pre-spawning a pool of worker threads that process log records concurrently.
virtual int init (int argc, char *argv[]) {
  logging_dispatcher_ =
    new TP_Logging_Server::LOGGING_DISPATCHER
      (argc, argv, ACE_Reactor::instance ());

  logging_task_ = new TP_Logging_Task;
  return logging_task_->open (0);
}

The TP_Logging_Server::fini() method is shown next:
1 virtual int fini () {
2   logging_task_->close ();
3
4   logging_task_->thr_mgr ()->wait ();
5
6   delete logging_dispatcher_;
7   delete logging_task_;
8   return 0;
9 }

Line 2 Close the TP_Logging_Task, which signals the worker threads in the pool to exit.
Lines 4–8 Use the ACE_Thread_Manager's barrier synchronization feature to wait for the pool of threads to exit, then delete the dynamically allocated data members and return from fini().
For brevity, we omit the suspend(), resume(), and info() hook methods, which are similar to those shown in earlier examples.
We place the following ACE_SVC_FACTORY_DEFINE macro into the TPLS.cpp implementation file.
ACE_SVC_FACTORY_DEFINE (TP_Logging_Server);

This macro automatically defines the make_TP_Logging_Server() factory function that's used in the following svc.conf file:
dynamic TP_Logging_Server Service_Object *
TPLS:make_TP_Logging_Server() "-p $TP_LOGGING_SERVER_PORT"

This file configures the thread pool logging server by dynamically linking the TPLS DLL into the address space of the process and using ACE_DLL to extract the make_TP_Logging_Server() factory function from the TPLS symbol table. This function is called to obtain a pointer to a dynamically allocated TP_Logging_Server. The framework then calls the TP_Logging_Server::init() hook method on this pointer, passing in the "-p $TP_LOGGING_SERVER_PORT" string as its argc/argv argument. This string designates the port number where the logging server listens for client connection requests. If init() succeeds, the TP_Logging_Server pointer
i
i

i hsbo
2001/
page 1
i

150

Section 5.4 Summary

is stored in the ACE_Service_Repository under the name "TP_Logging_


Server".
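Under the hood, this factory-function mechanism is straightforward: ACE_SVC_FACTORY_DEFINE generates a function with C linkage that the Service Configurator looks up by name and invokes through a function pointer. The following standard C++ sketch approximates that flow, omitting the actual dynamic-linking step that ACE_DLL performs with dlopen() or LoadLibrary(). The Service_Object and TP_Logging_Server_Sim classes here are simplified stand-ins, not ACE's real classes:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical, simplified stand-ins for ACE classes; the names below are
// illustrative only and do not match ACE's actual implementation.
class Service_Object {
public:
  virtual ~Service_Object () = default;
  virtual int init (int argc, char *argv[]) = 0;
  virtual int fini () = 0;
};

class TP_Logging_Server_Sim : public Service_Object {
public:
  int init (int argc, char *argv[]) override {
    // Parse the "-p <port>" argument passed from the svc.conf entry.
    if (argc == 2 && std::string (argv[0]) == "-p") {
      port_ = std::stoi (argv[1]);
      return 0;
    }
    return -1;
  }
  int fini () override { return 0; }
  int port () const { return port_; }
private:
  int port_ = 0;
};

// Roughly what ACE_SVC_FACTORY_DEFINE expands to: a factory function with
// C linkage that a configurator can locate by name in a DLL's symbol table.
extern "C" Service_Object *make_TP_Logging_Server () {
  return new TP_Logging_Server_Sim;
}

// The configurator's side of the handshake: invoke the factory through a
// function pointer, then run the init() hook with the parsed arguments.
using Factory = Service_Object *(*) ();

std::unique_ptr<Service_Object> configure (Factory f, int argc, char *argv[]) {
  std::unique_ptr<Service_Object> svc (f ());
  if (svc->init (argc, argv) != 0)
    svc.reset ();                    // Reject services whose init() fails.
  return svc;
}
```

The extern "C" linkage matters in the real framework because it suppresses C++ name mangling, allowing the factory function to be found by the plain name given in the svc.conf file.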
We're now ready to show the main() function, which is identical to the ones we've shown earlier in this chapter and in Chapter 4.
#include "ace/Service_Config.h"
#include "ace/Reactor.h"
int main (int argc, char *argv[])
{
ACE_Service_Config::open (argc, argv);
ACE_Reactor::instance ()->run_reactor_event_loop ();
return 0;
}

5.4 Summary
The ACE Task framework allows developers to create and configure concurrent networked applications in a powerful and extensible object-oriented fashion. This framework provides the ACE_Task class that integrates multithreading with object-oriented programming and queueing. The queueing mechanism in the ACE_Task is based on the ACE_Message_Queue class that transfers messages between tasks efficiently. Since ACE_Task derives from the ACE_Service_Object class in Section 4.2, it's easy to design services that can run as active objects and be dispatched by the ACE Reactor framework.

This chapter illustrates how the ACE Reactor framework can be combined with the ACE Task framework to implement variants of the Half-Sync/Half-Async pattern [SSRB00]. The ACE Task framework classes can also be combined with the ACE_Future, ACE_Method_Request, and ACE_Activation_List classes to implement the Active Object pattern [SSRB00], as shown in the supplemental material at the ACE website http://ace.ece.uci.edu.
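Although the ACE_Future and ACE_Method_Request classes aren't covered in this chapter, the essence of the Active Object pattern can be approximated with standard C++ facilities. In the sketch below (which assumes a single scheduler thread and int-returning method requests), std::packaged_task plays the ACE_Method_Request role and std::future the ACE_Future role, with the internal queue standing in for the ACE_Activation_List:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <thread>

// A minimal Active Object approximation: clients enqueue method requests
// and immediately receive a future; one scheduler thread dequeues and
// executes the requests in order.
class Active_Object {
public:
  Active_Object () : scheduler_ (&Active_Object::svc, this) {}

  ~Active_Object () {
    enqueue ([this] { done_ = true; });   // "Poison pill" stops the scheduler.
    scheduler_.join ();
  }

  // Convert a method call into a queued request; return a future result.
  std::future<int> request (std::function<int ()> method) {
    auto task = std::make_shared<std::packaged_task<int ()>> (std::move (method));
    std::future<int> result = task->get_future ();
    enqueue ([task] { (*task) (); });
    return result;
  }

private:
  void enqueue (std::function<void ()> op) {
    { std::lock_guard<std::mutex> g (lock_); queue_.push_back (std::move (op)); }
    cond_.notify_one ();
  }

  // Scheduler thread: dequeue and execute requests one at a time.
  void svc () {
    while (!done_) {
      std::unique_lock<std::mutex> g (lock_);
      cond_.wait (g, [this] { return !queue_.empty (); });
      auto op = std::move (queue_.front ());
      queue_.pop_front ();
      g.unlock ();
      op ();
    }
  }

  std::deque<std::function<void ()>> queue_;
  std::mutex lock_;
  std::condition_variable cond_;
  bool done_ = false;               // Written and read only by the scheduler.
  std::thread scheduler_;
};
```

Requests queued after the destructor's poison pill are discarded in this sketch; a production Active Object would drain the queue or reject new requests explicitly.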


Bibliography

[Ann98] Anne Thomas, Patricia Seybold Group. Enterprise JavaBeans Technology. java.sun.com/products/ejb/white_paper.html, December 1998. Prepared for Sun Microsystems, Inc.

[Aus98] Matt Austern. Generic Programming and the STL: Using and Extending the C++ Standard. Addison-Wesley, 1998.

[BA90] M. Ben-Ari. Principles of Concurrent and Distributed Programming. Prentice Hall International Series in Computer Science, 1990.

[BEA99] BEA Systems, et al. CORBA Component Model Joint Revised Submission. Object Management Group, OMG Document orbos/99-07-01 edition, July 1999.

[Bja00] Bjarne Stroustrup. The C++ Programming Language, 3rd Edition. Addison-Wesley, 2000.

[BL88] Ronald E. Barkley and T. Paul Lee. A Heap-Based Callout Implementation to Meet Real-Time Needs. In Proceedings of the USENIX Summer Conference, pages 213–222. USENIX Association, June 1988.

[BMR+96] Frank Buschmann, Regine Meunier, Hans Rohnert, Peter Sommerlad, and Michael Stal. Pattern-Oriented Software Architecture: A System of Patterns. Wiley and Sons, 1996.

[CB97] John Crawford and Steve Ball. Monostate Classes: The Power of One. C++ Report, 9(5), May 1997.

[CL97] Patrick Chan and Rosanna Lee. The Java Class Libraries: java.applet, java.awt, java.beans, Volume 2. Addison-Wesley, Reading, Massachusetts, 1997.

[Com84] Douglas E. Comer. Operating System Design: The Xinu Approach. Prentice Hall, Englewood Cliffs, NJ, 1984.

[FBB+99] Martin Fowler, Kent Beck, John Brant, William Opdyke, and Don Roberts. Refactoring: Improving the Design of Existing Code. Addison-Wesley, Reading, Massachusetts, 1999.

[FJS99a] Mohamed Fayad, Ralph Johnson, and Douglas C. Schmidt, editors. Object-Oriented Application Frameworks: Problems & Perspectives. Wiley & Sons, New York, NY, 1999.

[FJS99b] Mohamed Fayad, Ralph Johnson, and Douglas C. Schmidt, editors. Object-Oriented Application Frameworks: Applications & Experiences. Wiley & Sons, New York, NY, 1999.

[FY99] Brian Foote and Joe Yoder. Big Ball of Mud. In Brian Foote, Neil Harrison, and Hans Rohnert, editors, Pattern Languages of Program Design. Addison-Wesley, Reading, Massachusetts, 1999.

[GHJV95] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, Massachusetts, 1995.

[HJE95] Herman Hueni, Ralph Johnson, and Robert Engel. A Framework for Network Protocol Software. In Proceedings of OOPSLA '95, Austin, Texas, October 1995. ACM.

[HLS97] Timothy H. Harrison, David L. Levine, and Douglas C. Schmidt. The Design and Performance of a Real-time CORBA Event Service. In Proceedings of OOPSLA '97, pages 184–199, Atlanta, GA, October 1997. ACM.

[HP91] Norman C. Hutchinson and Larry L. Peterson. The x-kernel: An Architecture for Implementing Network Protocols. IEEE Transactions on Software Engineering, 17(1):64–76, January 1991.

[HV99] Michi Henning and Steve Vinoski. Advanced CORBA Programming with C++. Addison-Wesley, Reading, Massachusetts, 1999.

[JF88] R. Johnson and B. Foote. Designing Reusable Classes. Journal of Object-Oriented Programming, 1(5):22–35, June/July 1988.

[JKN+01] Philippe Joubert, Robert King, Richard Neves, Mark Russinovich, and John Tracey. High-Performance Memory-Based Web Servers: Kernel and User-Space Performance. In Proceedings of the USENIX Technical Conference, Boston, MA, June 2001.

[Joh97] Ralph Johnson. Frameworks = Patterns + Components. Communications of the ACM, 40(10), October 1997.

[Jos99] Nicolai Josuttis. The C++ Standard Library: A Tutorial and Reference. Addison-Wesley, Reading, Massachusetts, 1999.

[KMC+00] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The Click Modular Router. ACM Transactions on Computer Systems, 18(3):263–297, August 2000.

[Koe92] Andrew Koenig. When Not to Use Virtual Functions. C++ Journal, 2(2), 1992.

[Kof93] Thomas Kofler. Robust Iterators for ET++. Structured Programming, 14(2):62–85, 1993.

[Lea99] Doug Lea. Concurrent Programming in Java: Design Principles and Patterns, Second Edition. Addison-Wesley, Reading, Massachusetts, 1999.

[Mey97] Bertrand Meyer. Object-Oriented Software Construction, Second Edition. Prentice Hall, Englewood Cliffs, NJ, 1997.

[Obj98] Object Management Group. CORBAServices: Common Object Services Specification, Updated Edition, 95-3-31 edition, December 1998.

[Obj01] Object Management Group. The Common Object Request Broker: Architecture and Specification, 2.5 edition, September 2001.

[OOS01] Ossama Othman, Carlos O'Ryan, and Douglas C. Schmidt. An Efficient Adaptive Load Balancing Service for CORBA. IEEE Distributed Systems Online, 2(3), March 2001.

[OSI92a] OSI Special Interest Group. Data Link Provider Interface Specification, December 1992.

[OSI92b] OSI Special Interest Group. Transport Provider Interface Specification, December 1992.

[Pre94] Wolfgang Pree. Design Patterns for Object-Oriented Software Development. Addison-Wesley, Reading, Massachusetts, 1994.

[Rag93] Steve Rago. UNIX System V Network Programming. Addison-Wesley, Reading, Massachusetts, 1993.

[Ric97] Jeffrey Richter. Advanced Windows, Third Edition. Microsoft Press, Redmond, WA, 1997.

[Rit84] Dennis Ritchie. A Stream Input-Output System. AT&T Bell Labs Technical Journal, 63(8):311–324, October 1984.

[Rob99] Robert Sedgewick. Algorithms in C++, Third Edition. Addison-Wesley, 1999.

[Sch98] Douglas C. Schmidt. Evaluating Architectures for Multi-threaded CORBA Object Request Brokers. Communications of the ACM special issue on CORBA, 41(10), October 1998.

[Sch00a] Douglas C. Schmidt. Applying a Pattern Language to Develop Application-level Gateways. In Linda Rising, editor, Design Patterns in Communications. Cambridge University Press, 2000.

[Sch00b] Douglas C. Schmidt. Applying Design Patterns to Flexibly Configure Network Services in Distributed Systems. In Linda Rising, editor, Design Patterns in Communications. Cambridge University Press, 2000.

[Sch00c] Douglas C. Schmidt. Why Software Reuse has Failed and How to Make it Work for You. C++ Report, 12(1), January 2000.

[SH02] Douglas C. Schmidt and Stephen D. Huston. C++ Network Programming: Resolving Complexity Using ACE and Patterns. Addison-Wesley, Reading, Massachusetts, 2002.

[SLM98] Douglas C. Schmidt, David L. Levine, and Sumedh Mungee. The Design and Performance of Real-Time Object Request Brokers. Computer Communications, 21(4):294–324, April 1998.

[Sol98] David A. Solomon. Inside Windows NT, 2nd Edition. Microsoft Press, Redmond, Washington, 1998.

[Som97] Peter Sommerlad. The Manager Design Pattern. In Robert Martin, Frank Buschmann, and Dirk Riehle, editors, Pattern Languages of Program Design. Addison-Wesley, Reading, Massachusetts, 1997.

[SS93] Douglas C. Schmidt and Tatsuya Suda. Transport System Architecture Services for High-Performance Communications Systems. IEEE Journal on Selected Areas in Communication, 11(4):489–506, May 1993.

[SS94] Douglas C. Schmidt and Tatsuya Suda. An Object-Oriented Framework for Dynamically Configuring Extensible Distributed Communication Systems. IEE/BCS Distributed Systems Engineering Journal (Special Issue on Configurable Distributed Systems), 2:280–293, December 1994.

[SS95a] Douglas C. Schmidt and Paul Stephenson. Experiences Using Design Patterns to Evolve System Software Across Diverse OS Platforms. In Proceedings of the 9th European Conference on Object-Oriented Programming, Aarhus, Denmark, August 1995. ACM.

[SS95b] Douglas C. Schmidt and Tatsuya Suda. Measuring the Performance of Parallel Message-based Process Architectures. In Proceedings of the Conference on Computer Communications (INFOCOM), pages 624–633, Boston, MA, April 1995. IEEE.

[SSRB00] Douglas C. Schmidt, Michael Stal, Hans Rohnert, and Frank Buschmann. Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects, Volume 2. Wiley & Sons, New York, NY, 2000.

[Sta96] Stan Lippman. Inside the C++ Object Model. Addison-Wesley, 1996.

[Ste98] W. Richard Stevens. UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI, Second Edition. Prentice Hall, Englewood Cliffs, NJ, 1998.

[Ste99] W. Richard Stevens. UNIX Network Programming, Volume 2: Interprocess Communications, Second Edition. Prentice Hall, Englewood Cliffs, NJ, 1999.

[SW93] W. Richard Stevens and Gary Wright. TCP/IP Illustrated, Volume 2. Addison-Wesley, Reading, Massachusetts, 1993.

[VL97] George Varghese and Tony Lauck. Hashed and Hierarchical Timing Wheels: Data Structures for the Efficient Implementation of a Timer Facility. IEEE Transactions on Networking, December 1997.

[Vli98] John Vlissides. Pattern Hatching: Design Patterns Applied. Addison-Wesley, Reading, Massachusetts, 1998.

[WLS+85] D. Walsh, B. Lyon, G. Sager, J. M. Chang, D. Goldberg, S. Kleiman, T. Lyon, R. Sandberg, and P. Weiss. Overview of the SUN Network File System. In Proceedings of the Winter USENIX Conference, Dallas, TX, January 1985.

[Woo97] Bobby Woolf. The Null Object Pattern. In Robert Martin, Frank Buschmann, and Dirk Riehle, editors, Pattern Languages of Program Design. Addison-Wesley, Reading, Massachusetts, 1997.


Back Cover Copy

Software for networked applications must possess the following all-too-rare qualities to be successful in today's competitive, fast-paced computing industry:

• Efficiency, to provide low latency to delay-sensitive applications, high performance to bandwidth-intensive applications, and predictability to real-time applications
• Flexibility, to support a growing range of multimedia datatypes, traffic patterns, and end-to-end quality of service (QoS) requirements without rewriting applications from scratch
• Extensibility, to support successions of quick updates and additions to take advantage of new requirements and emerging markets.

This book describes how frameworks can help developers navigate between
the limitations of
1. Lower-level native operating system APIs, which are inflexible and
non-portable and
2. Higher-level distributed computing middleware, which often lacks the
efficiency and flexibility for networked applications with stringent QoS
and portability requirements.
This book illustrates how to develop networked applications using ACE
frameworks, and shows how key patterns and design principles can be
used to develop and deploy successful object-oriented networked application software.
If you're designing software and systems that must be portable, flexible, extensible, predictable, reliable, and affordable, this book and the ACE frameworks will enable you to be more effective in all of these areas. You'll get firsthand knowledge from Doug Schmidt, the original architect and developer of ACE, of the reasons and thought processes behind the patterns and ideas that went into key aspects of the ACE frameworks. Co-author Steve Huston's expertise and knowledge from his extensive background in hardware and software environments and hands-on involvement in ACE since 1996 will take you further in understanding how to effectively tap the rich features embodied in the ACE framework.
Dr. Douglas C. Schmidt is an Associate Professor at the University of California, Irvine, where he studies patterns and optimizations for distributed real-time and embedded middleware. Doug led the development of ACE and TAO, which are widely-used open-source middleware containing a rich set of reusable frameworks that implement key concurrency and networking patterns. He's also been the editor-in-chief of the C++ Report magazine and co-authors the Object Interconnections column for the C/C++ Users Journal.

Stephen D. Huston is President and CEO of Riverace Corporation, the premier provider of technical support and consulting services for companies looking to work smarter and keep software projects on track using ACE. Mr. Huston has more than twenty years of software development experience, focusing primarily on network protocol and networked application development in a wide range of hardware and software environments. He has been involved with ACE's development since 1996 and has enjoyed helping many companies develop clean and efficient networked applications quickly using ACE.