Half-Sync/Half-Async
An Architectural Pattern for Efficient and Well-structured Concurrent I/O
Douglas C. Schmidt and Charles D. Cranor
schmidt@cs.wustl.edu and chuck@maria.wustl.edu
Department of Computer Science
Washington University
St. Louis, MO 63130, (314) 935-7538

An earlier version of this paper appeared as a chapter in the book "Pattern Languages of Program Design 2" (ISBN 0-201-89527-7), edited by John Vlissides, Jim Coplien, and Norm Kerth, published by Addison-Wesley, 1996.
This paper describes the Half-Sync/Half-Async pattern, which integrates synchronous and asynchronous I/O models to support both programming simplicity and execution efficiency in complex concurrent software systems. In this pattern, higher-level tasks use a synchronous I/O model, which simplifies concurrent programming. In contrast, lower-level tasks use an asynchronous I/O model, which enhances execution efficiency. This pattern is widely used in operating systems such as UNIX, Mach, Windows NT, and VMS, as well as other complex concurrent systems.
1 Intent
The Half-Sync/Half-Async pattern decouples synchronous I/O from asynchronous I/O in a system to simplify concurrent programming effort without degrading execution efficiency.
2 Motivation
To illustrate the Half-Sync/Half-Async pattern, consider the software architecture of the BSD UNIX [1] networking subsystem shown in Figure 1. The BSD UNIX kernel coordinates I/O between asynchronous communication devices (such as network adapters and terminals) and applications running on the OS. Packets arriving on communication devices are delivered to the OS kernel via interrupt handlers initiated asynchronously by hardware interrupts. These handlers receive packets from devices and trigger higher layer protocol processing (such as IP, TCP, and UDP). Valid packets containing application data are queued at the Socket layer. The OS then dispatches any user processes waiting to consume the data. These processes synchronously receive data from the Socket layer using the read system call. A user process can make read calls at any point; if the data is not available the process will sleep until the data arrives from the network.

[Figure 1: BSD UNIX Software Architecture. User-level processes sit above the Socket layer in the BSD UNIX kernel; the annotated steps are 1, 4: read(data), 2: interrupt, and 3: enqueue(data).]

In the BSD architecture, the kernel performs I/O asynchronously in response to device interrupts. In contrast, user-level applications perform I/O synchronously. This separation of concerns into a "half synchronous and half asynchronous" concurrent I/O structure resolves the following two forces:
Need for programming simplicity:

Programming an asynchronous I/O model can be complex because input and output operations are triggered by interrupts. Asynchrony can cause subtle timing problems and race conditions when the current thread of control is preempted by an interrupt handler. Moreover, interrupt-driven programs require extra data structures in addition to the run-time stack. These data structures are used to save and restore state explicitly when events occur asynchronously. In addition, debugging asynchronous programs is hard since external events occur at different points of time during program execution.

In contrast, programming applications with a synchronous I/O model is easier because I/O operations occur at well-defined points in the processing sequence. Moreover, programs that use synchronous I/O can block awaiting the completion of I/O operations. The use of blocking I/O allows programs to maintain state information and execution history in a run-time stack of activation records, rather than in separate data structures. Thus, there is a strong incentive to use a synchronous I/O model to simplify programming.
Need for execution efficiency:

The asynchronous I/O model maps efficiently onto hardware devices that are driven by interrupts. Asynchronous I/O enables communication and computation to proceed simultaneously. In addition, context switching overhead is minimized because the amount of information necessary to maintain program state is relatively small [2]. Thus, there is a strong incentive to use an asynchronous I/O model to improve run-time performance.

In contrast, a completely synchronous I/O model may be inefficient if each source of events (such as a network adapter, terminal, or timer) is associated with a separate active object (such as a process or thread). Each of these active objects contains a number of resources (such as a stack and a set of registers) that allow it to block while waiting on its source of events. Thus, this synchronous I/O model increases the time and space required to create, schedule, dispatch, and terminate separate active objects.
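The stack-based bookkeeping that makes the synchronous model attractive can be seen in a small sketch. This is illustrative code, not from the paper; the helper name sync_recv_record is an assumption. The loop's partial-read state (how many bytes have arrived so far) lives in local variables on the run-time stack, whereas an asynchronous design would need to save that state in a separate per-connection structure between events.

```c
#include <unistd.h>

/* Synchronous I/O sketch: block in read() until a complete fixed-size
 * record has arrived.  The execution history ("got" bytes so far) is
 * kept on the caller's run-time stack -- no explicit state machine. */
ssize_t sync_recv_record(int fd, char *buf, size_t len)
{
    size_t got = 0;                 /* state lives on the stack */
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);  /* may sleep here */
        if (n <= 0)
            return n;               /* EOF or error */
        got += (size_t)n;
    }
    return (ssize_t)got;
}
```

The caller simply invokes the helper and sleeps until the record is complete, which is exactly the simplicity the synchronous force argues for.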
3 Solution
To resolve the tension between the need for concurrent programming simplicity and the need for execution efficiency, use the Half-Sync/Half-Async pattern. This pattern integrates synchronous and asynchronous I/O models in an efficient and well-structured manner. In this pattern, higher-level tasks (such as database queries or file transfers) use a synchronous I/O model, which simplifies concurrent programming. In contrast, lower-level tasks (such as servicing interrupts from network controllers) use an asynchronous I/O model, which enhances execution efficiency. Because there are usually more high-level tasks than low-level tasks in a system, this pattern localizes the complexity of asynchronous processing within a single layer of a software architecture. Communication between tasks in the Synchronous and Asynchronous layers is mediated by a Queueing layer.
4 Applicability
Use the Half-Sync/Half-Async pattern when

- a system possesses the following characteristics:

  - the system must perform tasks in response to external events that occur asynchronously, and

  - it is inefficient to dedicate a separate thread of control to perform synchronous I/O for each source of external events, and

  - the higher-level tasks in the system can be simplified significantly if I/O is performed synchronously.

- one or more tasks in a system must run in a single thread of control, while other tasks may benefit from multi-threading.

  For example, legacy libraries like X windows and Sun RPC are often non-reentrant. Therefore, multiple threads of control cannot safely invoke these library functions concurrently. However, to ensure quality of service or to take advantage of multiple CPUs, it may be necessary to perform bulk data transfers or database queries in separate threads. The Half-Sync/Half-Async pattern can be used to decouple the single-threaded portions of an application from the multi-threaded portions. This decoupling enables non-reentrant functions to be used correctly, without requiring changes to existing code.

[Figure 2: The Structure of Participants in the Half-Sync/Half-Async Pattern. From top to bottom: Synchronous task layer, Queueing layer, and Asynchronous task layer, annotated with the steps 1, 4: read(data), 2: interrupt, and 3: enqueue(data).]
5 Structure and Participants
Figure 2 illustrates the structure of participants in the Half-Sync/Half-Async pattern. These participants are described below.
Synchronous task layer
User processes
The tasks in this layer perform high-level I/O operations that transfer data synchronously to message queues in the Queueing layer. Unlike the Asynchronous layer, tasks in the Synchronous layer are active objects [3] that have their own run-time stack and registers. Therefore, they can block while performing synchronous I/O.
Queueing layer
Socket layer
This layer provides a synchronization and buffering point between the Synchronous task layer and the Asynchronous task layer. I/O events processed by asynchronous tasks are buffered in message queues at the Queueing layer for subsequent retrieval by synchronous tasks (and vice versa).
Asynchronous task layer
BSD UNIX kernel
The tasks in this layer handle lower-level events from multiple external event sources (such as network interfaces or terminals). Unlike the Synchronous layer, tasks in the Asynchronous layer are passive objects that do not have their own run-time stack or registers. Thus, they cannot block indefinitely on any single source of events.
External event sources
Network interfaces
External devices (such as network interfaces and disk controllers) generate events that are received and processed by the Asynchronous task layer.
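The Queueing layer described above can be sketched as a small bounded message queue guarded by a mutex and condition variable. This is a minimal illustration, not the paper's implementation; the names msg_queue, mq_put, and mq_get are assumptions. mq_put is the non-blocking entry point for the (passive) Asynchronous layer, while mq_get is the blocking entry point for (active) Synchronous tasks.

```c
#include <pthread.h>

#define MQ_CAP 16  /* fixed capacity, chosen arbitrarily for the sketch */

typedef struct {
    void           *items[MQ_CAP];  /* ring buffer of message pointers */
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} msg_queue;

void mq_init(msg_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

/* Asynchronous task layer side: enqueue and wake a waiting synchronous
 * task.  A passive object must never block here, so a full queue is
 * reported (the caller might drop the packet, as a kernel would). */
int mq_put(msg_queue *q, void *msg)
{
    pthread_mutex_lock(&q->lock);
    if (q->count == MQ_CAP) {
        pthread_mutex_unlock(&q->lock);
        return -1;                          /* full: refuse, don't wait */
    }
    q->items[q->tail] = msg;
    q->tail = (q->tail + 1) % MQ_CAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
    return 0;
}

/* Synchronous task layer side: an active object may sleep until a
 * message arrives -- the "read(data)" step in Figure 2. */
void *mq_get(msg_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *msg = q->items[q->head];
    q->head = (q->head + 1) % MQ_CAP;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return msg;
}
```

Note the asymmetry, which mirrors the pattern: only the synchronous side ever waits on the condition variable; the asynchronous side signals and returns immediately.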
6 Collaborations
Figure 3 illustrates the dynamic collaboration among participants in the Half-Sync/Half-Async pattern when input events arrive at an external event source (output event processing is similar). These collaborations are divided into the following three phases:
Async phase – in this phase external sources of events interact with the Asynchronous task layer via interrupts or asynchronous event notifications.

Queueing phase – in this phase the Queueing layer provides a well-defined synchronization point that buffers messages passed between the Synchronous and Asynchronous task layers in response to input events.

Sync phase – in this phase tasks in the Synchronous layer retrieve messages placed into the Queueing layer by tasks in the Asynchronous layer. Note that the protocol used to determine how data is passed between the Synchronous and Asynchronous task layers is orthogonal to how the Queueing layer mediates communication between the two layers.

The Asynchronous and Synchronous layers in Figure 3 communicate in a "producer/consumer" manner by passing messages. The key to understanding the pattern is to recognize that Synchronous tasks are active objects. Thus, they can make blocking read calls at any point in accordance with their protocol. If the data is not yet available, tasks implemented as active objects can sleep until the data arrives. In contrast, tasks in the Asynchronous layer are passive objects. Thus, they cannot block on read calls. Instead, tasks implemented as passive objects are triggered by notifications or interrupts from external sources of events.

[Figure 3: Collaboration between Layers in the Half-Sync/Half-Async Pattern. An External Event Source, an Async Task, a Message Queue, and a Sync Task interact across the Async, Queueing, and Sync phases.]
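The three phases can be sketched end to end with a pipe standing in for the Queueing layer. This is an illustration under assumed names (queue_fds, async_task, sync_task): a helper thread emulates an interrupt handler (Async phase), its write() is the enqueue (Queueing phase), and the blocking read() in the synchronous task is the Sync phase.

```c
#include <pthread.h>
#include <unistd.h>

static int queue_fds[2];   /* [0] = synchronous side, [1] = async side */

/* Asynchronous task: triggered by an (emulated) external event.  As a
 * passive object it must not block waiting for consumers, so it just
 * enqueues the data and returns -- step 3: enqueue(data). */
static void *async_task(void *arg)
{
    (void)arg;
    ssize_t n = write(queue_fds[1], "pkt", 3);
    (void)n;                       /* sketch: ignore short-write handling */
    return NULL;
}

/* Synchronous task: an active object with its own stack, so it may
 * sleep inside read() until data arrives -- step 4: read(data). */
ssize_t sync_task(char *buf, size_t len)
{
    return read(queue_fds[0], buf, len);
}
```

A driver would create the pipe, spawn async_task on a thread to simulate the interrupt, and then call sync_task, which blocks until the message crosses the Queueing layer.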
7 Consequences
The Half-Sync/Half-Async pattern yields the following benefits:

Higher-level tasks are simplified because they are shielded from lower-level asynchronous I/O. Complex concurrency control, interrupt handling, and timing issues are delegated to the Asynchronous task layer. This layer handles the low-level details (such as interrupt handling) of programming an asynchronous I/O system. The Asynchronous layer also manages the interaction with hardware-specific components (such as DMA, memory management, and device registers).

Synchronization policies in each layer are decoupled. Therefore each layer need not use the same concurrency control strategies. For example, in the single-threaded BSD UNIX kernel the Asynchronous task layer implements concurrency control via low-level mechanisms (such as raising and lowering CPU interrupt levels). In contrast, user processes in the Synchronous task layer implement concurrency control via higher-level synchronization constructs (such as semaphores, message queues, condition variables, and record locks).

Inter-layer communication is localized at a single point because all interaction is mediated by the Queueing layer. The Queueing layer buffers messages passed
