Actor-Oriented Programming for Wireless Sensor Networks

Elaine Cheong

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2007-112
http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-112.html

August 30, 2007

Copyright © 2007, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong

B.S. (University of Maryland, College Park) 2000 M.S. (University of California, Berkeley) 2003

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences in the GRADUATE DIVISION of the UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge: Professor Edward A. Lee, Chair; Professor Eric A. Brewer; Professor Paul K. Wright. Fall 2007

Actor-Oriented Programming for Wireless Sensor Networks
Copyright 2007 by Elaine Cheong

Abstract

Actor-Oriented Programming for Wireless Sensor Networks

by Elaine Cheong

Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences

University of California, Berkeley

Professor Edward A. Lee, Chair

Wireless sensor networks is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, health care, transportation, seismic and structural monitoring, and home automation. Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process.

In this dissertation, I advocate using an actor-oriented approach to designing, programming, generating, and simulating wireless sensor network applications. Actor-oriented programming provides a common high-level language that unifies the programming interface between the operating system, node-centric, middleware, and macroprogramming layers of a sensor network application.

This dissertation presents the TinyGALS (Globally Asynchronous, Locally Synchronous) programming model, which provides constructs to systematically build concurrent tasks called actors. TinyGALS is implemented in the galsC programming language, which is built on the TinyOS programming model. The galsC compiler toolsuite provides high-level type checking and code generation facilities and allows developers to deploy actor-oriented programs on actual sensor node hardware.

This dissertation then describes Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, an actor-oriented graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for TinyOS networks.

This dissertation also presents methods for using higher-order actors with various metaprogramming and generative programming techniques that enable wireless sensor network application developers to create high-level, parameterizable descriptions and automatically generate sensor network simulation scenarios from these models.

The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks. All of the tools I developed and describe in this dissertation are open-source and freely available on the web.

Professor Edward A. Lee
Dissertation Committee Chair

To the teachers who have challenged, encouraged, and guided me through life.

Café Ptolemy
Executive Chefs: Elaine Cheong and Andrew Mihal

Appetizers
Avocado Smash and Double-Decker Baked Quesadillas
Corn Pancakes with Green Onions, Crème Fraîche, and Cherry Tomatoes

Salads
Fuyu Persimmon, Avocado & Watercress Salad with Discrete-Event Miso Dressing
Corn and Cherry Tomato Salad with Arugula

Soups
Mexican Chicken Soup
Butternut Squash Bisque
Why-the-Chicken-Crossed-the-Model Santa Fe-Tastic Tortilla Soup

Entrees
Chicken Pot Pie
Fettuccine with Concurrent Meyer Lemon Butter Sauce
Chicken Tikka Masala
Farfalle with Ptalon Pesto, Feta, and Cherry Tomatoes
Butternut Squash Lasagna
Cumin Crusted Chicken with Cotija and Mango-Garlic Sauce with Green Onion Pesto Mashed Motes
Butternut Squash, Actors, Metacabbage & Pancetta Risotto with Basil Oil

Desserts
Marillenknödel: Austrian apricot dumplings
Zwetschgendatschi: Bavarian plum delicacy

Drinks
Plum Granita with Limoncello
Vinho do IOPorto
Lychee-flavoured Ramune (ラムネ)

Contents

List of Figures
List of Tables

1 Introduction
  1.1 Wireless Sensor Networks
  1.2 Actor-Oriented Programming
  1.3 Actor-Oriented Programming for Wireless Sensor Networks
  1.4 Previously Published Work

2 Background
  2.1 TinyOS
    2.1.1 NesC syntax
    2.1.2 TinyOS execution model
  2.2 Ptolemy II
    2.2.1 VisualSense
  2.3 Summary

3 TinyGALS and galsC
  3.1 The TinyGALS Programming Model and galsC Language
    3.1.1 Programming constructs and language syntax
    3.1.2 Execution model and language semantics
    3.1.3 Link model within actors
    3.1.4 Type inference and type checking
    3.1.5 Summary
  3.2 Concurrency and Determinacy Issues
    3.2.1 Concurrency
    3.2.2 Determinacy
    3.2.3 Summary
  3.3 Code Generation
    3.3.1 Links and connections
    3.3.2 Communication
    3.3.3 TinyGUYS
    3.3.4 System initialization and start of execution
    3.3.5 Scheduling
    3.3.6 Memory usage
  3.4 Example
  3.5 Summary

4 Viptos
  4.1 Design
    4.1.1 Representation of nesC components
    4.1.2 Transformation of nesC components
    4.1.3 Generation of code for target deployment
    4.1.4 Generation of code for simulation
    4.1.5 Simulation of TinyOS in Viptos
  4.2 Performance Evaluation
    4.2.1 Comparison to TOSSIM
    4.2.2 Radio
  4.3 Summary

5 Metaprogramming for Wireless Sensor Networks
  5.1 Generative Programming and Metaprogramming
  5.2 Higher-order Functions, Actors, and Components
  5.3 Ptalon
    5.3.1 A simple example
    5.3.2 Reconfiguration in Ptalon
  5.4 Specifying WSN Applications Programmatically
    5.4.1 Motivation
    5.4.2 Small World
    5.4.3 Parameter Sweep
    5.4.4 Higher-order actors
    5.4.5 Discussion
  5.5 Summary

6 Related Work
  6.1 TinyGALS and galsC
    6.1.1 Non-blocking
    6.1.2 MPI
    6.1.3 Port-Based Objects
    6.1.4 Click
    6.1.5 Click and Ptolemy II
    6.1.6 Timed Multitasking
  6.2 Design, Simulation, and Deployment Environments
    6.2.1 Design and simulation environments
    6.2.2 TinyOS development and editing environments
    6.2.3 Programming and deployment environments
  6.3 Summary

7 Conclusion

Bibliography

List of Figures

1.1 Object-oriented design vs. actor-oriented design. Source: Edward A. Lee.
1.2 WSN landscape.
2.1 Sample nesC source code.
2.2 Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom).
2.3 XML representation of the Sinewave source.
3.1 Graphical representation of the SenseTag application.
3.2 Source code for the SenseTag application.
3.3 Source code for TimerActor and SenseActor.
3.4 Source code for the TimerC and TimerM components.
3.5 A single-output, multiple-input connection.
3.6 Two events are produced at the same time.
3.7 A single interrupt.
3.8 Active system state after one interrupt.
3.9 Active system state determined by adding the active system state after one non-interleaved interrupt.
3.10 One or more interrupts where actors have delayed output.
3.11 A self-loop actor triggered by an interrupt.
3.12 Directed acyclic graphs (DAGs) within actors.
3.13 Type checking example.
3.14 TinyGALS scheduling algorithm.
3.15 Code generation for the SenseTag application.
3.16 Sensor array for object detection and reporting.
3.17 Top-level, per-node view of the object detection application.
4.1 SenseToLeds application in Viptos.
4.2 Send and receive application in Viptos.
4.3 Generated MoML by nc2moml for TimerC.nc.
4.4 Generated MoML by ncapp2moml for SenseToLeds.nc.
4.5 TOSSIM scheduling algorithm.
4.6 Viptos version of TOSSIM scheduling algorithm.
4.8 Execution time of the SenseToLeds application as a function of the number of nodes. Each simulation ran for 300.0 virtual seconds.
4.9 Multi-hop routing in Viptos.
4.10 Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 120.0 virtual seconds.
5.1 Small World in Ptolemy II.
5.2 ParameterSweep version of Small World in Ptolemy II.
5.3 Modal model for changing parameter values of Small World model in Ptolemy II.
5.4 SDF model for changing parameter values of Small World model in Ptolemy II.
5.5 ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.
5.6 MultipleNodesMoML.ptln.
5.7 MultipleNodesMoML.xml.
5.8 PtalonActor in Ptolemy II.
5.9 Ptalon code for SmallWorld (SmallWorld.ptln).
5.10 Ptalon version of Small World in Ptolemy II.
5.11 Excerpt of MoML code for Ptalon version of Small World.
6.1 An example Click element. Source: Eddie Kohler.
6.2 A simple Click configuration with sequence diagram. Source: Eddie Kohler.
6.3 Flowchart for Click configuration shown in Figure 6.2. Source: Eddie Kohler.
6.4 A sensor network application.
6.5 Click vs. TinyGALS: a configuration for the application in Figure 6.4.
6.6 Pull processing across multiple nodes.

List of Tables

3.1 Summary of valid types of links in TinyGALS/galsC.
3.2 Generated code for ports in galsC.
3.3 Generated code for parameters (TinyGUYS) in galsC.
4.1 Representation scheme for nesC components in Viptos.
5.1 Comparison of number of bytes between different implementations of SmallWorld.

Acknowledgments

I would especially like to thank my advisor, Edward A. Lee, for his advice and support throughout all these years. I would like to thank Jie Liu for his invaluable guidance throughout my graduate career, without which this dissertation would not be possible. I would like to thank my dissertation committee, Edward, Eric Brewer, and Paul Wright, for their feedback. I would like to thank my undergraduate advisor, David B. Stewart, for introducing me to embedded systems and starting me on this path.

For their feedback, advice, and/or help with hardware and software, I would like to thank: Christopher Brooks, Adam Cataldo, Phoebus Chen, Prabal Dutta, David Gay, Jörn Janneck, Eddie Kohler, Jackie Leung, Judy Liebman, Xiaojun Liu, Andrew Mihal, Steve Neuendorffer, L. Parke, Roberto Passerone, John Reekie, Mary Stewart, Rob Szewczyk, Heather Taylor, Kamin Whitehouse, Yang Zhao, and the rest of the Ptolemy Group and NEST Group.

Finally, recipes courtesy of Food Network and Cooking Fresh from the Bay Area, Cook's Illustrated, Epicurious, and Greens Restaurant, as well as Backen köstlich wie noch nie.


Chapter 1

Introduction

In his classic software engineering text, The Mythical Man Month: Essays on Software Engineering [17], Frederick P. Brooks, Jr. discusses high-level languages as an essential tool for increasing programmer productivity:

    Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.

    What does a high-level language accomplish? It frees a program from much of its accidental complexity. An abstract program consists of conceptual constructs: operations, datatypes, sequences, and communication. The concrete machine program is concerned with bits, registers, conditions, branches, channels, disks, and such. To the extent that the high-level language embodies the constructs wanted in the abstract program and avoids all lower ones, it eliminates a whole level of complexity that was never inherent in the program at all.

The above passage was originally published in 1975. In the twentieth-anniversary edition (1995) of the text, Brooks elaborates further:

    Most past progress in software productivity has come from eliminating noninherent difficulties such as awkward machine languages and slow batch turnaround. There are not a lot more of these easy pickings. Radical progress is going to have to come from attacking the essential difficulties of fashioning complex conceptual constructs. The most obvious way to do this recognizes that programs are made up of conceptual chunks much larger than the individual high-level language statement—subroutines, or modules, or classes. If we can limit design and building so that we only do the putting together and parameterization of such chunks from prebuilt collections, we have radically raised the conceptual level, and eliminated the vast amounts of work and the copious opportunities for error that dwell at the individual statement level.

Although twelve years have passed since Brooks wrote the above passage, it still applies today to new high-level programming concepts.

This dissertation presents methods for raising the conceptual level of building wireless sensor network applications, using actor-oriented programming concepts. An application area where these concepts are particularly needed is that of wireless sensor networks.

1.1 Wireless Sensor Networks

Wireless sensor networks is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, health care, transportation, seismic and structural monitoring, and home automation [107]. Wireless sensor networks provide a way to create flexible, tetherless, automated data collection and monitoring systems.

A sensor node in a typical sensor network has a battery, a microprocessor, and a small amount of memory for signal processing and task scheduling. Each node is equipped with one or more sensing devices, such as sensors for visible or infrared light, acoustic microphone arrays, and/or video or still cameras, acceleration or vibration, humidity, changing magnetic field, pH, electrical resistance, or temperature. Each sensor node communicates wirelessly with a few other neighboring nodes within its radio communication range [107]. A wireless sensor network may also be augmented with a higher tier of more powerful, wired nodes with greater network capacity and computation power, as in the Tenet architecture [36]. Nodes in this higher tier are sometimes called masters [36] or microservers [67].

Unlike traditional networked systems, a sensor network is constrained by finite on-board battery power and limited network communication bandwidth. In addition, sensor networks are spatially aware and are more closely linked to geographic location and the physical environment than centralized systems.

Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process. Typical networked embedded system software development may require the design and implementation of device drivers, network stack protocols, scheduler services, application-level tasks, and partitioning of tasks across multiple nodes. Little or no integration exists among the tools necessary to create these software components, mostly because the interactions between the programming models are poorly understood. In addition, these tools typically have little infrastructure for building models and interactions that are not part of their original scope or software design paradigms.

1.2 Actor-Oriented Programming

Actor-oriented programing is a high-level programming concept that can increase software productivity, reliability, and simplicity. The actor model was originally proposed by Carl Hewitt, though the meaning of the term has evolved over time. A brief history of actor research up to 1993 is summarized by Agha [3] and excerpted and extended here.

Hewitt proposes actors as an approach to modeling intelligence as a society of communicating knowledge-based problem-solving experts. One can view each of the experts as a society that can be further decomposed until reaching the primitive actors of the system. Hewitt [46] showed that control structures can be represented as patterns of message passing between simple actors with a conditional construct but no local state. Gul Agha extended the notion of actor to include history-sensitive behavior necessary for shared, mutable data objects [1]. He intended actors to be used as a paradigm for exploiting parallelism on massively parallel architectures and as a suitable language for concurrency [2]. In his model, everything in the system is an actor that responds to messages. These actors are objects that interact in a purely local way by sending messages to one another. Hewitt and Agha view actors as a universal concept, and Agha assumes that each actor encapsulates a thread of control.

Edward A. Lee generalized the notion of actors and applied it to software design for concurrent systems [49]. Unlike Agha's actors, Lee's actors are not required to encapsulate a thread of control [60]. Lee distinguishes data tokens, which encapsulate data and do not interact with one another, from actors which exchange and process data [74]. Instead of object-oriented design, which emphasizes inheritance and procedural interfaces, he suggests the term actor-oriented design as a refactored software architecture, where instead of objects, software components are parameterized actors with ports. Ports and parameters define the interface of an actor. A port represents an interaction with other actors, but unlike a method, it does not have call-return semantics. The precise semantics depends on the model of computation.

This dissertation uses Lee's concept of actors. Like Neuendorffer [74], I view actor-oriented programming as an approach to system-level design. Actors are concurrent dataflow-oriented components that specify behavior abstractly without relying on low-level implementation constructs such as function calls, threads, or distributed computing infrastructure [74].

Actor-oriented programming and object-oriented programming are duals of each other, similar to Lauer and Needham's concept of the duality of message-oriented systems and procedure-oriented systems [58]. In traditional object-oriented programming, what flows through an object is sequential control. In other words, things happen to objects. In actor-oriented programming, what flows through an object is evolving data. In other words, actors make things happen (see Figure 1.1).

Lauer and Needham explain that though "no real system precisely agrees with either model in all respects," "most modern operating systems can be usefully classified using them." Some systems are implemented in a style which is very close in spirit to one model or the other. Other systems are able to be partitioned into subsystems, each of which corresponds to one of the models, and which are coupled by explicit interface mechanisms. They conclude that "the considerations for choosing which model to adopt in a given system are not found in the applications which that system is meant to support." Instead, they lie in the substrate upon which the system is built, i.e., the machine architecture and/or programming environment on which the process and synchronization facilities are implemented. "The factors and design decisions of the system upon which the process and synchronization facilities are built are the things which make one or the other style more attractive or more tedious." Other constraints are those "imposed by the machine architecture and hardware," such as the "organization of real and virtual memory," "the size of the stateword which must be saved on every context switch," "the arrangement of peripheral devices and interrupts," "the ease with which scheduling and dispatching can be done," and "the architecture of the instruction set and the programmable registers." They suggest that a message-oriented (actor-oriented) style is best when it is easy to allocate message blocks and queue messages but difficult to build a protected procedure call mechanism.

Actor-oriented programming and other message-oriented systems are well-suited to embedded systems and other highly concurrent systems, where a variety of peripheral devices and interrupts must be accessed frequently, with a fast response rate, and memory space is at a premium (and memory protection is often not provided in the underlying infrastructure). Actor-oriented programming can be combined with object-oriented programming and other procedure-oriented systems in a structured way to achieve the best of both worlds.
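To make the duality concrete, the toy C sketch below (an invented illustration, not taken from the dissertation or from Lauer and Needham; all names are hypothetical) contrasts the two styles: a procedure-oriented reading of "sample, then display," and a message-oriented version in which a producer actor enqueues data that a consumer actor later processes.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for hardware access and output. */
static uint16_t read_sensor_hw(void) { return 42; }
static void display(uint16_t v) { printf("%u\n", v); }

/* Procedure-oriented style: sequential control flows through the
 * component via call-return; things happen to objects. */
static void sample_procedure_oriented(void) {
    uint16_t v = read_sensor_hw();  /* caller blocks until the value returns */
    display(v);
}

/* Message-oriented (actor-oriented) style: the producer enqueues a
 * message and continues; the consumer runs when it is scheduled.
 * Data, not control, flows between the two components. */
static uint16_t queue[8];
static unsigned head = 0, tail = 0;

static void sensor_actor(void) {    /* actors make things happen */
    queue[tail++ % 8] = read_sensor_hw();
}

static void display_actor(void) {   /* triggered when messages are pending */
    while (head != tail)
        display(queue[head++ % 8]);
}

int main(void) {
    sample_procedure_oriented();
    sensor_actor();                  /* decoupled: no blocking call-return */
    display_actor();
    return 0;
}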

[Figure 1.1: Object-oriented design vs. actor-oriented design. Source: Edward A. Lee. The established, object-oriented view: a class (name, data, methods) through which sequential control flows via call and return; things happen to objects. The alternative, actor-oriented view: an actor (name, data/state, parameters, ports) through which evolving data flows from input data to output data; actors make things happen.]

1.3 Actor-Oriented Programming for Wireless Sensor Networks

Wireless sensor networks are highly concurrent systems, with concurrency at many different levels. In this dissertation, I advocate using an actor-oriented approach to designing, programming, generating, and simulating wireless sensor network applications.

Existing approaches to building wireless sensor networks can be divided into four layers, as shown in the vertical axis of Figure 1.2. The operating system approach forms the bottom-most layer, whose focus is to provide basic programming abstractions to allow a program to run on the sensor node hardware. Examples include TinyOS [48], Linux, Contiki [27], MantisOS [13], NutOS [11], SOS [40], and .NET. Instruction-level emulation lies below the operating system approach and is not shown in the figure.1

The node-centric approach forms the next layer above the operating system layer, which begins to include programming abstractions that allow the user to address multiple nodes. Software in the node-centric layer runs on a single node on top of the operating system. Examples include Maté [64], SNACK [37], Token Machines [75], and the Object State Model [53].

The middleware approach forms the third layer, in which more abstract programming models are used, which makes programming easier for the user.

1 Many of the examples shown in Figure 1.2 rely on either simulation with a combination of TOSSIM and gdb, or emulation for the Atmel AVR microcontroller instruction set.

Figure 1.2: WSN landscape.

Examples include directed diffusion [50], abstract regions [101], Hood [102], IDSQ (information-driven sensor querying) [108], DHT (Distributed Hash Table), and embedded web services [67].

The macroprogramming approach forms the top layer, which allows the user to create an application by programming the wireless sensor network as a whole, rather than programming individual nodes separately. Macroprogramming is also known as programming the ensemble. Examples include TinyDB [70], Semantic Streams [103], Regiment [76], Kairos [38], Agilla [30], DSN (Declarative Sensornet) [94], and actorNet [56].

Simulation tools that fall somewhere between the middleware and node-centric layers include ns-2, OPNET, OMNeT++, J-Sim, Prowler, SensorSim, and Em*.2 These tools, with the exception of Em*, are usually stand-alone and not designed for hardware deployment. PIECES (Programming and Interaction Environment for Collaborative Embedded Systems) [68] is a higher-level simulation tool implemented in a mixed Java-Matlab environment, though it does not translate easily to actual deployment. Other tools are programming models or languages that focus solely on design, and not simulation, including UML (Unified Modeling Language). Many of the tools shown in Figure 1.2 rely on the TOSSIM TinyOS simulator for operating system-level simulation and testing, and TinyViz for visualization.

Unfortunately, wireless sensor networks are often deployed in resource-constrained environments. These constraints dictate that sensor network problems are best approached in a holistic manner, by jointly considering the physical, networking, and application layers and making major design trade-offs across the layers [107]. However, existing development tools are disjoint and difficult to integrate. The process of building a wireless sensor network can be divided into three stages of development: design, simulation, and deployment. Most existing work focuses on only one stage of development, rather than an integrated approach, from design to simulation and testing, and to deployment.

In this dissertation, actor-oriented programming provides a common high-level language that unifies the programming interface across the four application layers and between the different stages of development. The developer can choose the model of computation, or communication model between actors, that best fits the target application. The goal of this work is to create integrated tools and programming models for networked embedded application developers to model and simulate their algorithms and quickly transition to testing their software on real hardware in the field, while allowing them to use the model of computation most appropriate for each part of the system.

2 Chapter 6 contains a more detailed discussion of these simulation tools.

Chapter 2 introduces the reader to TinyOS, a runtime environment for wireless sensor nodes. I use TinyOS as an interface for the bottom-most layer, the operating system approach. Chapter 2 also introduces Ptolemy II, a Java-based software framework with a graphical user interface, which allows construction of actor-oriented models of computation. Chapter 3 describes an actor-oriented, node-centric model called TinyGALS for programming individual sensor nodes. Chapter 4 introduces an actor-oriented modeling and design environment for wireless sensor networks. This tool, called Viptos, encompasses multiple layers and lies above the operating system approach. Chapter 5 describes various techniques for using higher-order actors to generate multiple simulation scenarios for design and test of wireless sensor network applications. Chapter 6 discusses related work, and Chapter 7 concludes this dissertation.

1.4 Previously Published Work

Some of the material in this dissertation was previously published in technical reports or conference proceedings. A summary of how these papers have been incorporated into this dissertation follows.

TinyGALS: A Programming Model for Event-Driven Embedded Systems [23] was the first paper published on this topic, and it was extended into a master's report, Design and Implementation of TinyGALS: A Programming Model for Event-Driven Embedded Systems [20], which was later condensed, revised, and published under the same title [25]. The language implemented for the programming model described in these two papers was redesigned and reimplemented as part of the nesC compiler and described in galsC: A Language for Event-Driven Embedded Systems [24]. These four publications are combined and updated to form the basis of Chapter 3 and part of Chapter 6.

Viptos: A Graphical Development and Simulation Environment for TinyOS-based Wireless Sensor Networks [22] was the first paper published on this topic, and it was revised, updated, and extended as Joint Modeling and Design of Wireless Networks and Sensor Node Software [21]. These two publications are combined and updated to form the basis of Chapter 4 and part of Chapter 6.


Chapter 2

Background
In this chapter, I present TinyOS, one of the most popular software toolsuites in the wireless sensor network research and development community. I also present Ptolemy II, the current version of one of the most influential actor-oriented design frameworks. Together, these tools form the background knowledge required for understanding the implementation of the tools and techniques presented later in this dissertation.

2.1 TinyOS

TinyOS [47, 48] is an open-source runtime environment designed for sensor network nodes known as motes. TinyOS has a large user base—over 500 research groups and companies use TinyOS on the Berkeley/Crossbow motes. It has been ported to over a dozen platforms and numerous sensor boards, and new releases see over 10,000 downloads. TinyOS differs from traditional operating system models in that events drive the behavior of the system. Using this type of execution, battery-operated nodes can preserve energy by entering a sleep mode when no interesting events are happening. According to the TinyOS website [95], "TinyOS's event-driven execution model enables fine-grained power management yet allows the scheduling flexibility made necessary by the unpredictable nature of wireless communication and physical world interfaces." In this section, I present the details of the nesC syntax and the TinyOS execution model. Note that in this dissertation, I focus on TinyOS 1.x. TinyOS 2.x is a rewritten implementation of TinyOS 1.x that provides users with a cleaner interface. All material presented in this dissertation can easily be transferred to TinyOS 2.x.


2.1.1 NesC syntax

TinyOS provides a library of reusable software components written in nesC, an extension to the C programming language. A TinyOS application connects these components using a wiring specification that is independent of the component implementation. Some TinyOS components are thin wrappers around hardware, though most are software modules which process data. The distinction is invisible to the developer. Decomposing different OS services into separate components allows unused services to be excluded from the application. Figure 2.1(a) shows a TinyOS program called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. The TinyOS component library includes those “wired” together in SenseToLeds: Main, SenseToInt (shown in Figure 2.1(b)), IntToLeds, TimerC, and DemoSensorC. A nesC component may expose a set of interfaces. Each interface is a set of methods. A method may be either an event or a command, where an event is usually called “upwards” from a hardware interrupt handler, and a command is usually called “downwards” from the application code. A nesC component provides methods that it implements, and uses methods that are implemented by other components. A nesC component is either a configuration that contains a wiring of other components, or a module that contains an implementation of its interface methods. NesC interfaces may also be parameterized to provide multiple instances of the same interface. In Figure 2.1(a), SenseToLeds is a configuration that exposes no interface methods. The TimerC.Timer interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC. If another component connects to the TimerC.Timer interface, it connects to a different instance. Each timer can be initialized with different periods.
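For concreteness, an interface such as Timer simply declares the set of commands (called "downwards") and events (signaled "upwards") that make up the contract between provider and user. The following is a lightly simplified sketch of the TinyOS 1.x Timer interface; the exact library source may differ in details.

interface Timer {
  // Commands are called "downwards" by the application code.
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  // Events are signaled "upwards," e.g., from an interrupt handler.
  event result_t fired();
}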

2.1.2 TinyOS execution model

TinyOS contains a single thread of control managed by the scheduler, which may be interrupted by hardware events. Component methods encapsulate hardware interrupt handlers. These methods may transfer the flow of control to another component by calling a uses method. Computation performed in a sequence of method calls must be short, or it may delay the processing of other events. There are two sources of concurrency in TinyOS: tasks and events. Tasks are a deferred computation mechanism. A long-running computation can be encapsulated in a task, which a component method posts to the scheduler task queue. The post operation returns immediately, deferring the computation until the scheduler executes the task later.


configuration SenseToLeds {
}
implementation {
  components Main, SenseToInt, IntToLeds, TimerC, DemoSensorC as Sensor;
  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

(a)

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
}
implementation {
  ...
}

(b)

Figure 2.1: Sample nesC source code.

The TinyOS scheduler processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks run to completion and do not preempt each other. Events signify either an event from the environment or the completion of a split-phase operation. Split-phase operations are long-latency operations where operation request and completion are separate functions. Commands are typically requests to execute an operation. If the operation is split-phase, the command returns immediately and completion is signaled later with an event; non-split-phase operations do not have completion events. Events also run to completion, but they may preempt the execution of a task or another event. Resource contention is typically handled through explicit rejection of concurrent requests. Because tasks execute non-preemptively, TinyOS has no blocking operations. TinyOS execution is ultimately driven by events representing hardware interrupts.
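To make the task and split-phase mechanisms concrete, the fragment below sketches how a component might defer work from an interrupt-time event to a task. This is a hypothetical illustration in TinyOS 1.x style, not code from the TinyOS library; the module and variable names are invented for this example.

module SampleProcM {
  provides interface StdControl;
  uses interface ADC;
  uses interface IntOutput;
}
implementation {
  uint16_t lastSample;

  // Runs later from the FIFO task queue; tasks never preempt each other.
  task void processSample() {
    call IntOutput.output(lastSample >> 7);
  }

  // Interrupt-time completion event of a split-phase ADC read: keep it
  // short and defer the real work by posting a task.
  async event result_t ADC.dataReady(uint16_t data) {
    lastSample = data;
    post processSample();  // returns immediately; computation is deferred
    return SUCCESS;
  }

  event result_t IntOutput.outputComplete(result_t success) {
    return SUCCESS;
  }

  command result_t StdControl.init()  { return SUCCESS; }
  command result_t StdControl.start() { return call ADC.getData(); }
  command result_t StdControl.stop()  { return SUCCESS; }
}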

2.2 Ptolemy II

Ptolemy II, a modeling and design framework for concurrent systems, and VisualSense, an extension to Ptolemy II that supports modeling and simulation of wireless sensor networks, form the basis of the tools described in this dissertation. In this section, I excerpt and summarize information from Hylands, et al. [49] and Baldwin, et al. [8].

The Ptolemy Project conducts foundational and applied research in software-based design techniques for embedded systems.

The focus is on embedded systems, particularly those that mix technologies including, for example, analog and digital electronics, hardware and software, and electronics and mechanical devices. The focus is also on systems that are complex in the sense that they mix widely different operations, such as networking, signal processing, feedback control, mode changes, sequential decision making, and user interfaces. The Ptolemy Project studies heterogeneous modeling, simulation, and design of concurrent systems. Ptolemy II is the current software infrastructure of the Ptolemy Project and is published freely in open-source form. It serves as a laboratory for experimenting with design techniques.

Most, but not all, models of computation in Ptolemy II support actor-oriented design. This contrasts with, and complements, object-oriented design by emphasizing concurrency and communication between components. Components called actors execute and communicate with other actors in a model, as illustrated in Figure 2.2.1 Actors, like objects, have a well-defined component interface. This interface abstracts the internal state and behavior of an actor, and restricts how an actor interacts with its environment. The interface includes ports that represent points of communication for an actor, and parameters which are used to configure the operation of an actor. Often, but not always, parameter values are part of the a priori configuration of an actor and do not change when a model is executed. The "port/parameters" shown in Figure 2.2 function as both ports and parameters.

Executable models are constructed under a model of computation, which is the set of the "laws of physics" that govern the interaction of components in the model. If a model describes a mechanical system, then the model of computation may literally be the laws of physics. More commonly, however, the model of computation is a set of rules that are more abstract. A set of rules that govern the interaction of components is called the semantics of the model of computation. A model of computation may have more than one semantics, in that there might be distinct sets of rules that impose identical constraints on behavior. A director, which is a component specific to the model of computation used, controls the execution of a model.

Whereas with object-oriented design, components interact primarily by transferring control through method calls, in actor-oriented design, they interact by sending messages through channels.2 Central to actor-oriented design are the communication channels that pass data from one port to another according to some messaging scheme. The use of channels to mediate communication implies that actors interact only with the channels to which they are connected and not directly with other actors.

1 These components are not the same as TinyOS/nesC components, though Chapter 4 explores the relationship between Ptolemy II components and TinyOS/nesC components.
2 These channels may be wired or wireless. The next section discusses wireless channels in more detail.

[Figure 2.2: Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom). Labels in the figure: annotation, director, port/parameters, external port, model, hierarchical abstraction.]

Taken together, the concepts of models, actors, ports, parameters, and channels describe the abstract syntax of actor-oriented design. This syntax can be represented concretely in several ways: graphically, such as in a bubble-and-arc or block-and-arrow diagram, in XML (Extensible Markup Language), such as in Figure 2.3, or in a program designed to a specific API (Application Programming Interface), such as in SystemC. Ptolemy II offers all three alternatives.

Models, like actors, may also define an external interface, such as in Figure 2.2. The interface of a model is called its hierarchical abstraction. This interface consists of external ports and external parameters, which are distinct from the ports and parameters of the individual actors in the model. The external ports of a model can be connected by channels to other external ports of the model or to the ports of actors that compose the model. External parameters of a model can be used to determine the values of the parameters of actors inside the model.

It is important to realize that the syntactic structure of an actor-oriented design says little about its semantics. The semantics is largely orthogonal to the syntax and is determined by a model of computation. The model of computation might give operational rules for executing a model. These rules determine when actors perform internal computation, update their internal state, and perform external communication. The model of computation also defines the nature of communication between components.

2.2.1 VisualSense

VisualSense [8] is a modeling and simulation framework for wireless sensor networks that builds on Ptolemy II. This framework supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. The software architecture consists of a set of base classes for defining wireless channels and sensor nodes, a library of subclasses that provide specific wireless channel models and node models, and an extensible visualization framework. Custom nodes can be defined by subclassing the base classes and defining the behavior in Java or by creating composite models using any of several Ptolemy II modeling environments. Custom wireless channels can be defined by subclassing the WirelessChannel base class and by attaching functionality defined in Ptolemy II models.

To support this style of modeling, VisualSense uses a specialization of the discrete-event (DE) domain of Ptolemy II. The DE domain of Ptolemy II [15] provides execution semantics where interaction between components occurs via events with time stamps. A sophisticated calendar-queue scheduler is used to efficiently process events in chronological order.

0"/> <property name="phase" class="ptolemy.actor.actor.output" relation="relation2"/> <link port="AddSubtract.PortParameter" value="440.data.TypedIORelation"/> <relation name="relation" class="ptolemy.input" relation="relation4"/> <link port="TrigFunction.parameters.output" relation="relation4"/> </class> Figure 2.lib.TypedCompositeActor"> <property name="samplingFrequency" class="ptolemy.parameters.actor.SDFDirector"/> <property name="frequency" class="ptolemy.TypedIORelation"/> <relation name="relation4" class="ptolemy.TypedIOPort"> <property name="output"/> </port> <entity name="Ramp" class="ptolemy.0"/> <property name="SDF Director" class="ptolemy.domains.TrigFunction"/> <entity name="Const" class="ptolemy.output" relation="relation"/> <link port="TrigFunction.Const"> <property name="value" class="ptolemy.sdf.Parameter" value="8000.parameters.actor.actor.actor.actor.PortParameter" value="0.PortParameter" value="(frequency*2*PI/samplingFrequency)"/> </entity> <entity name="TrigFunction" class="ptolemy.actor.0"/> <port name="frequency" class="ptolemy.actor.kernel.data.lib.data.output" relation="relation3"/> <link port="Const.0" standalone="no"?> <!DOCTYPE class PUBLIC "-//UC Berkeley//DTD MoML 1//EN" "http://ptolemy.Parameter" value="0"/> <property name="init" class="ptolemy.parameters.edu/xml/dtd/MoML_1.expr.lib.plus" relation="relation2"/> <link port="AddSubtract.actor.Ramp"> <property name="firingCountLimit" class="ptolemy. .actor.actor.actor.Parameter" value="0"/> <property name="step" class="ptolemy.TypedIORelation"/> <link port="output" relation="relation3"/> <link port="Ramp.parameters.data.berkeley.ParameterPort"> <property name="input"/> </port> <port name="phase" class="ptolemy.lib.expr.Parameter" value="phase"/> </entity> <entity name="AddSubtract" class="ptolemy.3: XML representation of the Sinewave source.TypedIORelation"/> <relation name="relation2" class="ptolemy.actor.expr.eecs.actor.expr.15 <?xml version="1.ParameterPort"> <property name="input"/> </port> <port name="output" class="ptolemy.dtd"> <class name="Sinewave" extends="ptolemy.AddSubtract"/> <relation name="relation3" class="ptolemy.plus" relation="relation"/> <link port="AddSubtract.

The DE domain has a formal semantics that ensures determinate execution of deterministic models [59], although stochastic models for Monte Carlo simulation are also well supported. The precision in the semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling frameworks. The results are predictable and consistent.

The most straightforward uses of the DE domain in Ptolemy II are similar to other discrete-event modeling frameworks such as ns [77], OPNET [78], and VHDL. Components (which are called actors) have ports, and the ports are interconnected to model the communication topology. Ptolemy II provides a visual editor for constructing DE models as block diagrams.

VisualSense is a subclass of the DE modeling framework in Ptolemy II that is specifically intended to model sensor networks. In particular, it removes the need for explicit connections between ports, and instead associates ports with wireless channels by name (e.g., "RadioChannel"). Connectivity can then be determined on the basis of the physical locations of the components. The algorithm for determining connectivity is itself encapsulated in a component as a wireless channel model, and hence can be developed by the model builder. In VisualSense, sensor nodes themselves can be modeled in Java, or more interestingly, using more conventional DE models (as block diagrams) or other Ptolemy II models (such as dataflow models, finite-state machines, or continuous-time models).

The DE domain in Ptolemy II supports models with dynamically changing interconnection topologies. Changes in connectivity are treated as mutations of the model structure. The software is carefully architected to support multithreaded access to this mutation capability. Thus, one thread can be executing a simulation of the model while another changes the structure of the model, for example by adding, deleting, or moving actors, or changing the connectivity between actors.

Ptolemy II and VisualSense permit customized icons for components in a model. For example, a sensor node can have as an icon a translucent circle that represents (roughly or exactly) its transmission range. Visual depictions of systems can help to offset the increased complexity that is introduced by heterogeneous modeling, and to lend insight into the behavior of models.

Another feature of Ptolemy II and VisualSense is a sophisticated type system [105]. In this type system, actors, parameters, and ports can all impose constraints on types, and a type resolution algorithm identifies the most specific types that satisfy all the constraints. By default, the type system in Ptolemy II includes a type constraint for each connection in a block diagram. However, in wireless models, these connections do not represent all the type constraints. In particular, every actor that sends data to a wireless channel requires that every recipient from that channel be able to accept that data type.

VisualSense imposes this constraint in the WirelessChannel base class, so unless a particular model builder needs more sophisticated constraints, the model builder does not need to specify particular data types in the model. They are inferred from the ultimate sources of the data and propagated throughout the model.

2.3 Summary

This chapter summarized background information on TinyOS and Ptolemy II, so that the reader can understand the underlying implementation of the tools and techniques presented in the following chapters.


Chapter 3

TinyGALS and galsC

Networked embedded software designers face issues such as managing computation as well as communication, handling irregular interrupts, avoiding concurrency errors, maintaining consistent state across multiple tasks, and conserving power. Traditional technologies for developing embedded software, inherited from writing device drivers and optimizing assembly code to achieve a fast response and small memory footprint, do not scale with the growing complexity of today's applications. These tasks become even more challenging when the resources of the hardware platforms are too limited, in terms of CPU speed and memory size, to host a full-scale modern operating system. Despite the fact that "high-level" languages such as C and C++ have recently replaced assembly language as the dominant embedded software programming languages, most of these high-level languages are designed for writing sequential programs to run on an operating system and fail to handle concurrency intrinsically.

Event-driven embedded software is similar to hardware, where conceptually concurrent components are activated by incoming signals (or events). Event-driven execution is particularly suitable for untethered devices such as sensor network nodes, since a node can go into a sleep mode to preserve energy when no interesting events are happening. However, for many networked embedded systems, there is a fundamental gap between this event-driven execution model and sequential programming languages. The TinyGALS (Globally Asynchronous and Locally Synchronous) programming model [23] aims to fill this gap by providing language constructs to systematically build concurrent tasks called actors. At the application level, these actors communicate with each other asynchronously via message passing. Within each actor, components communicate synchronously via method calls, as in most imperative languages.

The terms "synchronous," "asynchronous," and "globally asynchronous, locally synchronous (GALS)" mean different things to different communities, thus causing confusion. The circuit and processor design communities use these terms for synchronous and asynchronous circuits, where synchronous refers to circuits that are driven by a common clock [51]. In the system modeling community, synchronous often refers to computational steps and communication (propagation of computed signal values) that take no time (or, in practice, very little time compared to the intervals between successive arrivals of input signals). Steps do not take infinite time. GALS then refers to a modeling paradigm that uses events and handshaking to integrate subsystems that share a common tick (an abstract notion of an instant in time) [10]. The TinyGALS notion of synchronous and asynchronous, however, is consistent with the usage of these terms in distributed programming paradigms [72]. In this programming model, at a high level, synchronous means that the software flow of control transfers immediately to another component and the calling code blocks awaiting return; control eventually returns to the calling code. Asynchronous means that the software flow of control does not transfer immediately to another component; execution of the other component is decoupled. Thus, the TinyGALS programming model is globally asynchronous and locally synchronous in terms of transfer of the flow of control. In order to incorporate shared variable semantics where only the latest value matters, a set of guarded yet synchronous variables (called TinyGUYS) is provided at the system level for actors to exchange global information "lazily." Access to these variables is thread-safe, yet components can quickly read their values.

galsC [24, 25] is a language that implements the TinyGALS programming model. galsC takes advantage of the nesC specification for TinyOS 1.x. TinyOS/nesC components provide an interface abstraction that is consistent with synchronous communication via method calls. However, concurrent tasks in TinyOS are not exposed as part of the galsC component interface. Lack of explicit management of concurrency forces TinyOS component developers to manage concurrency by themselves (locking and unlocking semaphores), which makes TinyOS applications difficult to develop. The galsC language provides basic concurrency constructs. This language has a type system that spans synchronous and asynchronous communication boundaries. With galsC, application developers have precise control over the concurrency in the system, and they can develop software components without the burden of thinking about multiple threads. The galsC compiler generates executable code, including an application-specific operating system scheduler, from high-level specifications. This generative approach allows further analysis of concurrency problems, such as race conditions. Automatically generated code also reduces implementation and debugging time, since the developer does not need to reimplement standard constructs (e.g., communication ports, queues, functions, and guards on variables).
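The fragment below sketches the idea behind TinyGUYS guarded global variables. It is a schematic illustration in nesC-flavored C with invented names; it is not the actual code emitted by the galsC compiler, which is described later in this chapter (Section 3.3). Readers see a consistent "latest" value cheaply, while writes are buffered and committed at a safe point by the scheduler.

uint16_t count_current = 0;  // latest committed value, seen by all readers
uint16_t count_buffered;     // pending write
bool count_dirty = FALSE;

// Fast read of the latest committed value.
uint16_t PARAM_GET_count() {
  return count_current;
}

// Buffered write; the new value is not visible until committed.
void PARAM_PUT_count(uint16_t v) {
  atomic {
    count_buffered = v;
    count_dirty = TRUE;
  }
}

// Invoked by the scheduler at a safe point, e.g., between tasks,
// so that no reader ever observes a half-updated value.
void PARAM_commit_count() {
  atomic {
    if (count_dirty) {
      count_current = count_buffered;
      count_dirty = FALSE;
    }
  }
}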

debugging time, since the developer does not need to reimplement standard constructs (e.g., communication ports, queues, functions, and guards on variables). In a reactive, event-driven system, most of the processor time is spent waiting for an external trigger or event, so a reasonably small amount of additional code to enhance software modularity will not greatly affect the performance of the system. At the same time, the TinyGALS/galsC framework can greatly improve software productivity and encourage component reuse.

The design of TinyGALS is influenced by the trend of introducing formal concurrency models in embedded software. The POLIS codesign approach [7] uses an event-driven model for both hardware and software execution. Various dataflow models [73] use FIFO queues to separate flow of control. The port-based object (PBO) model [92] has a global shared variable space mediating component interaction. To some extent, synchronous languages try to compile away concurrent executions based on the synchronous (zero-time execution) assumption [39]. TinyGALS is closer to system-level hardware/software codesign languages, such as SystemC [12] and VCC [57], than to embedded software languages such as nesC, in that it allows designers to directly control the concurrent execution and the sizes of buffers between asynchronous actors. The TinyGALS approach differs from coordination models like those discussed above: when it is not possible to compile away concurrency, it uses a thread-safe global data space to store messages that do not trigger reactions. Components in the TinyGALS model are entirely sequential, and they are easy to develop and backwards compatible with most legacy software. TinyGALS programs do not rely on the existence of an operating system; instead, the galsC compiler generates the scheduling framework as part of the application. The galsC compiler and toolsuite is built on the nesC 1.1 compiler and toolsuite for the wireless sensor network nodes known as the Berkeley motes.

The remainder of this chapter is organized as follows. Section 3.1 describes the TinyGALS programming model and galsC language. Section 3.2 discusses concurrency and determinacy issues in TinyGALS programs. Section 3.3 explains a code generation technique based on the two-level execution hierarchy and a system-level scheduler. Section 3.4 describes a sample application implemented in galsC. Section 3.5 summarizes this chapter.

3.1 The TinyGALS Programming Model and galsC Language

This section uses a simple sensing application, shown in Figure 3.1, to illustrate the TinyGALS programming model and galsC syntax and semantics. In this example, a hardware clock triggers the system to update a time tick counter. A downsampled clock signal triggers the system to read the light intensity level from a photoresistor at a lower rate. Reading the sensor may take time. The system tags the resulting sensor value with the latest value of the counter and sends it downstream for further processing.

Section 3.1.1 introduces the basic constructs in the TinyGALS programming model and the syntax of the galsC programming language. Section 3.1.2 explains the semantics of TinyGALS and galsC. Section 3.1.3 describes valid links in TinyGALS/galsC, and Section 3.1.4 discusses type inference and checking in galsC.

[Figure 3.1: Graphical representation of the SenseTag application. TimerActor (components Counter, Trigger, and TimerC, with an actorControl section exporting StdControl interfaces) writes the count parameter (uint16_t count = 0) via Counter's IntOutput.output and sends trigger events through a queue of size 64 to SenseActor (components SenseToInt and Photo, connected via the ADC and ADCControl interfaces), whose output port carries SenseToInt's IntOutput.output concatenated with count.]

3.1.1 Programming constructs and language syntax

There are three basic constructs in TinyGALS and galsC: components, actors, and applications. This section presents, for each construct, the abstract TinyGALS notation as well as the concrete galsC syntax.

TinyGALS Components

Components are the most basic elements of a TinyGALS program. A TinyGALS component C is a 5-tuple:

    C = (PROVIDES_C, USES_C, COMPONENTS_C, LINKS_C, V_C),    (3.1)

where PROVIDES_C and USES_C are the sets of methods that constitute the interface of C; COMPONENTS_C is the set of components that form C; LINKS_C is the set of relations among the interface methods of the components (including that of C); and V_C is the set of internal variables that carry the state of C from one invocation of an interface method of C to another.

Syntactically, a component is like an object in most object-oriented programming languages, but with explicit definition of the external methods it uses. A component is defined in two parts: an interface definition and an implementation. A component that provides an interface (in PROVIDES_C) contains an implementation of the interface method(s), whereas a component that uses, or requires, an interface (in USES_C) expects another component to implement the interface.

Components in galsC are written in the nesC programming language. A component is either a module or a configuration. The implementation of a module contains executable code, whereas the implementation of a configuration only contains a list of components and the links between their interface methods. Figure 3.2 shows the source code for the TimerC configuration used in Figure 3.1; TimerC contains a module named TimerM that implements the provided interface methods.

configuration TimerC {
  provides interface Timer[uint8_t id];
  provides interface StdControl;
}
implementation {
  components TimerM, ClockC;
  Timer = TimerM.Timer;
  StdControl = TimerM.StdControl;
  TimerM.Clock -> ClockC.Clock;
}

module TimerM {
  provides interface Timer[uint8_t id];
  provides interface StdControl;
  uses interface Clock;
}
implementation {
  uint32_t mState;    // Each bit represents a timer state.
  uint8_t mScale;
  uint32_t mInterval;
  ...
  command result_t StdControl.init() {
    mState = 0;
    mScale = 3;
    mInterval = 230;
    return call Clock.setRate(mInterval, mScale);
  }
  ...
}

Figure 3.2: Source code for the TimerC and TimerM components.

Using the tuple notation given in Equation 3.1, the TimerC component can be defined as¹

    C = (PROVIDES_C = {Timer, StdControl},
         USES_C = ∅,
         COMPONENTS_C = {TimerM, ClockC},
         LINKS_C = {(Timer, TimerM.Timer), (StdControl, TimerM.StdControl),
                    (TimerM.Clock, ClockC.Clock)},
         V_C = ∅).

¹The interface keyword in nesC refers to a set of methods. For brevity, the TinyGALS notation used in this chapter only lists the name of a given interface, rather than the individual methods in the interface. So, the Timer interface refers to the set containing Timer.start(char, uint32_t), Timer.stop(), and Timer.fired(), and the StdControl interface refers to the set containing StdControl.init(), StdControl.start(), and StdControl.stop(). nesC allows the shorthand notation of linking two interfaces of the same type, which means that each of the individual methods is linked.

TinyGALS Actors

Actors are the major building blocks of a TinyGALS program, encompassing one or more TinyGALS components. The interface of an actor consists of a set of input and/or output ports and a set of parameters. A TinyGALS actor R is a 6-tuple:

    R = (INPORTS_R, OUTPORTS_R, PARAMETERS_R, COMPONENTS_R, LINKS_R, INIT_R),    (3.2)

where INPORTS_R and OUTPORTS_R are the sets that specify the input ports and output ports of R, respectively; PARAMETERS_R is the set of external variables, that is, global variables that can be both read and written²; COMPONENTS_R is the set of components that form the actor; LINKS_R is the set that specifies the relations among the interface methods of the components (PROVIDES_C and USES_C in Equation 3.1), the input and output ports of R (INPORTS_R and OUTPORTS_R), and the parameters of R (PARAMETERS_R); and INIT_R is the list of initialization methods that belong to the components in COMPONENTS_R.

LINKS_R of an actor R is similar to LINKS_C of a component C. The only difference is that the relations in LINKS_R may also include actor ports and parameters. A link can join a component interface method to one of four types of endpoints: (1) another component interface method, (2) a port, (3) a parameter, or (4) some combination of these.³

Actors are different from components. INPORTS_R and OUTPORTS_R of an actor R are not the same as PROVIDES_C and USES_C of a component C: PROVIDES_C and USES_C are executable, whereas INPORTS_R and OUTPORTS_R are not. However, PROVIDES_C and USES_C refer to component methods and may be linked to actor ports in INPORTS_R and OUTPORTS_R.

The galsC syntax for an actor is similar to that of a galsC (or nesC) configuration component; an actor implementation contains a list of components and links. An actor may also contain an actorControl section, which exports the StdControl interface of any of its components to the application level for system initialization (e.g., for initializing hardware components).

Figure 3.3 shows the source code for TimerActor, which contains the TimerC component whose source code was shown in Figure 3.2. TimerActor has an output port named trigger, which is linked to a component interface method. A different component interface method, Counter.IntOutput.output, writes to the count parameter. TimerActor exports the StdControl interfaces of Counter and Trigger for system initialization. Figure 3.3 also shows the source code for SenseActor.

²Refer to the information on TinyGUYS in Section 3.1.1.
³Sections 3.1.2 and 3.1.3 describe links in more detail, including which configurations of components within an actor are valid.

Its output port output is connected to the concatenation of the component interface method SenseToInt.IntOutput.output and the value read from the count parameter. Figure 3.1 shows a graphical representation of the actors. The semantics of the execution of components within an actor are discussed in more detail in Section 3.1.2.

Using the tuple notation given in Equation 3.2, TimerActor can be defined as⁴

    R = (INPORTS_R = ∅,
         OUTPORTS_R = {trigger},
         PARAMETERS_R = {count},
         COMPONENTS_R = {Counter, Trigger, TimerC},
         LINKS_R = {(Counter.Timer, TimerC.Timer[0]),
                    (Counter.IntOutput.output, count),
                    (Trigger.Timer, TimerC.Timer[1]),
                    (Trigger.TimerControl, TimerC.StdControl),
                    (Trigger.trigger, trigger)},
         INIT_R = [Counter.StdControl, Trigger.StdControl]).

⁴The unique() function in nesC is a constant function that evaluates to a constant at compile time. If the program contains n calls to unique() with the same identifier string (in this example, "Timer"), each call returns a different unsigned integer in the range {0, ..., n − 1}.

actor TimerActor {
  port {
    out trigger;
  }
  parameter {
    uint16_t count;
  }
} implementation {
  components Counter, Trigger, TimerC;
  Counter.Timer -> TimerC.Timer[unique("Timer")];
  Counter.IntOutput.output -> count;
  Trigger.Timer -> TimerC.Timer[unique("Timer")];
  Trigger.TimerControl -> TimerC.StdControl;
  Trigger.trigger -> trigger;
  actorControl {
    Counter.StdControl;
    Trigger.StdControl;
  }
}

actor SenseActor {
  port {
    in trigger;
    out output;
  }
  parameter {
    uint16_t count;
  }
} implementation {
  components SenseToInt, Photo;
  SenseToInt.ADC -> Photo.ADC;
  SenseToInt.ADCControl -> Photo.StdControl;
  trigger -> SenseToInt.trigger;
  (SenseToInt.IntOutput.output, count) -> output;
  actorControl {
    SenseToInt.StdControl;
  }
}

Figure 3.3: Source code for TimerActor and SenseActor.

TinyGALS Application

At the top level of a TinyGALS program, actors are connected to form a complete application. A TinyGALS application A is a 5-tuple:

    A = (GLOBALS_A, ACTORS_A, CONNECTIONS_A, VARMAPS_A, START_A),    (3.3)

where GLOBALS_A is the set of global variables; ACTORS_A is the list of actors that form A; CONNECTIONS_A is the set of the relations between actor input and output ports; VARMAPS_A is a set of mappings, each of which maps a global variable in GLOBALS_A to a parameter (PARAMETERS_R in Equation 3.2) of an actor in ACTORS_A⁵; and START_A is the list of input ports of actors in the application at which the runtime system places initial tokens. CONNECTIONS_A of an application A differs from LINKS_R of an actor R in that connections between actors contain an implicit queue, whereas links inside an actor (between components) do not.

⁵Refer to the information on TinyGUYS in Section 3.1.1.

A galsC program is created by writing a galsC application file that contains zero or more parameters (global variables) and an implementation containing a list of actors, mappings, and connections, as well as an application start section. A connection connects actor output ports with actor input ports, with an optional declaration of the port queue size (which defaults to one). Note that arguments (initial data) may also be passed to the port. A mapping associates application parameters (global names) with actor parameters (local names).

Figure 3.4 shows the source code for the SenseTag application, which contains TimerActor, SenseActor, and some downstream actors. The application contains a parameter (global variable) named count, which is initialized to zero and connected to the corresponding parameters of TimerActor and SenseActor. The output port trigger of TimerActor is connected to the corresponding input port of SenseActor, with a queue size of 64. The appstart section declares that an initial token is to be placed in the input port of SenseActor. Using the tuple notation given in Equation 3.3, the example application can be defined as

    A = (GLOBALS_A = {count},
         ACTORS_A = [TimerActor, SenseActor, ...],
         CONNECTIONS_A = {(TimerActor.trigger, SenseActor.trigger),
                          (SenseActor.output, ...)},
         VARMAPS_A = {(count, TimerActor.count), (count, SenseActor.count)},
         START_A = [SenseActor.trigger]).

application SenseTag {
  parameter {
    uint16_t count = 0;
  }
} implementation {
  actor TimerActor, SenseActor, ...;
  count = TimerActor.count;
  count = SenseActor.count;
  TimerActor.trigger =[64]=> SenseActor.trigger;
  SenseActor.output => ...;
  appstart {
    SenseActor.trigger();
  }
}

Figure 3.4: Source code for the SenseTag application.

3.1.2 Execution model and language semantics

This section discusses the semantics of execution within a component, between components within an actor, and between actors within an application. It also includes a discussion of the conditions for well-formedness of an application, and Section 3.2.2 summarizes which configurations of actors within an application are valid.

Assumptions

The TinyGALS architecture is intended for a platform with a single processor. A TinyGALS program runs in a single thread of execution (single stack), which may be interrupted by the hardware; there are no sources of preemption other than hardware interrupts. This section assumes that interrupt handlers are not reentrant and that an interrupt is masked while it is being serviced (interleaved invocations of the same interrupt handler are disabled); however, other (different) interrupts may occur while servicing an interrupt. All memory is statically allocated; there is no dynamic memory allocation. This discussion also assumes the existence of a clock, which is used to order events.

A piece of code is reentrant if multiple simultaneous, interleaved, or nested invocations do not interfere with each other. To simplify the discussion, this section assumes that all methods may potentially access component state. Methods that do not access component state will not suffer from race conditions, but they may still suffer from reentrancy problems. This section discusses constraints on what constitutes a valid configuration of components within an actor when using components that contain interrupt handlers in which interrupts are enabled. These constraints are necessary for avoiding unexpected reentrancy, which may lead to race conditions and other nondeterminacy issues.

TinyGALS Components

There are three cases in which a component C may begin execution: (1) an interrupt arrives from the hardware that C encapsulates; (2) an event arrives on the actor input port linked to one of the interface methods of C; or (3) another component calls one of the interface methods of C. In the first case, the component is a source component; when activated by a hardware interrupt, the corresponding interrupt service routine runs. In the second case, the component is a triggered component; an event on a linked actor input port may trigger the execution of a component method. The third case is that of a called component. Once activated, a component executes to completion; that is, the interrupt service routine or method finishes.

In Figure 3.2, mState, mScale, and mInterval are internal variables of component TimerM. When the StdControl.init() method of TimerM is called, the component calls the Clock.setRate() method with the values of mInterval and mScale as its arguments. The call keyword indicates that the Clock.setRate() method is called synchronously (explained further in the next section). The component only needs to know the type signature of Clock.setRate(); it does not matter to which component the method is linked.

While a method runs, an interrupt may arrive, leading to possible race conditions if the interrupt modifies internal variables (internal state) of the same component. Reentrancy problems may arise if a component is both a source component and a triggered component. Therefore, to improve the analyzability of the system and eliminate the need to make components reentrant, source components must not also be triggered components. That is, source components must only have outputs (required methods) and no inputs (provided methods), and source components do not connect to any actor input ports. The same argument also applies to source components and called components. Both source components and triggered components may call other components via required methods. Additional rules for linking components together are detailed in the next section.

TinyGALS Actors

The flow of control between components within a TinyGALS actor occurs on links. A link is a relation within an actor between its port(s), parameter(s), and component method(s). Links represent synchronous communication via method calls; Section 3.1.3 discusses the exact specifics of what types of links are valid. When a component calls a required method with the call keyword, the flow of control in the actor is immediately transferred to the callee component or port.
The external method can return a value through the call just as in a normal method call.⁶

There are two cases in which an actor R may begin execution: (1) a triggered component is activated, or (2) a source component is activated. In the first case, the scheduler activates the component method linked to an input port of R in response to an event sent to R by another actor. In the second case, R contains a source component which has received a hardware interrupt; notice that in this case, R may interrupt the execution of another actor. The execution of actors is controlled by the scheduler in the TinyGALS runtime system. An actor is considered to have finished executing when the components inside of it have finished executing and control has returned to the scheduler.

As discussed in the previous section, preemption of the normal thread of execution by an interrupt may lead to reentrancy problems. Therefore, TinyGALS places some restrictions on what configurations of components within an actor are allowed. Cycles within actors (between components) are not allowed.⁷ Thus, any valid configuration of components within an actor can be modeled as a directed acyclic graph (DAG), where the graph of the components and the links between them is an abstraction of the call graph of the methods within an actor. One can relax the restriction on cycles between components, and only disallow cycles in method call chains between components, by first separating the methods within a component into separate source and triggered components, where the methods associated with a single component are grouped together; otherwise, reentrant components are required.

A source DAG is formed by starting with a source component and following all forward links between it and other components in the actor, as in Figure 3.5(a). A triggered DAG is similar to a source DAG but starts with a triggered component instead, as in Figure 3.5(b). If all interrupts are masked during interrupt handling (interrupts are disabled), then no additional restrictions on source DAGs are needed. However, if interrupts are not masked (interrupts are enabled), then a source DAG must not be connected to any other source DAG within the same actor. Race conditions and reentrancy problems may also occur if source DAGs and triggered components are connected within an actor. In Figure 3.5(c), the source DAG (C1, C3) is connected to the triggered DAG (C2, C3); race conditions and reentrancy problems may occur if C3 is running in a scheduled context and an interrupt causes C1 to preempt C3. Triggered DAGs can be connected to other triggered DAGs, since with a single thread of execution, it is not possible for a triggered component to preempt a component in any other triggered

⁶In TinyOS, the return value indicates whether the command completed successfully or not.
⁷Recursion within components is allowed. However, the recursion must be bounded for the system to be live.
DAG.

[Figure 3.5: Directed acyclic graphs (DAGs) within actors. (a) A source DAG is activated by a hardware interrupt. (b) A triggered DAG is activated by the arrival of an event at the actor input port. (c) When a source DAG is connected to a triggered DAG, race conditions and reentrancy problems may occur.]

Recall that once triggered, the components in a triggered DAG execute to completion. As discussed earlier, the configuration of components inside an actor must not contain cycles and must follow the rules above regarding source and triggered DAGs. TinyGALS also places restrictions on what connections are allowed between component methods and actor ports, since some configurations may lead to a nondeterministic component firing order. Let us first assume that both actor input ports and actor output ports are totally ordered (using the order of the ports declared in the port section of the actor definition file), but that components are not ordered. Then actor input ports may be associated either with one provided method of a single component C or with one or more actor output ports. Likewise, required component methods may be associated either with one provided method of a single component C or with one or more actor output ports.⁸ Provided component methods may be associated with any number or combination of required component methods and actor input ports, but they may not be associated with actor output ports. Likewise, actor output ports may be associated with any number or combination of required component methods and actor input ports. If, however, we assume that neither actor input ports nor actor output ports are ordered, then actor input ports and required component methods may only be associated with either a single method or with a single output port.

In Figure 3.3, the implementation section of the TimerActor declares that whenever component Trigger calls trigger(), an event is produced at the trigger output port. Likewise, the implementation section of the SenseActor definition declares that whenever the trigger input port is triggered (explained in the next section), the trigger() method of component SenseToInt is called.

⁸In the existing TinyOS constructs, one caller (a required component method) can have multiple callees. The interpretation is that when the caller calls, all of the callees are called in a possibly non-deterministic order, and a combination of the callees' return values is returned to the caller. Although multiple callees are not part of the TinyGALS semantics, they are supported by the galsC software tools for TinyOS compatibility.

TinyGALS Application

Each input port of an actor has a FIFO (first-in, first-out) queue; the queue separates the flow of control between actors, and communication between actors occurs asynchronously through these queues. When a component within an actor calls a method that is linked to an output port, the arguments of the call are converted into events called tokens. A copy of the token is placed in the event queue of each input port connected to the output port, and the call to the output port returns immediately, so that the component within the actor can proceed. Tokens are placed in input port queues atomically, so other source components cannot interrupt this operation. Tokens are dropped if the input port queue is full; the programmer is currently responsible for selecting the correct queue size. Later, the TinyGALS scheduler removes the token from the event queue and calls the method that is linked to the input port with the contents of the token as its arguments. The scheduler processes tokens in the order in which they are generated. Note that since each input port of an actor R is linked to a component method, each token that arrives on any input port of R corresponds to a future invocation of the component(s) in R. Communication between actors is also possible without the transfer of data; in this case, an empty message (token) transferred between ports acts as a trigger for activation of the receiving actor.

The execution of a TinyGALS system begins with the initialization of all methods specified in INIT_Ri for all actors Ri. The order in which actors are initialized is the same as the order in which they are listed in the application configuration file, and the order in which methods are initialized for a single actor is the same as the order in which they are listed in the actor configuration file. After actor initialization, the TinyGALS runtime system places an initial token at each system start port; these are the input port(s) declared in the appstart section of the application configuration file. If initial arguments to the port were declared in the application configuration file, these are stored in the token. The components in the triggered DAG of the starting actor then execute to completion. For example, in Figure 3.4, the application starts when the runtime system places an initial token at the input port trigger of SenseActor.

During execution of a TinyGALS application, interrupts may occur and preempt the normal thread of execution. The components iterated in response to an interrupt may generate one or more events at the output port(s) of their actor, and control eventually returns to the normal thread of execution. When the system is not responding to interrupts or events on input ports, the system does nothing (i.e., sleeps).

[Figure 3.6: A single-output, multiple-input connection. The output port A_out of Actor A is connected to the input ports B_in of Actor B and C_in of Actor C.]

The previous section discussed limitations on the configuration of links between components within an actor. Connections between actors are much less restrictive: actor output ports may be connected to one or more actor input ports, and actor input ports may be connected to one or more actor output ports. A single-output, multiple-input connection acts as a fork; for example, in Figure 3.6, every token produced by A_out is duplicated and triggers both B_in and C_in. A multiple-output, single-input connection has merge semantics, such that tokens from multiple sources are merged into a single stream in the order that the tokens are produced; this type of merge does not introduce any additional sources of nondeterminacy. Cycles are allowed between actors. A cycle does not lead to reentrancy problems, because the queue on an actor input port acts as a delay in the loop.

The runtime system maintains a global event queue, which keeps track of the tokens in all actor input port queues in the system. Currently, the runtime system activates the actors corresponding to the tokens in the global event queue using FIFO scheduling; more sophisticated scheduling algorithms, such as ones that take care of timing and energy concerns, can be substituted. The current galsC implementation processes tokens in the order in which they are generated, as defined by the hardware clock. Tokens generated at the same logical time are processed with respect to the global ordering of actor input ports: input ports are first ordered by actor order, as the actors appear in the application configuration file, and then in the order in which they are declared in the actor configuration file. The TinyGALS semantics do not define exactly when an input port is triggered. Section 3.2 discusses interrupts and their effect on the order of events in the global event queue, as well as the ramifications of token generation order on the determinacy of the system.

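To make these mechanics concrete, the following C sketch shows the general shape of the token enqueue operation and of the FIFO scheduler loop just described. All of the identifiers here (token_t, port_queue_t, trigger_put(), scheduler_run(), and the interrupt-masking stubs) are hypothetical illustrations rather than the names actually produced by the galsC compiler, whose generated code is described in Section 3.3.

#include <stdint.h>
#include <stdbool.h>

#define QUEUE_SIZE 64                      /* queue size from the connection */

typedef struct { uint16_t data; } token_t; /* token carrying one argument */

typedef struct {
    token_t buf[QUEUE_SIZE];
    uint8_t head;                          /* index of the oldest token */
    uint8_t count;                         /* number of queued tokens */
    void (*invoke)(uint16_t arg);          /* method linked to this input port */
} port_queue_t;

static port_queue_t sense_trigger_q;       /* e.g., the queue of SenseActor.trigger */

/* Platform-specific interrupt masking; no-op stubs for illustration. */
static uint8_t irq_disable(void) { return 0; }
static void    irq_restore(uint8_t s) { (void)s; }
static void    cpu_sleep(void) { }         /* sleep until the next interrupt */

/* Called when a component method linked to an output port fires: the
 * argument is converted into a token and enqueued atomically, and the
 * call returns immediately so that the caller can proceed. */
static bool trigger_put(port_queue_t *q, uint16_t arg)
{
    uint8_t s = irq_disable();
    if (q->count == QUEUE_SIZE) {          /* queue full: the token is dropped */
        irq_restore(s);
        return false;
    }
    q->buf[(q->head + q->count) % QUEUE_SIZE].data = arg;
    q->count++;
    irq_restore(s);
    return true;
}

/* FIFO scheduler loop: atomically dequeue the oldest token, then call the
 * method linked to the input port with the token contents as arguments.
 * The triggered DAG runs to completion before the next token is handled. */
static void scheduler_run(void)
{
    for (;;) {
        uint8_t s = irq_disable();
        if (sense_trigger_q.count > 0) {
            token_t t = sense_trigger_q.buf[sense_trigger_q.head];
            sense_trigger_q.head = (uint8_t)((sense_trigger_q.head + 1) % QUEUE_SIZE);
            sense_trigger_q.count--;
            irq_restore(s);                /* interrupts re-enabled before invocation */
            sense_trigger_q.invoke(t.data);
        } else {
            irq_restore(s);
            cpu_sleep();                   /* nothing to do: sleep */
        }
    }
}

For brevity, this sketch manages a single input port; the generated scheduler instead consults the global event queue, which orders tokens across all input ports in the system.
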
TinyGUYS

The TinyGALS programming model has the advantages that actors become decoupled through message passing and are easy to develop independently. However, each message passed triggers the scheduler and activates a receiving actor, which may quickly become inefficient if there is global state that must be updated frequently. The TinyGUYS (Guarded Yet Synchronous) mechanism provides a way for actors to share global data safely; several actors may access the same global variables at the same time.

One must be very careful when implementing global data spaces in concurrent programs. It is possible that while an actor is reading the variables, an interrupt may occur and preempt the read, and the interrupt service routine may modify the global variables. When the actor resumes reading the remaining variables after handling the interrupt, it may see an inconsistent state.

In the TinyGUYS mechanism, global variables (parameters) are guarded. Actors may read a parameter synchronously (i.e., without delay). However, writes to the parameter are asynchronous in the sense that all writes are delayed: a write to a TinyGUYS global variable is actually a write to a copy of the global variable. One can think of this as a write buffer of size one. Parameters are updated atomically by the scheduler only when it is safe (i.e., after an actor finishes and before the scheduler triggers the next actor). Because there is only one buffer per global variable, the last actor to write to the variable "wins"; that is, the last value written will be the new value of the global variable. One can think of this as a way of formalizing race conditions. Section 3.2.2 discusses how to eliminate race conditions.

TinyGUYS have global names that are mapped to the local parameter names of each actor. This design does not require parameter names to appear inside the component name space; one can develop components in their own scope, independent of the connected parameters. A component interface method or an actor port can read a parameter when the method or port is invoked, by passing the parameter value as one of the arguments. A component interface method or an actor port can write to a parameter by calling a connected function with a single argument. In TimerActor in Figure 3.3, the Counter.IntOutput.output method has a single argument, which is written to the count parameter whenever the method is called. In SenseActor in Figure 3.3, the count parameter is passed as the last argument to the output port.

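The following is a minimal C sketch of the guarded storage behind a single TinyGUYS parameter such as count. The function names are hypothetical, but the structure follows the mechanism just described: reads are synchronous, writes go to a buffer of size one (so the last writer wins), and the scheduler commits the buffered value atomically between actor iterations.

#include <stdint.h>
#include <stdbool.h>

static uint16_t count_value;          /* the committed global value */
static uint16_t count_buffer;         /* write buffer of size one */
static bool     count_dirty = false;  /* true if a write is pending */

/* Reads are synchronous: return the committed value immediately. */
uint16_t count_get(void)
{
    return count_value;
}

/* Writes are delayed: buffer the value; the last writer wins. */
void count_put(uint16_t v)
{
    count_buffer = v;
    count_dirty = true;
}

/* Called by the scheduler after an actor finishes and before the next
 * actor is triggered, when it is safe to update the parameter atomically. */
void count_commit(void)
{
    if (count_dirty) {
        count_value = count_buffer;
        count_dirty = false;
    }
}
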
3.1.3 Link model within actors

A link x → y inside an actor consists of a source x and a target y. The equations below use regular expressions to describe the possible entities of x and y, where l is the local name of a parameter, p is an actor port name, and f is a component interface function (method):

    source = (l)* (p | f) (l)*    (3.4)
    target = l | p | f            (3.5)

A trigger is a port or function that appears as the source of a link. A port is triggered when the scheduler invokes it with the first token in its queue. A function is triggered when it is called by another function.

A link x → y is valid if the number of arguments and the types of the arguments of the source match those of the target when the arguments on each side of the arrow are concatenated separately, similar to the notion of record types [100]. For example, suppose f1 is a required method with exactly two arguments. The link (f1, l1) → p1 is valid if p1 is an output port that has exactly three arguments whose types match those of the left-hand side (i.e., the types of the first two arguments of p1 must match those of f1, and the type of the last argument of p1 must match that of l1), and if the return type of f1 matches that of p1. The return type of a trigger must also match that of the target. Additionally, a source port must be an input port and a target port must be an output port, and a source function must be a required method and a target function must be a provided method.⁹

Using the regular expression model, the following enumerates the valid types of links, where l in (t, l) is an abbreviation for any number of parameters appearing before or after the trigger t:

• Without parameters
  – p1 → p2: When the input port p1 is triggered, transfer the token directly from p1 to the output port p2.
  – p1 → f1: When the input port p1 is triggered, trigger a function f1.
  – f1 → p1: When the function f1 is triggered, create a token from the arguments of f1 and send it to the output port p1.
  – f1 → f2: When the function f1 is triggered, trigger another function f2.

• With parameters
  – Parameter GET
    * (p1, l) → p2: When the input port p1 is triggered, concatenate the arguments of p1 with the current value of the parameter(s) l, and send the resulting token directly to the output port p2.
    * (p1, l) → f1: When the input port p1 is triggered, concatenate the arguments of p1 with the current value of the parameter(s) l, and trigger a function f1 with the corresponding arguments.
    * (f1, l) → p1: When the function f1 is triggered, concatenate the arguments of f1 with the current value of the parameter(s) l, and send the resulting token to the output port p1.
    * (f1, l) → f2: When the function f1 is triggered, concatenate the arguments of f1 with the current value of the parameter(s) l, and trigger another function f2 with the corresponding arguments.
  – Parameter PUT
    * p → l: When the input port p is triggered, write its argument to the parameter l.
    * f → l: When the function f is triggered, write its argument to the parameter l.
  – Parameter GET/PUT
    * (p, l1) → l2: When the input port p is triggered, read the current value of the source parameter l1 and write it to the target parameter l2.
    * (f, l1) → l2: When the function f is triggered, read the current value of the source parameter l1 and write it to the target parameter l2.

Table 3.1 summarizes the valid types of links.

Table 3.1: Summary of valid types of links in TinyGALS/galsC.

    No parameters:      p1 → p2    p1 → f1    f1 → p1    f1 → f2
    Parameter GET:      (p1, l) → p2    (p1, l) → f1    (f1, l) → p1    (f1, l) → f2
    Parameter PUT:      p → l    f → l
    Parameter GET/PUT:  (p, l1) → l2    (f, l1) → l2

⁹This model also applies to connections at the application level. However, at the application level, the port directions discussed here must be reversed: a source port must be an output port and a target port must be an input port. Also, global parameter names should be used instead of local parameter names. Note that functions do not appear at the application level.

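As a concrete illustration, the hypothetical actor below expresses several of these link forms in galsC syntax. The component Worker and its methods are invented for this example; the connection syntax mirrors that of Figure 3.3.

actor LinkExample {
  port {
    in trigger;
    out output;
  }
  parameter {
    uint16_t count;
  }
} implementation {
  components Worker;
  trigger -> Worker.go;                 // p1 -> f1: a port triggers a function
  (Worker.report, count) -> output;     // (f1, l) -> p1: parameter GET
  Worker.update -> count;               // f -> l: parameter PUT
}
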
For links with no parameters, the trigger either (a) triggers the connected function or (b) passes a token to the connected output port. In a parameter GET (read) link, the parameter value(s) are appended to the trigger's argument list and passed to the connected function or port. In a parameter PUT (write) link, the trigger writes its argument to the parameter. In a parameter GET/PUT (read/write) link, the trigger causes the source parameter to be read and its value stored in the target parameter. Note that for the number of arguments to match, the trigger in a parameter PUT link must have only one argument, and the trigger in a parameter GET/PUT link must have no arguments.

What are the semantics of multiple links (i.e., fanout from a function)? For example, what is the order of computation if one has both f1 → l1 and f1 → f2? Or both f1 → l1 and f1 → p? In TinyGALS, the write to the parameter occurs first, before any additional computation or transfer of control; the buffered parameter value may then get overwritten in the later computation. This policy provides a consistent view of ordering in the system.

3.1.4 Type inference and type checking

The galsC compiler performs high-level type inferencing on the connection graph of an application. There are two parts to the type inference system: connections with ports, and connections with parameters but no ports.¹⁰

Ports

In galsC, ports are untyped; the actual types of ports are inferred from the connection graph of a galsC program. In Figure 3.7, actor A contains a component which has a call to a function f with type signature τ1. The input port of actor B is the target of the concatenation of the output port of A with a parameter of type τ3. The output port of B is the target of the concatenation of the input port of B and a parameter of type τ5. The output port of B is directly connected to the input port of actor C. The input port of C is a trigger for a function with type signature τ8.

[Figure 3.7: Type checking example. Actor A's call to f() (type τ1) feeds its output port (τ2); concatenated with a parameter of type τ3, it enters actor B's input port (τ4); concatenated with a parameter of type τ5, it leaves B's output port (τ6) and enters actor C's input port (τ7), which triggers a function f() with signature τ8. The known types (τ1, τ3, τ5, τ8) are shown in bold.]

¹⁰Connections containing only functions are checked with the nesC type checker.

ports. The previous sections showed how the TinyGALS component model enables users to analyze potential sources of concurrency problems more easily by identifying source. triggered. and (2) links between a function and a local name. many components that are wrappers for device drivers are “split phase”. 3.1. A valid system has a unique solution to the set of equations. it is up to the software developer to write thread-safe code. Parameters The type system for parameter connections without ports is straightforward. A higher level component can call the device driver component to ask for data. Later. and called components and defined what kinds of links and connections between components. the device driver component interrupts with the ready data. The galsC compiler detects a type error when the set of equations conflicts with itself or is unsolvable.5 Summary In TinyOS. especially after components are wired together and may have interleaved events. and parameters are valid. Although the TinyOS architecture allows components to reject concurrent requests. since there are only two types of connections: (1) mappings between a global name and a local name. The hidden source aspect of these types of components may lead to TinyOS configurations with race conditions or other synchronization problems.39 One can write a type equation for each connection in the system: τ 1 = τ2 τ2 × τ 3 = τ4 τ4 × τ 5 = τ6 τ6 = τ7 τ7 = τ 8 One can then solve the set of equations to determine the types of the ports. the type checker merely verifies that all the types in a connection match each other. This call returns immediately. . Since the types of all of these sources and targets are known. The galsC compiler derives types for all ports in the system by matching the return type and the argument types of all connected upstream and downstream functions. This job is quite difficult. which means that they are actually both source and triggered components.

3.2 Concurrency and Determinacy Issues

Concurrency management is a significant concern in event-driven systems. Poorly implemented systems may suffer from deadlock (i.e., where no tasks can proceed due to blocking on a shared resource), livelock (i.e., where the system falls into a dead loop and responds to no further interrupts), and race conditions (i.e., where shared variables are accessed by multiple threads at the same time). This section only considers concurrency issues on single-processor platforms.

3.2.1 Concurrency

A TinyGALS program runs in a single thread of execution (single stack), which may be interrupted by the hardware. All memory is statically allocated; there is no dynamic memory allocation. There are two mechanisms for actors to communicate in TinyGALS: event queues (ports) and guarded global variables (parameters). An actor A may begin execution when: (1) the scheduler activates A in response to an event at its input port, or (2) an interrupt service component within A is triggered by an external interrupt. The execution activated by the scheduler is called the scheduled context, and the execution triggered by interrupts is called the interrupt context. Since all scheduled executions of actors are in the scheduled context and controlled sequentially by the scheduler, the only possibility for cross-actor concurrent execution is when one actor is in the scheduled context, and one or more other actors are in an interrupt context.

Theorem 1. Deadlock is not possible across actors.

Blocking on shared resources (e.g., a blocking read) is not part of the semantics across actors, so no actor can block waiting for another actor.

Interestingly, it is possible for a poorly implemented scheduler to retain control and disable interrupts indefinitely, since there are critical system operations, such as enqueuing and dequeuing events, which are atomic. In Figure 3.8, the Loop actor is first triggered by an interrupt; there is a direct link between the input port and the output port inside the actor, so the triggering event produces an event (token) at the output port, which loops back to the input port, where it is inserted into the event queue. Can this self-loop prevent further interrupts from entering the system? Once the event is enqueued, the scheduler first dequeues the event with interrupts disabled, then calls the function connected to the inside of the input port (in this case, the put() function of the output port). Within the put() function, the code that inserts the event back into the event queue is also atomic. However, in the galsC scheduler, interrupts are enabled between dequeuing the event and enqueuing the event, so future interrupts will not be blocked. Thus, although without a careful implementation of the scheduler there is a risk of livelock, the following holds:

Theorem 2. Livelock is not possible across actors.

[Figure 3.8: A self-loop actor triggered by an interrupt.]

Race conditions are another major concurrency concern: an actor may be in the midst of writing data when another actor tries to read it, and two actors may also try to write to a shared variable at the same time. There are two forms of shared data across actors: tokens and parameters. Tokens are stored in event queues, and access to them is atomic and controlled by the scheduler. Parameters are always guarded, and their value updates are likewise controlled by the scheduler (where the last value written wins). Thus:

Theorem 3. Race conditions are not possible across actors.

As a result of these claims, concurrency errors will not happen at the application level across actors, and programmers can focus on concurrency issues within each actor, which is a problem with a much smaller scope. These issues were discussed in Section 3.1.2.

3.2.2 Determinacy

Notice that the lack of concurrency errors does not mean that TinyGALS programs are deterministic. The system state of a TinyGALS program consists of (1) the internal state of all components, (2) the contents of the global event queue,¹¹ and (3) the values of all global parameters. The question of determinacy is: given a unique initial state of a TinyGALS program and a set of known interrupts (in terms of both interrupt time and value), will the program have a unique state trajectory independent of the execution/CPU speed? Note that single-threaded sequential programs, where all inputs are read into the system, are determinate. Concurrent models, such as Kahn process networks, can also be determinate [52]. However, for event-driven systems, determinacy may be sacrificed for reactiveness. This section analyzes the determinacy property of TinyGALS programs, beginning with definitions for a TinyGALS system, system state (including quiescent and active system states), actor iteration (in response to an interrupt and in response to an event), and system execution. This section also reviews the conditions for well-formedness of a TinyGALS system.

¹¹The global event queue is defined as the ordered sequence of tokens in the event queues of all actor ports.

Definition 1 (System). A system consists of an application and a global event queue.

Recall from Equation 3.3 that an application is defined as A = (GLOBALS_A, ACTORS_A, CONNECTIONS_A, VARMAPS_A, START_A). Recall also that the input port associated with a connection between actors has a FIFO queue for ordering and storing events destined for the input port. The global event queue provides an ordering for tokens in all input port queues: whenever a token is stored in an input port queue, a representation of this event is also inserted into the global event queue. Events that are produced earlier in time with respect to the system clock appear in the global event queue before events that are produced later in time. Events that are produced at the same time (e.g., as in Figure 3.9 or Figure 3.6) are ordered first by order of appearance in the application actor list (ACTORS_A), and then by order of appearance in the actor's input port list (an ordered list created from the actor's input port set INPORTS_R).

[Figure 3.9: Two events are produced at the same time: actor R emits two events, each tagged with the same time t0.]

Definition 2 (System state). The system state consists of four main items:

1. The contents of the global event queue.
2. The contents of all of the queues associated with actor input ports in the application.
3. The values of all internal variables of all components (V_Ci).
4. The values of all TinyGUYS global variables (GLOBALS_A).

The system state is either quiescent or active:

Definition 2.1 (Quiescent system state). A system state is quiescent if there are no events in the global event queue, and hence no events in any of the actor input port queues in the system.

Definition 2.2 (Active system state). A system state is active if there is at least one event in the global event queue, and hence at least one event in the queue of at least one actor input port.

Note that a TinyGALS system starts in an active system state, since execution begins by triggering an actor input port. Recall that the global event queue contains the events in the system, while the actor input ports contain the data associated with the events.

Definition 3 (Component execution). Component execution is the execution of the code in the body of the interrupt service routine or method through which the component has been activated.

A source component is activated when the hardware it encapsulates receives an interrupt. A triggered or called component C is activated when one of its provided methods is called. Note that the code executed upon component activation may call other methods in the same component or in a linked component; component execution therefore also includes the execution of all external code until control returns and execution of the code body has completed.

Execution of the system can be partitioned into actor iterations based on component execution.

Definition 4 (Actor iteration). An iteration of an actor R is the execution of a subset of the components inside of R in response to either an interrupt or an event at an input port.

The following defines these two types of actor iteration in more detail, including what is meant by "subset of the components."

Definition 4.1 (Actor iteration in response to an interrupt). Suppose actor R is iterated in response to an interrupt I. Let C be the component that contains the interrupt handler of I; recall from Section 3.1.2 that C must therefore be a source component. Create a source DAG D by starting with C and following all forward links between C and other components in R. Iteration of the actor consists of the execution of the components in D, beginning with C. Note that iteration of the actor may cause it to produce one or more events on its output port(s).

Definition 4.2 (Actor iteration in response to an event). Suppose actor R is iterated in response to an event E stored at the head of one of its input port queues, Q. Let C be the component linked to the input port of Q; recall from Section 3.1.2 that C must therefore be a triggered component. Create a triggered DAG D by starting with C and following all forward links between C and other components in R. Iteration of the actor consists of the execution of the components in D, beginning with C. As with the interrupt case, iteration of the actor may cause it to produce one or more events on its output port(s).

Definition 5 (System execution). Given a system state and zero or more interrupts, system execution is the iteration of actors until the system reaches a quiescent state.

The order in which actors are executed is the same as the order of events in the global event queue. This discussion assumes that an interrupt whose handler is running is masked, but that other interrupts are not masked. The following discusses how to choose the actor iteration order.

Conditions for well-formedness

Below is a summary of the conditions that the components within a single TinyGALS actor must satisfy in order to be well-formed and avoid concurrency problems, as discussed in Sections 3.1.2 and 3.1.3:

• Source components may neither also be triggered components nor called components.
• Cycles among components within an actor are not allowed, but loops around actors are allowed.
• Component source DAGs must not be connected to other source DAGs.
• Component source DAGs and triggered DAGs must be disconnected, but triggered DAGs may be connected to other triggered DAGs.
• Input ports may be associated with a single method of a single component, or with one or more output ports.
• Outgoing component methods may be associated with a single method of another component, or with one or more output ports.

Determinacy

Given the definitions in the previous section, one can analyze the determinacy of a TinyGALS system. In the intuitive notion of determinacy, given an initial quiescent system state and a set of interrupts that occur at known times, the system always produces the same outputs and ends up in the same state after responding to the interrupts. A system is determinate if, for each quiescent state and a single interrupt, there is only one system execution path between quiescent states, where the system execution path is the order in which the actors are iterated.

[Figure 3.10: A single interrupt. An interrupt I arriving in quiescent state q0 triggers a sequence of actor iterations r0, r1, r2, ..., rn, which passes through active states a_{0,0}, a_{0,1}, ..., a_{0,n-1} before reaching quiescent state q1.]

Recall that a TinyGALS system starts in an active system state, since execution begins by triggering an application start port. The application start port is an actor input port, which is in turn linked to a component C inside the actor. The component C is a triggered component, which is part of a DAG. Components in this triggered DAG execute and may generate events at the output port(s) of the actor; system execution proceeds until the system reaches a quiescent state.

This section first discusses determinism of a TinyGALS system in the case of a single interrupt occurring in a quiescent state. Figure 3.10 depicts iteration of a TinyGALS system between two quiescent states due to activation by an interrupt I. From the quiescent state, and in each of the steps r0, r1, ..., rn, the actor selected is determined by the order of events in the global event queue, so there is only one system execution path:

Theorem 4 (Determinacy). A TinyGALS system is determinate.

What if one or more interrupts occur during an actor iteration, as is usually true in an event-driven system? The rest of this section discusses determinism for one or more interrupts occurring during an actor iteration, in the cases (1) where there are no global variables and (2) where there are global variables.

Suppose the iteration of actor R is interrupted one or more times. This section assumes that the handlers for interrupts I1, I2, ..., In execute quickly enough such that they are not interleaved (e.g., I2 does not interrupt the handling of I1). Since source DAGs must not be connected to triggered DAGs, the interrupt(s) cannot cause the production of events on output ports of R that would be used in the case of a normal, uninterrupted iteration. However, the interrupt(s) may cause the insertion of events into other actor input port queues, and hence insertions into the global event queue. Depending on the relative timing between the interrupts and the production of events by a component C at the output ports of R, the order of events in the global event queue may not be consistent between multiple runs of the system, even if the same interrupts occur during the same actor iteration. This is a source of non-determinacy. Figure 3.11 illustrates a system execution in which a single actor iteration is interrupted by multiple interrupts.

[Figure 3.11: One or more interrupts where actors have delayed output. Interrupts I1, I2, ..., In arrive during the iteration triggered by interrupt I0; the system passes through active states a^x_{0,0}, a^x_{0,1}, ..., a^x_{0,n} before reaching quiescent state q1.]

In this notation, a^i_{j,k} refers to an active system state after an interrupt I_i, starting from quiescent state q_j and after actor iteration r_k. The superscript x in a^x_{j,k} is a shorthand for the sequence of interrupts I0, I1, I2, ..., In.

Determinacy of a system without global variables. This section first examines the case where there are no TinyGUYS global variables. A partial solution for reducing non-determinacy in the system is to delay producing outputs from the actor being iterated until the end of its iteration; this approach is taken by models of computation such as timed multitasking [69] and Giotto [44]. Consider an actor R that contains a component C which produces events on the output ports of R. If one knows the order of the interrupts, then one can predict the state of the system after a single actor iteration, even if the iteration is interrupted one or more times. In Figure 3.12, suppose that active state a^1_{0,0} would be the next state after an iteration of the actor corresponding to interrupt I1 from quiescent state q0, and that active state a^2_{0,0} would be the next state after an iteration of the actor corresponding to interrupt I2 from q0. Then the system state
would be a^1_{0,0} + a^2_{0,0}, where the value of this expression is the system state in which the new events produced in active system state a^2_{0,0} are inserted (or "appended") into the corresponding actor input port queues in active system state a^1_{0,0}.

[Figure 3.12: Active system state after one interrupt: an interrupt I_i in quiescent state q0 leads to active state a^i_{0,0}.]

[Figure 3.13: Active system state determined by adding the active system state after one non-interleaved interrupt: interrupts I1, I2, ..., In arriving during the iteration for I0 from q0 yield the state a^1_{0,0} + a^2_{0,0} + ... + a^n_{0,0} + a^0_{0,0}.]

One can extend this to any finite number of interrupts, as shown in Figure 3.13: if non-interleaved interrupts I1, I2, ..., In occur during the iteration for I0, the resulting system state is a^1_{0,0} + a^2_{0,0} + ... + a^n_{0,0} + a^0_{0,0}. If the interrupts are interleaved (for example, if an interrupt occurs before the completion of the iteration of actor R in response to interrupt I0, but after the completion of the interrupt handlers for interrupts I1 and I2), then one must add the system states (append actor input port queue contents) in the order in which the interrupt handlers finish. System execution is then deterministic for a fixed sequence of interrupts. It is necessary that the number of interrupts be finite for liveness of the system; from a performance perspective, it is also necessary that interrupt handling be fast enough that the handling of the first interrupt I0 completes in a reasonable length of time.

One can also queue interrupts in order to eliminate preemption, which leads to greater predictability in the system. Another solution is to preschedule actor iterations, during which interrupts are masked; that is, if an interrupt occurs, a sequence of actor iterations is scheduled and executed with interrupts masked. However, both of these approaches reduce the reactiveness of the system.

Determinacy of a system with global variables. This section now discusses system determinacy in the case where there are TinyGUYS global variables. Suppose that actor R writes to a global variable. Also suppose that the iteration of actor R is interrupted, and a component in the interrupting source DAG writes to the same global variable. Then, without timing information, one cannot predict the final value of the global variable at the end of the iteration. In general, the state of the system after the iteration of actor R is interrupted by one or more interrupts is highly dependent on the time at which the components in R write to the global variable(s). The source of non-determinacy is the preemptive handling of interrupts. There are several possible alternatives for eliminating this source of non-determinacy.

Solution 1 Allow only one writer for each TinyGUYS global variable.

Solution 2 Allow multiple writers, but only if they can never write at the same time. That is, if a component in a source DAG writes to a TinyGUYS global variable, then no component in any triggered DAG can be a writer; components in other source DAGs are only allowed to write if all interrupts are masked. Likewise, if a component in a triggered DAG writes to a TinyGUYS global variable, then no component in any source DAG can be a writer (but components in other triggered DAGs are allowed, since they cannot execute at the same time).

Solution 3 Delay writes to a TinyGUYS global variable by an iterating actor until the end of the iteration. (Note that when read, a global variable then always contains the same value throughout an entire actor iteration.)

Solution 4 Prioritize writes such that once a high priority writer has written to the TinyGUYS global variables, lower priority writes are lost.

3.2.3 Summary

A TinyGALS program is determinate in a restricted case, where there is pure reactive execution. That is, interrupts occur only at quiescent states. This may require that the processing speed be quick enough to process all triggered execution before the next interrupt occurs. An extreme version of this case is the "synchronous" assumption in synchronous/reactive models, where the processing speed is infinitely fast and it takes zero time to react to external events [39]. In general, as currently defined, a TinyGALS program is non-determinate. Suppose that while an actor is being iterated, it is interrupted by

another actor. If both of these actors produce events at their output ports, the order of events in the global event queue may not be consistent when the system is executed at different speeds. If both of these actors write to a global variable (i.e., a parameter), then without exact timing information, one cannot predict the final value of the global variable at the end of the iteration. However, event-driven systems are usually designed to be reactive. In these cases, interrupts should be considered as high priority events which should affect the system state as soon as possible.

3.3 Code Generation

The highly structured architecture of the TinyGALS model enables automatic generation of the communication and scheduling code for galsC programs. Given the definitions for the components, actors, parameters, and application, the galsC compiler automatically generates all the code necessary for (1) component links and actor connections, (2) communication between actors, (3) TinyGUYS global variable reads and writes, and (4) system initialization and start of execution. The discussion throughout this section uses the example system illustrated in Figure 3.14, which is an annotated version of the SenseTag application example shown in Figure 3.1 at the beginning of this chapter. Tables 3.2 and 3.3 show a summary of the generated functions and data structures for galsC. This section also gives an overview of the implementation of the TinyGALS scheduler and how it interacts with TinyOS, as well as data on the memory usage of TinyGALS.

The galsC toolset is an extension of the nesC 1.1 toolset. The galsC compiler uses the link model described earlier in this chapter to check links and connections, and to infer and check types in the system graph of ports, parameters, actors, and functions (methods). The galsC compiler takes advantage of a real compiler backend and can compile both nesC and galsC programs; it uses traditional compiler techniques, including type checking, dead code elimination, and function inlining. The galsC compiler also inherits the data-race detection feature of nesC. The detection feature is modified for galsC, since the decoupling of execution through ports eliminates some possible sources of race conditions, allowing software developers to avoid writing error-prone concurrency control code. The output of the galsC compiler can be cross-compiled for any platform used with TinyOS, including the Berkeley motes.

Figure 3.14: Code generation for the SenseTag application.

Table 3.2: Generated code for ports in galsC.

  Function or variable name   Per port(12)   Function   Description
  GALSC_sched_init()                         X          Initialize scheduler data structures.
  GALSC_sched_start()                        X          Put initial tokens into input port queues.
  GALSC_eventqueue[]                                    Event queue for the TinyGALS scheduler.
  actor$port$argi[](13)       X                         Queue for the ith argument of the input port.
  actor$port$head(13)         X                         Points to the beginning of the input port queue.
  actor$port$count(13)        X                         Number of tokens in the input port queue.
  actor$port$put()            X              X          Put token into input port queue.
  actor$port$get()            X              X          Get token out of input port queue.

  (12) "Per port" indicates that this function or variable is generated for each input port. If not indicated, there is only one instance of the function or variable for the entire galsC program.
  (13) This variable is not generated if the port has no arguments (i.e., the token contains no data).

Table 3.3: Generated code for parameters (TinyGUYS) in galsC.

  Function or variable name   Per parameter(14)   Function   Description
  GALSC_params                                               Contains all of the parameters.
  GALSC_params_buffer                                        Copy of GALSC_params.
  parameter$put()             X                   X          Write to parameter buffer.
  parameter$get()             X                   X          Read from parameter.

  (14) "Per parameter" indicates that this function or variable is generated for each parameter. If not indicated, there is only one instance of the function or variable for the entire galsC program.

3.3.1 Links and connections

The compiler generates a set of aliases and mapping functions that create the links between components, as well as the connections between actors. The mapping functions for the links between components are the same as in the original nesC compiler—these are intermediate functions that call the destination function. In the example in Figure 3.14, for the links between the TimerControl interfaces of the Trigger and TimerC components, the galsC compiler generates an alias and a mapping function for each method of the interface.(15) For the init() method of the TimerControl interface, the alias and destination for the link is TimerM$StdControl$init() (see Figure 3.2 for the source code of the TimerC and TimerM components). The galsC compiler generates a mapping function named Trigger$TimerControl$init(), which calls TimerM$StdControl$init(). The galsC compiler also generates similar aliases and mapping functions for connections between actors, though the called function is a put() or get() function for an actor port, as detailed in the next section.

  (15) Here, TimerControl is an alias for StdControl that is explicitly declared in the declaration of the Trigger component using the as keyword in nesC.
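As an illustration of the generated link code just described, a minimal C sketch of the mapping function for the init() link might look as follows. The real galsC output uses `$`-separated identifiers such as TimerM$StdControl$init(), which gcc accepts; underscores are used here for portability, and result_t stands in for the TinyOS result type.

    typedef unsigned char result_t;   /* stand-in for TinyOS result_t */

    /* Destination of the link: StdControl.init() implemented in TimerM. */
    result_t TimerM_StdControl_init(void);

    /* Generated mapping function: an intermediate function that simply
       calls the destination of the link, as in the original nesC.     */
    result_t Trigger_TimerControl_init(void) {
        return TimerM_StdControl_init();
    }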

3.3.2 Communication

The compiler automatically generates a set of scheduler data structures and functions for each connection between actors. For each input port of an actor, the compiler generates a queue of width m and length n, where m is the number of arguments in the linked component method, and n is the length specified by the programmer in the application definition file. If the linked component method has no arguments, then as an optimization, the compiler does not generate a queue for the port, but it still reserves space for events in the scheduler event queue. The compiler also generates a pointer and a counter for each input port to keep track of the location and number of tokens in the queue. In the example in Figure 3.14, for the definition of the trigger input port of SenseActor, the galsC compiler generates an input port queue of length 64 called SenseActor$trigger$arg0[],(16) as well as the variables SenseActor$trigger$head and SenseActor$trigger$count.

The galsC compiler also generates a put() and a get() function for each input port; in the example, it generates the functions SenseActor$trigger$put() and SenseActor$trigger$get() for the input port trigger of SenseActor. The put() function handles the actual copying of data to the input port queue, and it modifies SenseActor$trigger$head and SenseActor$trigger$count to keep track of the queue contents. The put() function also adds the port identifier to the scheduler event queue so that the scheduler activates the actor at a later time. If the queue is full when attempting to insert data into the queue, one can take one of several strategies. The galsC scheduler currently takes the simple approach of dropping events that occur when the queue is full. However, an alternate method is to generate a callback function which attempts to re-queue the event at a later time. Yet another approach would be to place a higher priority on more recent events by deleting the oldest event in the queue to make room for the new event.

For each link between a component method and an actor output port, the galsC compiler generates a mapping function, as described in the previous section. The mapping function is called whenever a method of a component wishes to write to an output port, and it in turn calls the linked input port put() function. In the example in Figure 3.14, the galsC compiler generates a mapping function TimerActor$Trigger$trigger() for the trigger method of component Trigger in TimerActor; this mapping function in turn calls SenseActor$trigger$put() to insert data into the queue.

For each link between a component method and an actor input port, the galsC compiler also generates a mapping function, which calls the get() function of the linked input port. When the scheduler activates an actor via an input port, the system first calls this generated function to remove data from the input port queue and pass it to the component method. In the example, the system calls SenseActor$trigger$get() when the scheduler activates SenseActor to remove data queued in SenseActor$trigger$arg0[]. The scheduler also modifies SenseActor$trigger$head and SenseActor$trigger$count before calling the trigger() method of the SenseToInt component with the newly removed data as the argument.

  (16) TimerActor.Trigger.trigger() is a method with one argument.
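A simplified C sketch of the generated queue and its put()/get() functions for the trigger port follows. Underscores replace the `$` separators, the argument type and the scheduler hook GALSC_eventqueue_insert() are assumed for illustration, and error handling is elided.

    #include <stdint.h>

    #define TRIGGER_LEN 64                     /* length n from the app file */

    static uint16_t trigger_arg0[TRIGGER_LEN]; /* SenseActor$trigger$arg0[]  */
    static uint8_t  trigger_head;              /* SenseActor$trigger$head    */
    static uint8_t  trigger_count;             /* SenseActor$trigger$count   */

    void GALSC_eventqueue_insert(int port_id); /* hypothetical scheduler hook */

    /* Called via the mapping function of the connected output port. */
    int trigger_put(uint16_t arg0) {
        uint8_t tail;
        if (trigger_count >= TRIGGER_LEN) {
            return 0;                          /* queue full: drop the event */
        }
        tail = (uint8_t)((trigger_head + trigger_count) % TRIGGER_LEN);
        trigger_arg0[tail] = arg0;
        trigger_count++;
        GALSC_eventqueue_insert(0);            /* schedule actor activation  */
        return 1;
    }

    /* Called by the scheduler before invoking the linked method. */
    uint16_t trigger_get(void) {
        uint16_t arg0 = trigger_arg0[trigger_head];
        trigger_head = (uint8_t)((trigger_head + 1) % TRIGGER_LEN);
        trigger_count--;
        return arg0;
    }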

3.3.3 TinyGUYS

The compiler generates a pair of data structures and a pair of access functions for each TinyGUYS global variable declared in the application definition. The pair of data structures consists of a data storage location of the type specified in the actor definition that uses the global variable, along with a buffer for the storage location. The pair of access functions consists of a get() function that returns the value of the global variable, and a put() function that stores a new value for the variable in the variable's buffer. A generated flag indicates whether the scheduler needs to update the variables by copying data from their buffers. The mapping functions generated for the component connections to TinyGUYS parameters call these put() and get() functions. For the example in Figure 3.14, the galsC compiler generates a global variable named GALSC_params, along with a buffer named GALSC_params_buffer, for the parameter count. The code generator also creates the functions count$put() and count$get().
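A minimal C sketch of this generated pair for the count parameter is shown below (underscored names stand in for the `$`-separated identifiers; the flag and copying policy follow the description above).

    #include <stdint.h>

    static uint16_t GALSC_params_count;         /* storage for parameter count */
    static uint16_t GALSC_params_buffer_count;  /* buffered pending write      */
    static int      GALSC_params_modified;      /* scheduler copies if nonzero */

    /* count$get(): reads see one stable value for a whole iteration. */
    uint16_t count_get(void) {
        return GALSC_params_count;
    }

    /* count$put(): writes go to the buffer; the scheduler copies the
       buffer into the storage before the next actor iteration.      */
    void count_put(uint16_t value) {
        GALSC_params_buffer_count = value;
        GALSC_params_modified = 1;
    }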

3.3.4 System initialization and start of execution

The code generator creates a system-level initialization function called GALSC_sched_init(), which initializes the scheduler data structures. The code generator also connects the StdControl interfaces listed in the actorControl section of each actor to the Main component used in TinyOS to initialize the system, which performs all of the runtime initialization. The order of actors listed in the application definition determines the order in which the interfaces are connected. The code generator also creates an application start function called GALSC_sched_start(), which places initial tokens into the input port queues specified in the appstart section of the application definition. In the source code shown in Figure 3.4, SenseActor.trigger() is listed in the appstart section of the application definition; therefore, the GALSC_sched_start() function calls the SenseActor$trigger$put() function at the start of the system.

3.3.5 Scheduling

Execution of a TinyGALS system begins in the scheduler. There is a single scheduler in TinyGALS, which checks the global event queue for events. If the global event queue contains an event, the scheduler first copies buffered values into the actual storage for any modified TinyGUYS global variables. The scheduler then removes the token corresponding to the event from the appropriate actor input port and passes the value of the token to the component method linked to the input port. If the global event queue contains no events, the scheduler runs any posted TinyOS tasks. The algorithm loops until there are no more events or TinyOS tasks, at which point the system goes to sleep. Figure 3.15 shows the TinyGALS scheduling algorithm:

    if there is an event in the global event queue then {
        if any TinyGUYS have been modified {
            Copy buffered values into variables.
        } end if
        Get token corresponding to event out of input port.
        Pass value to the method linked to the input port.
    } else if there is a TinyOS task then {
        Take task out of task queue.
        Run task.
    } end if

Figure 3.15: TinyGALS scheduling algorithm.

The TinyGALS scheduler is a two-level scheduler: TinyGALS actors run at the highest priority, and TinyOS tasks run at the lowest priority. Note that the TinyOS scheduler is included as a subset of the TinyGALS scheduler for backwards compatibility with TinyOS tasks.

Both triggered actors in TinyGALS and tasks in TinyOS provide a method for deferring computation. In TinyOS, tasks must be short; lengthy operations should be spread across multiple tasks. However, TinyOS tasks are not explicitly defined in the interface of the component, so it is difficult for a developer wiring off-the-shelf components together to predict what non-interrupt-driven computations will run in the system. Additionally, since there is no communication between tasks, the only way to share data is through the internal state of a component, and the user must write synchronization code to ensure that there are no race conditions when multiple threads of execution access this data. The TinyGALS programming model removes the need for TinyOS tasks. TinyGALS actors, on the other hand, allow the developer to explicitly define "tasks" at the application level, which is a more natural way to write applications. The asynchronous and synchronous parts of the system are clearly separated to provide a well-defined model of computation, which leads to programs that are easier to debug.
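Rendered as C, the two-level loop of Figure 3.15 might look like the following sketch; every helper name here is hypothetical, standing in for generated scheduler internals.

    for (;;) {
        if (!GALSC_eventqueue_empty()) {
            if (GALSC_params_modified) {
                GALSC_copy_buffers();            /* update TinyGUYS storage */
            }
            int port = GALSC_eventqueue_next();  /* highest priority: actors */
            invoke_linked_method(port, port_get(port));
        } else if (!TOS_taskqueue_empty()) {
            TOS_run_next_task();                 /* lowest priority: tasks */
        } else {
            sleep_until_interrupt();             /* nothing left to do */
        }
    }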

The globally asynchronous nature of TinyGALS provides a way for tasks to communicate, and the developer has no need to write synchronization code when using TinyGUYS to share data between tasks.

3.3.6 Memory usage

TinyGALS provides an improved programming model in exchange for a minimal application-dependent increase in code size for scheduling and communication between actors. For a simple galsC photosensor application, the initialization and scheduling code is 662 bytes, compared to 564 bytes for the original nesC code. The TinyGALS communication framework is very lightweight: the get() and put() functions for a port with one argument of type uint8_t together use 208 bytes, and the get() and put() functions for a parameter of type uint16_t use 30 bytes. The scheduler event queue size is equal to the sum of the user-allocated sizes for each port connection (which depends on the size of the data type). Since event queues are generated as application-specific data structures, memory usage of a TinyGALS application is thus determined mainly by the user-specified queue sizes and the total number of ports in the system.
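To make this accounting concrete with an illustrative calculation that follows directly from the queue model above: a single input port with one argument of type uint16_t and a user-specified queue length of 64 (like the trigger port of Figure 3.14) occupies 64 x 2 = 128 bytes of token storage, plus the head pointer, the token counter, and 64 reserved entries in the scheduler event queue. The total data memory of an application is then simply the sum of such terms over all port connections, together with the fixed scheduler structures listed in Table 3.2.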

3.4 Example

To illustrate the effectiveness of the galsC language, consider a classical sensor network application that detects and monitors point-source targets. A set of sensor nodes (motes) are deployed in a 2-D field, as shown in Figure 3.16. The goal of the sensor network is to detect moving objects modeled as point signal sources and to report the detections to a central base station, located at the lower left corner of the field. Note that the goal here is to illustrate the language, rather than to develop sophisticated algorithms to solve the problem optimally.

The application primarily consists of two tasks: (1) exchanging local sensor readings to determine the "leader" responsible for reporting a detection, and (2) multi-hop forwarding of the report messages to the base station. To simplify the discussion, assume that the motes are deployed on a perturbed grid and that the motes know their locations on the grid and the grid size. Assume also that no mote has the global topology of the network. For simplicity, the leader election is achieved by having every mote periodically broadcast a packet containing the location of the mote and its sensor reading. These packets also serve as beacons to establish a multi-hop routing structure. The multi-hop routing is implemented as a routing tree rooted at the base station. All motes run the identical code, modulo their locations.

Figure 3.16: Sensor array for object detection and reporting.

Every message contains the hop count of the sender, which indicates the level of the sender in the routing tree; for example, the mote directly connected to the base station has hop count 0. Whenever it broadcasts a message, every node that can overhear the message notes that it is probably one hop away from the base station. The reachable nodes of a wireless broadcast may have a complicated shape, as illustrated by the dashed line in Figure 3.16. To compensate for the unreliable and sometimes asymmetric wireless communication links, a mote maintains a list of senders it has heard in the past T seconds and chooses the most reliable one (measured by, for example, a trade-off between low hop count and message repeatability) as its parent node. It then calculates its own hop count from its parent's hop count. Thus, a mote finds out its parent in the tree by eavesdropping on other messages, which include both sensor reading broadcasts and forwarded report messages.

Figure 3.17 shows a high-level view of the galsC implementation of the object detection application. Two types of event sources drive the execution of a mote—clock interrupts and received messages. Similar to the example from the beginning of the chapter in Figure 3.1, the TimerActor handles clock interrupts and updates the latest timer count in a parameter named timeCount. Every half second, TimerActor emits a token that triggers the SenseAndSend actor.
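A hypothetical C sketch of the beacon payload and the parent-selection rule described above follows; the field names, the scoring weights, and the helper function are invented for illustration and are not taken from the actual application.

    #include <stdint.h>

    typedef struct {
        uint8_t  x, y;        /* sender's grid location             */
        uint16_t reading;     /* sender's latest sensor reading     */
        uint8_t  hopCount;    /* sender's level in the routing tree */
    } beacon_t;

    typedef struct {
        uint16_t id;
        uint8_t  hopCount;
        uint8_t  heard;       /* messages overheard in the last T seconds */
    } sender_t;

    /* Pick a parent: trade off low hop count against repeatability. */
    uint16_t choose_parent(const sender_t *s, int n) {
        int i, best = 0, best_score = -32768;
        for (i = 0; i < n; i++) {
            int score = 4 * s[i].heard - 16 * s[i].hopCount; /* illustrative weights */
            if (score > best_score) { best_score = score; best = i; }
        }
        return s[best].id;
    }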

The MessageReceiver actor receives messages from the radio and chooses an action based on the message type:

• If the message is a local broadcast, the actor updates the neighborReadings table.(17) Note that since only the latest neighbor sensor reading matters, the overriding semantics of TinyGUYS variables is a natural fit.

• Also for each broadcast message, the actor updates an internal routing table by looking at the repetition frequency of the sender node. Note that it requires the timeCount value to determine the rate of the messages heard. Whenever there is a change of the desired parent node, it updates the parentNode and hopCount parameters, and thus this node's hop count.

• If the message is a forwarding message, the actor sends the content of the message to the downstream MessageForwarder actor.

The SenseAndSend actor activates the ADC (analog-to-digital converter) to get a sensor reading. Once the sensor reading is available, the actor queues a local broadcast of the sensor reading. The actor also compares its own reading with the latest values from its neighbors. If this mote has the highest sensor reading (i.e., it is closest to the signal source), SenseAndSend generates a report message and queues it with the MessageForwarder actor. The MessageForwarder actor also takes the parentNode ID as part of its input token, merged with the requests from SenseAndSend and MessageReceiver. Both the LocalBroadcast actor and the MessageForwarder actor send out packets with this mote's hopCount so that other motes can use it to build the multi-hop routing tree.

  (17) Here, the neighbors are defined as the motes directly above, below, left, and right of this mote in the grid.
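The dispatch performed by MessageReceiver in the list above can be pictured with a small C sketch; the message-type tags and handler names are invented for illustration only.

    typedef struct { int type; /* ... payload ... */ } msg_t;

    enum { MSG_LOCAL_BROADCAST, MSG_FORWARD };   /* invented tags */

    void update_neighbor_readings(const msg_t *m);  /* latest value wins     */
    void update_routing_table(const msg_t *m);      /* parentNode / hopCount */
    void queue_for_forwarding(const msg_t *m);      /* MessageForwarder port */

    void MessageReceiver_receive(const msg_t *m) {
        switch (m->type) {
        case MSG_LOCAL_BROADCAST:
            update_neighbor_readings(m);  /* TinyGUYS overriding semantics */
            update_routing_table(m);      /* may change the chosen parent  */
            break;
        case MSG_FORWARD:
            queue_for_forwarding(m);      /* relay toward the base station */
            break;
        }
    }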

Figure 3.17: Top-level, per-node view of the object detection application.

3.5 Summary

This chapter described the TinyGALS programming model for event-driven embedded systems such as sensor networks, and the galsC programming language that implements the programming model. At the local level, software components are linked via synchronous method calls to form actors. At the global level, actors communicate with each other asynchronously via message passing, which separates the flow of control between actors. The globally asynchronous, locally synchronous model allows developers to use high-level constructs such as ports and parameters to create thread-safe, multitasking programs based on the actor model. A complementary model called TinyGUYS is a guarded yet synchronous model designed to allow thread-safe sharing of global state between actors via parameters, without explicitly passing messages. Having a well-structured concurrency model at the application level greatly reduces the risk of concurrency errors, such as deadlock and race conditions.

This chapter also described a type system for checking connections across synchronous and asynchronous communication boundaries, as well as checking for possible race conditions. The galsC compiler automatically generates communication and scheduling code for programs specified in the galsC language, which allows developers to avoid writing error-prone task synchronization code. The galsC compiler extends the nesC compiler, which allows galsC to have traditional type checking, dead code elimination, and function inlining. The language and compiler are implemented for the Berkeley motes and extend TinyOS/nesC by providing a higher programming abstraction level than the TinyOS primitives.

Chapter 4

Viptos

In The Mythical Man Month [17], Frederick P. Brooks, Jr. writes about requirements refinement and rapid prototyping:

    The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is so difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.

    For the truth is, the clients do not know what they want. They usually do not know what questions must be answered, and they almost never have thought of the problem in the detail that must be specified. Even the simple answer—"Make the new software system work like our old manual information-processing system"—is in fact too simple. Clients never want exactly that. Complex software systems are, moreover, things that act, that move, that work. The dynamics of that action are hard to imagine. So in planning any software activity, it is necessary to allow for an extensive iteration between the client and the designer as part of the system definition.

    Therefore the most important function that software builders do for their clients is the iterative extraction and refinement of the product requirements.

Brooks later quotes Harel, author of STATEMATE [42], in the twentieth-anniversary edition of The Mythical Man Month [17]:

    Harel argues strongly that much of the conceptual construct of software is inherently topological in nature and these relationships have natural counterparts in spatial/graphical representations: Using appropriate visual formalisms can have a spectacular effect on engineers and programmers. Moreover, this effect is not limited to mere accidental issues; the quality and expedition of their very thinking was found to be improved. Successful system development in the future will revolve around

    visual representations. We will first conceptualize, using the "proper" entities and relationships, and then formulate and reformulate our conceptions as a series of increasingly more comprehensive models represented in an appropriate combination of visual languages. A combination it must be, since system models have several facets, each of which conjures up different kinds of mental images.

As discussed in Chapter 1, most existing tools for wireless sensor networks focus on either design, simulation, or deployment. None of these allow extensive iteration between design and implementation. To address these problems, this chapter presents Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, a graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for homogeneous TinyOS networks.

TinyOS was chosen because of its large and active user base in the wireless sensor network community, and because its event-driven execution model ties in well with an actor-oriented approach. As discussed in Chapter 1, a TinyOS program consists of a graph of mostly pre-existing components that are written in an object-oriented style using nesC [32], an extension to the C programming language. Although a large community uses TinyOS in simulation to develop and test various algorithms and protocols, developers face some key limitations when using the nesC/TinyOS/TOSSIM programming toolsuite. Users must write their programs in a multi-file, text-based format, even though a graphical block diagram programming environment would be much more intuitive. TinyOS application developers can use TOSSIM [65], a TinyOS simulator for the PC that can execute nesC programs designed for a mote. TOSSIM contains a discrete-event simulation engine, which allows modeling of various hardware and other interrupt events, and it can efficiently model large homogeneous networks where the same nesC code is run on every simulated node; however, it does not allow simulation of networks that contain different programs. Users may choose from a few built-in radio connectivity models in TOSSIM, but it is difficult to use other models. Similar barriers to integrated design and deployment exist for other popular wireless sensor network development platforms.

Now consider VisualSense [8], a Ptolemy II-based graphical modeling and simulation framework for wireless sensor networks that supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. VisualSense, however, does not provide a mechanism for transitioning from a sensor network application developed within the framework to an implementation for real hardware

without rewriting the code from scratch for the target platform. VisualSense mainly provides an abstract, mathematically-based modeling environment, and node models must be created from scratch. Integrating TinyOS and VisualSense combines the best of both worlds: TinyOS provides a platform that works on real hardware with a library of components that implement low-level routines, and VisualSense provides a graphical modeling environment that supports hierarchical, heterogeneous systems. The result, Viptos, allows networked embedded systems developers to construct block and arrow diagrams to create TinyOS programs from any standard library of TinyOS components written in nesC. Viptos automatically transforms the diagram into a nesC program that can be compiled and downloaded from within the graphical environment onto any TinyOS-supported target platform. Viptos also includes the full capabilities of VisualSense, including modeling of communication channels, networks, and non-TinyOS nodes. It presents a major improvement over VisualSense by allowing developers to refine high-level wireless sensor network simulations down to real-code simulation and deployment, and it adds much-needed capabilities to TOSSIM by allowing simulation of heterogeneous networks. Viptos provides a bridge between Ptolemy II and TOSSIM by providing interrupt-level simulation of actual TinyOS programs, with packet-level simulation of the network, while allowing the developer to use other models of computation available in Ptolemy II for modeling the physical environment and other parts of the system. This framework allows application developers to easily transition between high-level design and simulation of algorithms to low-level implementation, simulation, and deployment.

The work presented in this chapter has three main contributions. First, it addresses a need for a unified wireless sensor network development environment that allows abstract modeling and refinement to low-level simulation and deployment. Second, it provides insights into the integration of the semantics of two different simulation systems, with different representations of software components, programming languages, type systems, and schedulers. Third, it shows through evaluation that the implementation of the combined system is linearly scalable in the number of nodes, and even without aggressive performance tuning, can simulate moderately large, heterogeneous sensor networks effectively.

Section 4.1 describes the architecture of the integrated TinyOS and Ptolemy II toolchain and investigates the semantics of this interface. Section 4.2 evaluates the performance of Viptos. Section 4.3 summarizes this chapter. Related work is presented separately, in Chapter 6 (Section 6.2).


4.1 Design

Viptos provides an integrated toolchain for designing, simulating, and deploying sensor network applications by integrating the programming and execution models and the component libraries of two systems: Ptolemy II/VisualSense and TinyOS/TOSSIM. This section describes the architecture of this integrated system in detail, including the representation of nesC components, the transformation of the nesC components into this representation, the generation of deployment and simulation code for TinyOS programs developed in Viptos, and the simulation of sensor network models that include nodes running TinyOS.

4.1.1 Representation of nesC components

Let us review the basics of the nesC programming language used in TinyOS. A nesC component exposes a set of interfaces. An interface consists of a set of methods. A method is known as either a command or an event. A nesC component implements its provides methods and expects other components to implement its uses methods. A nesC component is either a configuration that contains a wiring of other components, or a module that contains an implementation of its interface methods. A TinyOS program consists of a set of nesC components, where the top-level file that describes the application is a nesC component that exposes no interface methods. Figure 4.1(a) shows a TinyOS program called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. SenseToLeds contains a wiring of the components Main, SenseToInt (whose source code is shown in Figure 4.1(b)), IntToLeds, TimerC, and DemoSensorC. These components are just a few of the nesC components that are available in the TinyOS component library. NesC interfaces can also be parameterized to provide multiple instances of the same interface in a single component. In Figure 4.1(a), the TimerC.Timer interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC. If another component connects to the TimerC.Timer interface, it connects to a different instance. Each timer can be initialized with different periods. In Ptolemy II, basic executable code blocks are called actors and may contain input and/or output ports. A port may be a simple port that allows only a single connection, or it may be a multiport that allows multiple connections. Fan-in to, or fan-out from, simple ports may be achieved by placing a relation in the path of the connection. A code block is stored in a class, and an actor is an instance of the class.


configuration SenseToLeds {
}
implementation {
  components Main, SenseToInt, IntToLeds, TimerC, DemoSensorC as Sensor;
  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

(a)

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
}
implementation {
  ...
}

(b)


Figure 4.1: Sample nesC source code: (a) the SenseToLeds configuration; (b) the SenseToInt module.

Table 4.1: Representation scheme for nesC components in Viptos.

  NesC construct                           Ptolemy II construct   Ptolemy II graphical icon
  component                                class                  block
  uses interface                           output port            outward pointing triangle
  provides interface                       input port             inward pointing triangle
  non-parameterized interface              simple port            black triangle
  single-index parameterized interface(1)  multiport              white triangle
  fan-in or fan-out                        relation               black diamond

Viptos uses the representation scheme shown in Table 4.1 for the various parts of nesC components. Figure 4.2(c) shows a graphical representation in Viptos of the equivalent wiring diagram for the SenseToLeds configuration shown in Figure 4.1(a). Relations are represented by diamond-shaped icons. Note that the TimerC component in Figure 4.2(c) provides a parameterized interface, or input multiport, as indicated by the white triangle pointing into the block. Non-parameterized interfaces, or simple ports, are represented by black triangles. Viptos can serve as a program design and editing environment—users design programs by manipulating the Ptolemy II graphical icons on the screen, then generate code using the automatic process described later in Sections 4.1.3 and 4.1.4.
(1) Although multiple-index parameterized interfaces are allowed in nesC, Viptos does not support them, since they are not used in practice and do not appear in any existing components in the TinyOS component library.

4.1.2 Transformation of nesC components

As the implementation for representing nesC components, Viptos uses MoML (Modeling Markup Language) [61], an XML-based language used in Ptolemy II to specify interconnections of parameterized, hierarchical components. As discussed previously, a nesC component is either a subcomponent of an application if it exposes interface methods, or a top-level application if it does not. Viptos treats subcomponents and top-level applications differently when transforming nesC files into MoML.

For nesC subcomponents, Viptos provides a tool called nc2moml. The nc2moml tool harvests TinyOS nesC component files and converts them into MoML class files. The initial version of nc2moml was a modification of the source code of the nesC 1.1 compiler. The current version of nc2moml uses the XML output feature of the nesC 1.2 compiler, which decouples nc2moml from nesC compiler version updates; Viptos uses the NDReader Java class provided in the nesC 1.2 compiler distribution to parse nesC XML output and create nesC-specific data structures. Both versions of nc2moml generate MoML syntax that specifies the name of the component, as well as the name and input/output direction of each port, and whether they are multiports. Figure 4.3 shows the generated MoML code for the TimerC component referenced in Figure 4.1(a). Viptos uses the resulting MoML files to display TinyOS components as a library of graphical blocks. The user may drag and drop components from the library onto the workspace and create connections between component interfaces by clicking and dragging between ports. Figure 4.2(c) shows a TinyOS program created graphically using components from the converted library.

For nesC top-level applications, Viptos provides a tool called ncapp2moml. The ncapp2moml tool harvests TinyOS nesC application files and converts them into Viptos MoML model files. Unlike the TinyOS component files examined by nc2moml, TinyOS application files in nesC do not have interfaces. The ncapp2moml tool uses information about the nesC wiring graph and the referenced interfaces in the XML output from the nesC 1.2 compiler to generate MoML syntax that specifies a model containing the class corresponding to each nesC component used, the relations required at each port, and the links between the ports and relations, such that the connections in the model correspond to the connections between interfaces in the nesC file. Figure 4.4 shows an example of a portion of the MoML code generated from the SenseToLeds.nc file shown in Figure 4.1(a). ncapp2moml can also automatically embed the converted TinyOS application into a template model containing a representation of the hardware interface of the node and, optionally, a default physical environment.

Figure 4.2: SenseToLeds application in Viptos.

The nc2moml and ncapp2moml tools use JDOM 1.0 to construct and generate XML output. Viptos does not use XSLT (Extensible Stylesheet Language Transformations), because the generated MoML files are not complex.

<?xml version="1.0"?>
<!DOCTYPE plot PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<class name="TimerC" extends="ptolemy.domains.ptinyos.lib.NCComponent">
  <property name="source" value="$CLASSPATH/tos/system/TimerC.nc" />
  <property name="_displayedName" class="..." value="TimerC" />
  <port name="StdControl" class="ptolemy.actor.IOPort">
    <property name="input" />
    <property name="_showName" class="..." />
  </port>
  <port name="Timer" class="ptolemy.actor.IOPort">
    <property name="input" />
    <property name="multiport" />
    <property name="_showName" class="..." />
  </port>
</class>

Figure 4.3: Generated MoML by nc2moml for TimerC.

4.1.3 Generation of code for target deployment

When a user compiles a TinyOS program for an actual sensor node, the nesC compiler automatically searches the TinyOS component library paths for included components, including directories containing the components that encapsulate the hardware components specific to the target platform. The nesC compiler generates a pre-processed C file, which it can send to a cross compiler for the target hardware.

Viptos can transform a model of a TinyOS program (as in Figure 4.2(c)) into a nesC file. Note that this is the opposite of ncapp2moml, which means that it is possible to convert back and forth between Viptos models and nesC files. Viptos performs this transformation by means of a director called PtinyOS Director, which controls code generation, simulation, and deployment to target hardware for a single node. A user can configure the PtinyOS Director (Figure 4.2(d)) to compile the generated nesC code to any target supported by the TinyOS make system, including cross-compilation to target hardware, or TOSSIM for external simulation. The user can also download code to the target hardware from the Viptos interface.

<entity name="MicaCompositeActor" class="ptolemy.domains.ptinyos.lib.MicaCompositeActor">
  ...
  <entity name="DemoSensorC" class="tos.sensorboards.micasb.DemoSensorC" />
  <entity name="TimerC" class="tos.system.TimerC" />
  <entity name="Main" class="tos.system.Main" />
  <entity name="SenseToInt" class="tos.lib.Counters.SenseToInt" />
  <entity name="IntToLeds" class="tos.lib.Counters.IntToLeds" />
  <relation name="relation1" class="ptolemy.actor.IORelation" />
  <relation name="relation2" class="ptolemy.actor.IORelation" />
  <relation name="relation3" class="ptolemy.actor.IORelation" />
  <relation name="relation4" class="ptolemy.actor.IORelation" />
  <relation name="relation5" class="ptolemy.actor.IORelation" />
  ...
  <link relation="relation1" port="Main.StdControl" />
  <link port="SenseToInt.StdControl" relation="relation2" />
  <link relation1="relation2" relation2="relation1" />
  <link port="IntToLeds.StdControl" relation="relation3" />
  <link relation1="relation3" relation2="relation1" />
  <link relation="relation4" port="SenseToInt.Timer" />
  <link port="TimerC.Timer" relation="relation5" />
  <link relation1="relation5" relation2="relation4" />
  ...
</entity>

Figure 4.4: Generated MoML by ncapp2moml for SenseToLeds.nc.

Running the model in Figure 4.2(c) causes the PtinyOS Director to generate a nesC component file for SenseToLeds equivalent to that shown in Figure 4.1(a). The director also generates a makefile that includes all of the paths necessary for compilation to target hardware.

4.1.4 Generation of code for simulation

When a user compiles a TinyOS program for simulation with TOSSIM, the nesC compiler follows the procedure described in the previous section, but with the TinyOS scheduler and device drivers replaced with TOSSIM code. Thus, the TOSSIM executable image depends on the particular TinyOS program specified by the user. Running the model in Figure 4.2(b) causes the PtinyOS Director to generate a nesC file and a makefile. If the user specified the ptII simulation target as the target compilation platform, the PtinyOS Director then compiles the nesC file against a custom version of TOSSIM, which Viptos uses internally and which users can also run externally, to create a shared library.

The Viptos simulation environment provides more capabilities than TOSSIM alone. In addition to simulating wireless sensor node(s) running TinyOS, Viptos users can model and simulate the physical environment, radio channels, and other wireless nodes, including non-TinyOS nodes, microservers, servers, and wired subsystems. The user can take advantage of the hierarchical, heterogeneous nature of Ptolemy II to create detailed models of physical phenomena such as light, temperature, and sound, as well as models of entities such as buildings and other nodes. Developers may choose from diverse models of computation, such as continuous-time, synchronous/reactive, time-triggered, dataflow, and Kahn process networks. A common actor-oriented programming and execution model unifies these modeling capabilities. Users may also interface to live data through Ptolemy II library blocks, such as those that interface with the microphone or the IP (Internet Protocol) network.

As a template for modeling a real wireless sensor node, Viptos provides a model of the hardware interface of a Mica mote with sensor board. This hardware representation includes ports for the ADC (analog-to-digital converter) channels connected to sensors that include a thermistor, photoresistor, microphone, magnetometer, and accelerometer, as well as ports for the LEDs and radio communication. Figure 4.2(b) shows this representation graphically, and Figure 4.2(a) shows a basic example with models of a light source and a sensor node.

The PtinyOS Director also generates a Java wrapper to load the shared library

into Viptos so that the PtinyOS Director can run the shared library via JNI (Java Native Interface) method calls.

4.1.5 Simulation of TinyOS in Viptos

This section explains how Viptos simulates TinyOS programs and discusses the integration of the TOSSIM and Ptolemy II frameworks in terms of scheduling, type system, radio and I/O, and support for multiple nodes and multi-hop routing.

Scheduling

Let us review the basics of the TinyOS scheduling model. In TinyOS, there is a single thread of control managed by the scheduler. NesC component methods encapsulate hardware interrupt handlers. Methods may transfer the flow of control to another component by calling a uses method. Computation performed in a sequence of method calls must be short, or it may block the processing of other events. A long-running computation can be encapsulated in a task, which a method posts to the scheduler task queue. The TinyOS scheduler processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks are atomic with respect to other tasks and do not preempt other tasks, though they may be interrupted by hardware events. To avoid duplicate functionality, Viptos relies on the nesC compiler to do a complete analysis of the connected nesC interface methods at the TinyOS level to detect incorrect usage of commands or events marked with the async keyword, and hence possible race conditions.

TOSSIM is a discrete-event simulator for TinyOS. Its scheduler contains a task queue similar to the regular TinyOS scheduler, as well as an ordered event queue. An event in this queue has a time stamp implemented as a long long in C (a 64-bit integer on most systems); the smallest time resolution is equal to 1/(4 MHz), the original CPU clock period of the Rene/Mica motes. In TOSSIM, all components call the queue_insert_event() function to insert new events into the event queue. Upon initialization, TOSSIM inserts a boot-up event into the event queue. The TOSSIM scheduler begins its main loop by processing all tasks in the task queue in FIFO order. If there is an event in the event queue, the TOSSIM scheduler updates the simulated system time with the time stamp of the new event and then processes the event. The processing of an event may cause new tasks to be posted to the task queue and new events to be created with time stamps possibly equal to the current time stamp. Figure 4.5 summarizes the scheduling algorithm:

    while (true) {
        while there are TinyOS tasks {
            Process them.
        } end while
        if the event queue is not empty {
            Set the TOSSIM time to the time of next event.
            Handle the event.
        } end if
    } end while

Figure 4.5: TOSSIM scheduling algorithm.

Viptos uses a specialization of the discrete-event (DE) domain of Ptolemy II [15] created for modeling wireless systems in VisualSense. The DE domain provides execution semantics where interactions between components occur via events with time stamps, and it uses a sophisticated calendar-queue scheduler to efficiently process events in chronological order. Formal semantics ensure determinate execution of deterministic models [59], although the DE domain also supports stochastic models for Monte Carlo simulation. The precision in the semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling frameworks.

In Viptos, a node model contains an instance of PtinyOS Director, which compiles and loads a custom copy of TOSSIM that simulates the code for a single node. At the top level of a model, the specialized DE director may control one or more node models. Viptos uses the same event time stamps as TOSSIM, and it controls the execution of TOSSIM by using customized TOSSIM scheduler and device driver functions that notify Viptos of all TOSSIM events. In particular, Viptos uses a modified TOSSIM queue_insert_event() function that also makes a JNI call to insert an event with the TOSSIM time stamp into the event queue of the Ptolemy II discrete-event scheduler (DE director) that controls the PtinyOS Director.(2) At each event time stamp, Viptos calls the custom TOSSIM scheduler to process the event. The main loop of the custom scheduler updates the TOSSIM system time, processes an event in the TOSSIM event queue, and then processes all tasks in the task queue. If the TOSSIM event queue contains another event with the current TOSSIM system time, the scheduler processes that event along with any tasks that may have been generated; this last step is repeated until there are no other events with the current TOSSIM system time. Figure 4.6 summarizes the algorithm:

    do {
        if the event queue of this instance of TOSSIM is not empty {
            Set the TOSSIM time to the time of next event.
            Handle the event.
        } end if
        while there are TinyOS tasks {
            Process them.
        } end while
    } while (the event queue is not empty and the time of the next
             event is the same as the current TOSSIM time)

Figure 4.6: Viptos version of TOSSIM scheduling algorithm.

Note that the order in the main loop of the custom TOSSIM scheduler is opposite that of the original TOSSIM, which processes all tasks before updating the TOSSIM system time and processing an event in the TOSSIM event queue. This change is required in order to guarantee causal execution in Viptos, since tasks may generate events with the current TOSSIM time stamp; otherwise, new events may have a time stamp that is before the current Ptolemy II system time.

Viptos supports models with dynamically changing interconnection topologies and treats changes in connectivity as mutations of the model structure, e.g., by adding, deleting, or moving actors, or changing the connectivity between actors. The software is carefully architected to support multithreaded access to this mutation capability: one thread can be executing a simulation of the model while another changes the structure of the model. The results are predictable and consistent.

  (2) The JNI call uses fireAt() with the TOSSIM system time as the argument.
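The coupling between the two schedulers can be sketched in C as follows. queue_insert_event() is TOSSIM's real insertion function, but the JNI plumbing shown here—the cached JNIEnv pointer, the wrapper object, and the "fireAt" method name and signature—is illustrative only, and error checking is elided; the TOSSIM event types are as in its source.

    #include <jni.h>

    /* Cached when the PtinyOS Director loads this node's shared library
       (both names are hypothetical, not the actual Viptos identifiers). */
    static JNIEnv *env;
    static jobject wrapper;

    /* Modified TOSSIM hook (sketch): after inserting the event into
       TOSSIM's own queue, notify the Ptolemy II DE director so that it
       fires this node's PtinyOS Director at the same time stamp.      */
    void queue_insert_event(event_queue_t *q, event_t *ev) {
        queue_insert(q, ev);                      /* original behavior */

        jclass cls = (*env)->GetObjectClass(env, wrapper);
        jmethodID fireAt = (*env)->GetMethodID(env, cls, "fireAt", "(J)V");
        (*env)->CallVoidMethod(env, wrapper, fireAt, (jlong) ev->time);
    }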

Type system

NesC components in TinyOS and TOSSIM use the type system provided by the C programming language. Ptolemy II provides its own type system, in which actors, parameters, and ports may all impose constraints on types, so that static type analysis can be performed; a type resolution algorithm identifies the most specific types that satisfy all the constraints. Communication between actors in Ptolemy II occurs through typed tokens. Viptos composes these two type systems, the C type system and the Ptolemy II type system. A special Java base class created for Viptos, called TypeOpaqueCompositeActor, allows a Ptolemy II actor's ports to have types, but does not require that the actors inside use the Ptolemy II type system. This facilitates the embedding of a different type system within Ptolemy II. A Viptos submodel containing nesC components uses a subclass of this base class, called PtinyOSCompositeActor, so that the components can use the C type system.

Viptos performs automatic type conversion between the two type systems during simulation. TinyOS and TOSSIM use arbitrary data types to represent values with different bit widths; the types provided by C, however, usually do not match the actual data types of the hardware interface. Since the data communicated between TOSSIM and Ptolemy II only involve a mote's hardware interface—the ADC channels, the LEDs, and the packets sent and received over the radio—Viptos can limit type conversion to the data types required by this interface. Viptos uses JNI functions in the custom copy of TOSSIM to automatically convert between the C types used in TOSSIM and the token types used in Ptolemy II.

The ADC channels of a mote use 10-bit unsigned values. TOSSIM represents an ADC value with an unsigned short integer masked for 10-bit usage, whereas sensor data modeled in Ptolemy II typically use tokens with values of type double. When TOSSIM requests an ADC value, Viptos automatically performs the lossy conversion from a double-valued token in Ptolemy II to a masked unsigned short integer value in TOSSIM. Although LED state is binary, TOSSIM represents an LED value with a char. When TOSSIM updates the state of the LEDs, Viptos automatically converts the char in TOSSIM into a boolean-valued token in Ptolemy II, which Viptos uses to change the animation state of the simulated LEDs. TinyOS packets are represented by a C data structure containing a char array. In order to maintain a standard endian format and enable easy parsing of packets, Viptos represents TinyOS packets using Ptolemy II string tokens, and it automatically converts between the TOSSIM char array representation and the Ptolemy II string token representation whenever a node transmits or receives a packet.
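The conversions described above are narrow enough to write down directly. A hedged C sketch, with invented function names, is:

    /* Ptolemy II double token -> TOSSIM 10-bit ADC value (lossy). */
    unsigned short to_tossim_adc(double token) {
        return (unsigned short) ((unsigned short) token & 0x3ff);
    }

    /* TOSSIM LED state (char) -> Ptolemy II boolean token value. */
    int to_ptolemy_led(char led) {
        return led != 0;   /* nonzero means the LED is lit */
    }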

Radio and I/O

TOSSIM has built-in models for per-node ADC values and for radio connectivity between multiple nodes, as well as an interface for manually setting the per-node and per-link values and probabilities. In Viptos and VisualSense, the algorithm for determining radio connectivity is itself encapsulated in a component as a channel model, and hence can be developed by the model builder. Both Viptos and VisualSense provide several built-in models, including AtomicWirelessChannel, DelayChannel, LimitedRangeChannel, ErasureChannel, and PowerLossChannel (see the left-hand pane of Figure 4.7(a)). Both tools can determine connectivity on the basis of the physical locations of the components. Viptos overrides the built-in ADC and radio models and LED device drivers in TOSSIM so that they send data to, and receive data from, the ports of the node model. This allows the simulated node to interact with user-created models, such as sources of light (e.g., Figure 4.2(e)), temperature gradients, radio channels, and other nodes.

In the DE domain of Ptolemy II, tokens received at the input port of an actor cause the actor to fire at the time of the token time stamp. The actor usually consumes the token, at which point the port becomes empty. In Viptos, the node model may receive tokens at the ADC ports that represent new values. To reconcile the difference in timing between when the simulated environment makes a new ADC value available and when the simulated node reads its ADC ports, Viptos uses a Ptolemy II PortParameter instead of a Port for the ADC ports. This usage of PortParameter makes the port value persistent between updates, such that when the TinyOS program requests data from the ADC port, the program gets the value of the most recently received token. Figure 4.2(a) shows an example containing a model of a light source and a node running the SenseToLeds TinyOS program; Viptos transmits light source data to the sensor node by means of a photo port (Figure 4.2(b)) associated with a LimitedRangeChannel named PhotoChannel (Figure 4.2(a)).

Multiple nodes and multi-hop routing

TOSSIM simulates one or more nodes with the same TinyOS program by maintaining a copy of the state of each component for each simulated node. The nesC compiler has built-in support for generating arrays to store these copies, so that users do not need to modify the TinyOS program source code when compiling for TOSSIM. Viptos simultaneously simulates multiple nodes with possibly different programs by embedding multiple node models, with each TinyOS node containing a different PtinyOS Director, into the Wireless domain (the specialized DE domain). To prevent namespace collision between different simulated TinyOS programs, Viptos separately compiles and loads a shared library for each node. Viptos performs this by passing a unique name for each node to the nesC compiler, which the compiler then inserts into the TOSSIM source code by means of macros. Since Viptos models have

a global discrete-event scheduler, all nodes operate on the same time reference. Figure 4.7 shows an example model containing two nodes that communicate over a lossless radio channel (AtomicWirelessChannel) with full connectivity. The node on the left contains the CntToLedsAndRfm TinyOS program, which maintains a counter on a 4 Hz timer, displays the counter value on the LEDs, and sends it over the radio in a TinyOS packet. The node on the right contains the RfmToLeds TinyOS program, which listens for radio packets and displays any received counter values on the LEDs. A user can easily replace the radio channel model by deleting it and dragging in a different channel model from the menu in the left-hand pane.

Though the application shown in Figure 4.7 uses broadcast, Viptos also supports multi-hop routing. Viptos accomplishes this by passing a node ID to the nesC compiler for each custom copy of TOSSIM. The modified TOSSIM code uses this node ID where it would normally be used in TinyOS, instead of using the default TOSSIM value of the index of the array containing the state of the nodes. Viptos allows users to indicate globally the name of the base station in the PtinyOS Director configuration screen, as shown in Figure 4.2(d). Viptos includes a multi-hop routing demonstration that models a network with multiple TinyOS nodes running the Surge multi-hop routing protocol application, shown in Figure 4.8, where the base station is node 0.
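One way to picture the per-node namespace separation is with the following C preprocessor sketch. The actual macro scheme in the modified TOSSIM build is more involved; the macro names and the command-line option shown are invented for illustration.

    /* Illustrative only: a per-node prefix supplied on the nesC compiler
       command line (e.g., -DNODE_PREFIX=node0) renames each global so
       that separately loaded shared libraries do not collide.          */
    #define PASTE2(a, b) a##_##b
    #define PASTE(a, b)  PASTE2(a, b)
    #define PER_NODE(name) PASTE(NODE_PREFIX, name)

    #ifndef NODE_PREFIX
    #define NODE_PREFIX node0            /* fallback for this sketch */
    #endif

    int PER_NODE(tos_state);             /* expands to node0_tos_state */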

4.2 Performance Evaluation
This section evaluates the scalability of Viptos in terms of execution time as the number of nodes increases. It separately evaluates the execution time of applications without radio usage, and the execution time of applications with radio usage, in order to determine the scalability of communication within the framework. I collected timing information on an Intel Pentium M 760 processor (2.0 GHz, 2 MB L2 Cache, 533 MHz FSB) with 1024 MB of SDRAM, running Ubuntu 6.06 LTS (Dapper Drake) with Linux kernel 2.6.15-27-386. The tools I used included nesC 1.2.7a, gcc 3.4.3, TinyOS 1.x, and Sun Java VM 1.4.2_13-b06 with a heap size of 512 MB. In order to run large models, I increased the maximum number of open file descriptors allowed in the Bash shell from a default of 1024 to 20000 with the ulimit -n command. To eliminate timing variance due to random boot times, I set all nodes to boot at virtual time 0.0 seconds. I did not set the TOSSIM DBG environment variable, which affects which event debug messages get generated.


Figure 4.7: Send and receive application in Viptos.

Figure 4.8: Multi-hop routing in Viptos.

I sent all printed debug messages (on stdout or stderr) from all copies of TOSSIM to /dev/null, to eliminate timing variance from printing to the screen under X11.

This section does not present the timing overhead in Viptos for opening files; running the nesC, gcc, and Java compilers; or loading shared objects. This overhead scales linearly with the number of nodes, and is on the order of a few seconds for small models and several minutes for large models.

4.2.1 Comparison to TOSSIM

This section uses the SenseToLeds application to evaluate the scalability of Viptos as the number of nodes increases and to compare it to TOSSIM. To measure the overhead due to integrating TOSSIM with Ptolemy II, I eliminated the model of the environment in order to make a fair comparison to TOSSIM, since TOSSIM uses random ADC values by default.

For Viptos, I instrumented the PtinyOS Director with calls to the Java Date().getTime() and Runtime.getRuntime() methods to measure elapsed time while running the SenseToLeds application displayed in Figure 4.2. For modeling additional nodes, I copied and pasted existing nodes into the graph, saved the model, restarted Viptos, and took additional measurements. I started timing right before Viptos invoked the internal copy of TOSSIM. For models with multiple nodes, I used the timing information from the last node to start, since nodes must wait until Viptos invokes all internal copies of TOSSIM before simulation can proceed, because they all operate on the same time reference. To eliminate timing delay due to waiting for remaining threads to join, I stopped timing at the beginning of wrapup(), since thread joining is only necessary for running the model multiple times within a graphical environment. To reduce timing variance due to Java garbage collection, I instrumented Viptos to call System.gc() to perform garbage collection before starting the timing measurement. To eliminate timing variance due to the graphical interface, I disabled animation of the LEDs. For a given number of nodes, I collected multiple runs from the same instantiation of Viptos. I discarded the timing measurement for the first run in each experiment to eliminate timing delay due to loading of new Java classes, instantiation of Java objects, and caching.

For TOSSIM, I used the /usr/bin/time command to measure the execution time of the SenseToLeds application from the tinyos-1.x CVS tree. This does not include the overhead of running the nesC compiler and loading the TOSSIM shared object into memory. I discarded the timing measurement for the first run in each experiment to eliminate timing variance due to caching.
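The instrumentation pattern is straightforward. The following is a minimal, self-contained Java sketch of the measurement approach described above; it is illustrative only, not the actual PtinyOS Director code, and runSimulation() is a placeholder.

import java.util.Date;

public class TimingHarness {
    public static void main(String[] args) {
        // Request garbage collection first so that a collection triggered by
        // earlier allocations does not inflate the measured interval.
        System.gc();

        long start = new Date().getTime(); // milliseconds since the epoch
        runSimulation();                   // e.g., run the internal copy of TOSSIM
        long elapsed = new Date().getTime() - start;

        System.err.println("elapsed ms: " + elapsed);
        System.err.println("free heap bytes: " + Runtime.getRuntime().freeMemory());
    }

    private static void runSimulation() {
        // Placeholder for the code under measurement.
    }
}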

Figure 4.9 shows the average execution time of the SenseToLeds application with a virtual run time of 300.0 seconds for an increasing number of nodes. The figure shows that Viptos has more overhead when compared to TOSSIM, but that both simulators scale linearly in the number of nodes. Using a least squares linear regression, the results show that approximately 410 nodes can be simulated in 300.0 real seconds or less, which means that Viptos can simulate networks up to this size in real time. The exact number for any given application depends on the fidelity of simulation required and the complexity of the application. So, in exchange for slightly increased execution time, the user gains increased modeling and simulation capabilities and flexibility, and an interactive, graphical programming environment.

Figure 4.9: Execution time of the SenseToLeds application as a function of the number of nodes.
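The least squares estimate above can be reproduced with a few lines of code. The following Java sketch fits t = a*n + b to (node count, execution time) pairs and solves for the largest real-time network size; the measurement values in main() are hypothetical, not the data behind Figure 4.9.

public class LinearFit {
    /** Returns {slope, intercept} of the least squares line t = a*n + b. */
    static double[] fit(double[] n, double[] t) {
        int k = n.length;
        double sn = 0, st = 0, snn = 0, snt = 0;
        for (int i = 0; i < k; i++) {
            sn += n[i]; st += t[i]; snn += n[i] * n[i]; snt += n[i] * t[i];
        }
        double a = (k * snt - sn * st) / (k * snn - sn * sn);
        double b = (st - a * sn) / k;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        double[] nodes = { 50, 100, 200, 400 };  // hypothetical node counts
        double[] secs = { 40, 78, 152, 295 };    // hypothetical wall-clock times
        double[] ab = fit(nodes, secs);
        // Solve a*n + b = 300 for n: the largest network that finishes a
        // 300-second virtual run within 300 real seconds.
        System.out.println("max real-time nodes ~= " + (300 - ab[1]) / ab[0]);
    }
}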

4.2.2 Radio

This section evaluates the scalability of models that use the radio, using the same techniques described in the previous section. I created a model similar to that of the SendAndReceiveCnt application shown in Figure 4.7, with a varying number of senders and receivers. The model uses a lossless radio channel model with full connectivity. Senders send packets at 4 Hz. This analysis used a virtual run time of 120.0 seconds for all nodes.

The plot in Figure 4.10 shows the average execution time for this model. The plot shows that the main determinant of execution time is the total number of nodes; the number of senders versus receivers has no noticeable effect. The execution time of the model increases linearly with the number of nodes.

Figure 4.10: Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 120.0 virtual seconds.

4.3 Summary

This chapter described an extensible actor-oriented software framework for modeling sensor networks. This tool, called Viptos, builds upon Ptolemy II and TinyOS, and provides an integrated graphical design and simulation environment. Viptos allows users to easily transition from high-level, hierarchical, heterogeneous modeling to low-level implementation, simulation, and deployment. This chapter showed that Viptos simulator performance is scalable: execution time scales linearly as a function of the number of nodes, whether or not the radio is used, and even without aggressive performance tuning, Viptos can simulate moderately large sensor networks effectively.


Chapter 5

Metaprogramming for Wireless Sensor Networks

In The Mythical Man Month [17], Frederick P. Brooks, Jr. asserts that “radically better software robustness and productivity are to be had only by moving up a level, and making programs by the composition of modules, or objects.” Chapter 3 explained how to build wireless sensor node programs from pre-existing TinyOS/nesC components, using an actor-oriented framework called galsC. Chapter 4 explained how to build wireless sensor network applications graphically from pre-existing, actor-oriented components and pre-existing TinyOS/nesC components, using an actor-oriented framework called Viptos. This chapter explains how to programmatically specify the wireless sensor network application itself through a variety of techniques that combine higher-order actors or components with generative programming and metaprogramming.

5.1 Generative Programming and Metaprogramming

Generative programming and metaprogramming are very similar concepts. Like Sztipanovits and Karsai [93], I use the term “generative programming” in a broad sense: systems or components of systems are automatically generated from a specification written in one or more textual or graphical domain-specific languages [26]. According to Wikipedia [104], metaprogramming is the writing of computer programs that write or manipulate other programs (or themselves) as their data. The terms “generative programming” and “metaprogramming” are often used interchangeably. In this dissertation, however, I differentiate between them—a metaprogram does not necessarily generate
a new program or system, although it may accept other programs or systems as input.

In Actor-Oriented Metaprogramming by Neuendorffer [74], actor-oriented models are viewed as descriptions of concurrent software architectures. Neuendorffer describes a metaprogramming system that transforms actor-oriented models in Ptolemy II into self-contained Java code, where partial evaluation is used as a way to generate more efficient programs. He argues that partial evaluation generally requires less explicit specification by a programmer than other metaprogramming techniques. It is particularly effective in this use case, since a generic actor specification is specialized to a particular role in the model, and both the generic actor and specialized actor perform the same role and produce the same behavior.

The benefits of metaprogramming are best described by Brooks in The Mythical Man Month [17], where he discusses them in the context of using shrink-wrapped software packages as components:

The metaprogramming concept is not new, only resurgent and renamed. In the early 1960s, computer vendors and many big management information systems (MIS) shops had small groups of specialists who crafted whole application programming languages out of macros in assembly language... Now the chunks offered by the metaprogrammer are many times larger than those macros. The shrink-wrapped package provides a big module of function, with an elaborate but proper interface, and its internal conceptual structure does not have to be designed at all... Next-level application builders get richness of function, a shorter development time, a tested component, better documentation, and radically lower cost.

5.2 Higher-order Functions, Actors, and Components

Related to metaprogramming is the concept shared by higher-order functions, higher-order actors, and higher-order components. A higher-order function takes a function argument or produces a function result. According to Reekie [82] (emphasis mine):

Higher-order functions are one of the more powerful features of functional programming languages, as they can be used to capture patterns of computation, allowing the programmer to re-use these patterns without risk of error. This is one of the most persuasive arguments in favour of inclusion of higher-order functions in a programming language... [S]ome higher-order functions encapsulate common types of processes... Vector iterators are higher-order functions that apply a function across all elements of a vector; each of them captures a particular pattern of iteration... [O]ther higher-order functions capture common interconnection patterns, such as serial and parallel connection; yet others represent various linear, mesh, and tree-structured interconnection patterns.

Reekie then explains how the concept of higher-order functions can be applied to actors [82]:

...the map actor [in Visual Haskell] takes a function as its parameter, which it applies to each element of its input channel, and could therefore be called a higher-order actor... If f is known, an efficient implementation of map( f ) can be generated; if not, the system must support dynamic creation of functions since it will not have knowledge of f until run-time... Further work is required to explore forms of higher-order function mid-way between fully-static and fully-dynamic.

Lee and Parks [62] explain that “dataflow processes with state cover many of the commonly used higher-order functions in Haskell.” Reekie explains higher-order actors in Ptolemy Classic [82]:

Special blocks represent multiple invocations of a “replacement actor.” An actor of this kind mimics higher-order functions in functional languages... The Map actor, for example, is a generalised form of mapV [the vector iterator higher-order function]... Unlike mapV, Map can accept a replacement actor with arity > 1; in this case, the vector of input streams is divided into groups of the appropriate arity (and the number of invocations of the replacement actor reduced accordingly). Thus [the system] avoid[s] embedding unevaluated closures in streams.

Higher-order actors gain their power from a key restriction: “the replacement actor is specified by a parameter, not by an input stream.” At compile time, Map is replaced by the specified number of invocations of its replacement actor. The requirement that the number of invocations of an actor be known at compile-time ensures that static scheduling and code generation techniques will still be effective. For example, a code generator that produces a loop with an actor as its body, but with number of loop iterations unknown, could still execute very efficiently. The most basic use of icons in [the Ptolemy Classic] visual syntax may therefore be viewed as implementing a small set of built-in higher-order functions.

Like higher-order functions in Visual Haskell [82], higher-order components are the most powerful feature of these types of languages, since they capture patterns of instantiation and interconnection between components. Just as functions may serve as arguments to higher-order functions in functional programming languages, components may serve as parameters to higher-order components in composition languages, or languages for constructing networks of components [19]. In a higher-order composition language such as Ptalon [19], the parameters may be other systems. In effect, the structure of a system is effectively parameterizable. An interesting aspect of Ptalon is that it is, to quote Reekie, “mid-way between fully-static and fully-dynamic.” The next section investigates Ptalon in more detail.
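For readers more accustomed to mainstream languages than to Haskell or Ptolemy Classic, the essence of a higher-order function can be sketched in a few lines of Java (illustrative only):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class MapDemo {
    // A higher-order function: it takes the function f as an argument and
    // captures the iteration pattern once, so callers cannot get it wrong.
    static <A, B> List<B> map(Function<A, B> f, List<A> xs) {
        List<B> out = new ArrayList<>();
        for (A x : xs) {
            out.add(f.apply(x));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(map(x -> x * x, List.of(1, 2, 3))); // prints [1, 4, 9]
    }
}

A higher-order actor plays the same role at the architecture level: the replacement actor is the analogue of f, and the pattern of instantiation and interconnection is the analogue of the loop.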

5.3 Ptalon

Ptalon [18, 19] is a higher-order composition language for constructing higher-order components in Ptolemy II. In Ptolemy II, a higher-order component is called a PtalonActor. Ptalon is both a generative programming system and a metaprogramming language, since Ptalon automatically generates components from a specification written in a textual language, and Ptalon accepts components as arguments (inputs) to other components. Ptalon makes it easy to parameterize a component with the number and types of subcomponents that should be generated within the component. Cataldo proved mathematically that higher-order components can lead to succinct syntactic descriptions of large systems, which minimizes the amount of input a system designer must provide to create a new system, thus enabling a form of scalability in system design [19].

In Cataldo's original Ptalon implementation for Ptolemy II [19], components passed as parameters to these higher-order components are atomic actors (i.e., they are specified in Java, the underlying programming language of Ptolemy II). This original implementation assumes that models containing higher-order components are static; that is, arguments to a higher-order component cannot change once specified. I have improved the Ptalon system for evaluating parameters such that the values of Ptalon parameters can be changed at run-time. I have also improved the Ptolemy II implementation of Ptalon to allow composite actors in addition to atomic actors. Using the definitions presented in Section 5.1, an application developer can specify an actor not only with a Java file, but also with an XML file containing an arbitrary collection of actors. A developer can use Ptalon to easily generate sensor network applications and configurations; the specified subcomponents may be different types of wireless sensor nodes running various individual programs, with varying values for the nodes' range and location parameters. The following sections present an example that uses the improved version of Ptalon and explain the implementation of the parameter reconfiguration capabilities.

5.3.1 A simple example

Ptalon code is written in a simple declarative style. Figure 5.1 shows a sample Ptalon file that specifies a component containing n components of type RelayNode. The value of the local variable i is set by the for loop, whereas the value of the parameter n is specified externally. Ptalon uses the Ptolemy II expression

language to evaluate all values within double brackets ([[ ]]).

To use Ptalon within Ptolemy II, a user places a new PtalonActor in a Ptolemy II graph, as shown in Figure 5.2(a). The PtalonActor parameter configuration window initially shows a blank value for the ptalonCodeLocation parameter, as shown in Figure 5.2(b). Once the user sets this parameter to reference a Ptalon file, the PtalonActor then reconfigures its parameter configuration window to show the parameters declared in the Ptalon file, for which the user can then give values, as shown in Figure 5.2(d). Figure 5.2(a) shows a Ptolemy II model containing an instance of a PtalonActor called MultipleNodesMoML that references the Ptalon file in Figure 5.1. In this example, a user specifies the value of n as a parameter of the PtalonActor. Figure 5.2(c) shows the components generated inside the PtalonActor. In this example, each of the components is a node that is actually a composite actor containing other components.

The Ptalon compiler is implemented within Ptolemy II and is invoked as soon as the PtalonActor is set to reference a particular Ptalon file. The Ptalon compiler consists of multiple phases. In its initial phase, the Ptalon compiler parses the Ptalon file and creates an abstract syntax tree (AST). The first populator phase of the Ptalon compiler occurs next, in which the Ptalon compiler instantiates any entities that do not depend on unknown parameter values. The second populator phase of the Ptalon compiler begins only when the values of all parameters of the PtalonActor are known. The Ptalon compiler walks the AST and creates the remaining entities. The Ptalon compiler creates all entities as part of the PtalonActor submodel. Note that since the PtalonActor automatically populates itself with actors, a PtalonActor only needs to save its parameter values, and not its internal configuration. Figure 5.3 shows the XML code for the model shown in Figure 5.2(a).

I have implemented Ptalon-based versions of the SenseToLeds and SendAndReceiveCnt examples presented in Chapter 4, which allow a user to change a parameter to specify different numbers of TinyOS nodes. Ptalon can also be integrated with Viptos (see Chapter 4). An application developer can start with regular components that use pre-existing Ptolemy II domains, then refine and replace these components with a real code implementation that uses TinyOS. This allows simulation of abstract and concrete node and environment models with various parameters, and eventual validation against a real-world implementation.

MultipleNodesMoML is {
  actor node = ptolemy.domains.wireless.demo.SmallWorld.RelayNode;
  parameter n;
  for i initially [[ 1 ]] [[ i <= n ]] {
    node(
      range := [[ 40 + 10 * i ]],
      _location := [[ [100*i, 100*i] ]] );
  } next [[ i + 1 ]]
}

Figure 5.1: MultipleNodesMoML.ptln

5.3.2 Reconfiguration in Ptalon

In his dissertation [82], Reekie discusses actor parameters:

Execution of an actor proceeds in two distinct phases: i) instantiation of the actor with its parameters, and ii) execution of the actor on its stream arguments... Lee stresses the difference between parameter arguments and stream arguments in Ptolemy: parameters are evaluated during an initialisation phase; streams are evaluated during the main execution phase. As a result, the separation between parameters and streams—and between compile-time and run-time values—is both clear and compulsory. Thus, code generation can take place with the parameters known, but with the stream data unknown.

What happens if a so-called “compile-time” parameter value changes at run-time? The value of a PtalonActor parameter may be an actual token that has a type corresponding to one in the Ptolemy II token type lattice, or it may be a reference to a model parameter. For the latter option, a change in the value of the referenced model parameter results in a change to the actual value of the PtalonActor parameter. If the value of a PtalonActor parameter changes, it may cause the internal configuration of the PtalonActor to change, which necessitates a reconfiguration of the PtalonActor. The Ptalon compiler implementation in Ptolemy II uses two steps to handle any change to the value of a PtalonActor parameter. First, the compiler deletes the internal representation of all entities and relations in the PtalonActor, while preserving existing ports. Second, the Ptalon compiler restarts itself in its initial phase (as described in the previous section). The Ptalon compiler proceeds through the population phase, using the newly assigned value of the parameter, as well as existing values for any other parameters, and reuses existing ports whenever possible during the populator phase.

Figure 5.2: PtalonActor in Ptolemy II.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE entity PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<entity name="MultipleNodesMoML" class="ptolemy.actor.TypedCompositeActor">
  ...
  <entity name="MultipleNodesMoML" class="ptolemy.actor.ptalon.PtalonActor">
    <configure>
      <ptalon file="ptolemy.actor.ptalon.demo.MultipleNodes.MultipleNodesMoML">
        <ptalonExpressionParameter name="n" value="3"/>
      </ptalon>
    </configure>
  </entity>
</entity>

Figure 5.3: MultipleNodesMoML.xml

Neuendorffer [74] enumerated the ways in which reconfiguration of model parameters may occur in Ptolemy II, which I summarize and extend here:

• Interactive editing. A user may change parameters in Ptolemy II through interactive editing of the model, usually via a dialog box associated with the model, parameter, or actor of interest.

• Modal model. A modal model is an extended version of a finite state machine, in which each state of the finite state machine contains a dataflow model, or refinement, that is active in that particular state. Essentially, the active dataflow model replaces the finite state machine until the state machine makes a state transition. Finite state machine transitions can reconfigure parameters of the target state's refinement when the transition is taken. The Ptolemy II user manual [16] contains more details on constructing modal models.

• Reconfiguration port. Also known as a PortParameter, a reconfiguration port is a special form of dataflow input port. Ptolemy II binds each reconfiguration port to a parameter of the port's actor, and tokens received through the port reconfigure the parameter.

• Reconfiguration actor. The SetVariable actor is a special actor that has a single input port. Ptolemy II associates this actor with a parameter of the containing model. The actor consumes a single token during each firing and reconfigures the associated parameter during the quiescent point after the firing.

Another way reconfiguration of model parameters may occur in Ptolemy II is through the use of higher-order actors (I do not include PtalonActor as part of this discussion):

A developer can use these actors to define an actor whose firing behavior is given by a complete execution of another model.

• ModelReference and VisualModelReference. The ModelReference and VisualModelReference actors are both atomic actors that can execute a model specified by a file or URL (Uniform Resource Locator). If the actor has input ports, then on each firing, the actor reads an input token from the input port, and uses it to set the value of a top-level parameter in the referenced model that has the same name as the port, if there is one, before executing the referenced model. The developer can add these ports to an instance of this actor.

• RunCompositeActor. This actor is almost the same as ModelReference and VisualModelReference, only it is a composite actor instead of an atomic actor. On each firing, the actor executes the contained model completely, as if it were a top-level model. The actor also uses tokens at an input port to set the value of a top-level parameter with the same name in the contained model, if there is one.

• ModelDisplay. This actor opens a window to display the specified model. The model developer can provide inputs that are MoML strings that the actor applies to the specified model. The developer can use this, for example, to create animations by changing parameter values.

5.4 Specifying WSN Applications Programmatically

In this section, I present methods for specifying wireless sensor network applications programmatically by combining in various ways higher-order actors in Ptolemy II with an improved version of VisualSense/Viptos, and I explain when a particular method might be most applicable.

5.4.1 Motivation

“On the Credibility of Manet Simulations” [4], an article by Andel and Yasinsac, summarizes various articles that question the credibility of published simulation results in the mobile ad hoc network (MANET) research community. Problems cited include lack of independent repeatability, lack of statistical validity, use of inappropriate radio models, unrealistic application traffic, improper precision, improper/nonexistent validation, and lack of sensitivity analysis. Andel and Yasinsac's proposed solution to the first problem, lack of independent repeatability, is to properly document all settings. Since publication venues have limited space, they suggest

including only major settings and/or providing all settings as external references to research web pages, which should include freely available code/models and applicable data sets.

Ptolemy II is well-suited to address this problem, as well as many of the other problems cited. It is an open-source tool whose source code is freely distributable and modifiable. Ptolemy II models are simple XML files that are easy to publish on the web; the settings used, etc., and the Ptolemy II version number with which they are built are automatically stored in the XML file. The models described in the following sections show that with the techniques introduced in this dissertation, wireless sensor network simulations are easily repeatable. I also discuss how these techniques can address the other problems cited.

5.4.2 Small World

The SmallWorld example shown in Figure 5.4 illustrates a phenomenon where ad hoc networks achieve connectivity with fewer hops on average with a network that is less reliable but where ranges are longer, than with a network that is more reliable but ranges are shorter. Franceschetti and Meester showed that on average, fewer hops are needed when the range increases [31]. Figure 5.4 shows the SmallWorld model as originally implemented in VisualSense. When the user runs the model, an Initiator component, shown in Figure 5.4(b), broadcasts a message. Each node in the sensor network rebroadcasts the first message it receives. A node turns red if it receives the message in one hop; it turns green if it receives it in more than one hop. It stays white if it never receives the message. The model plots a histogram of the number of nodes that receive the message after one hop, after two hops, etc. All nodes (not including the Initiator) have the same implementation, as shown in Figure 5.4(d). If the user increases the range above sureRange, then the probability of delivery drops according to the formula shown, which keeps the expected number of recipients roughly constant. The NodeRandomizer actor randomizes the locations of the nodes at the beginning of each run.
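The text does not reproduce the sureRange formula itself, so the following is only a plausible reading of the constant-expectation property, not the exact expression from the model. Since the number of nodes within radio range grows roughly with the square of the range, a delivery probability of the form

    p(range) = min(1, (sureRange / range)^2)    for range >= sureRange

keeps the expected number of recipients, which is proportional to p(range) * range^2, roughly constant as the range increases. The exact formula used by the demonstration is the one displayed in the model of Figure 5.4.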

5.4.3 Parameter Sweep

I now introduce two different models, both of which perform the same set of experiments, where a slightly modified version of the SmallWorld model (shown in Figure 5.5) is run as a submodel with the same sets of changing parameter values. There are only a few differences between this modified version and the original version: (1) the modified version stores the histogram data in a file whose name is specified by a new parameter; (2) the modified version has an additional parameter created to allow node location randomization to be controlled externally; and (3) the modified version uses non-zero random seeds so that each run is repeatable. I will call this version of the SmallWorld model the ParameterSweep version. The initial location of the nodes (not including the Initiator) is not significant. This model allows application developers to create simulation scenarios that are independently repeatable, and to validate their algorithms by quickly creating new simulation scenarios via a few simple parameter value changes.

Modal model

Figure 5.6(a) shows a modal model in which the main state (named state and highlighted in green) contains a refinement (Figure 5.6(b)). This refinement is an SDF (synchronous dataflow) model containing a VisualModelReference actor with three different ports, one for each of the parameters to be changed (range, resetOnEachRun, and fileName) in the ParameterSweep version of SmallWorld (Figure 5.5). The transitions in the modal model are used to change the counters i and j, and to set the parameter values for each run. The modal model sweeps over the parameter values such that runs i number of different random node layouts are simulated, and for each node layout, runs j number of different ranges are simulated, for the purposes of sensitivity analysis [83]. That is, the modal model simulates the ParameterSweep version of the SmallWorld model with runs i different node layouts and runs j different ranges. For each run, the model creates an output file with the stored histogram data.

Dataflow

Figure 5.7 shows an SDF model which accomplishes the same objectives as the modal model in Figure 5.6. The SDF model uses dataflow actors that send the simulation parameters directly to a VisualModelReference actor with the same ports as those in the modal model. Just as in the modal model, the VisualModelReference in the SDF model references the ParameterSweep version of SmallWorld (Figure 5.5).

Both top-level models store the settings used as part of the model itself, and no additional configuration files are needed. Notice that for this particular application, the values of the parameters are more readily apparent in the SDF model than in the modal model. For simulations where the parameter values are known a priori, a dataflow language provides a more intuitive interface for specifying these settings. However, if the user wants to create simulation scenarios with dynamically derived parameter values, a modal model might be a more appropriate choice.
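The sweep that both the modal model and the SDF model perform corresponds to a simple nested loop. The following Java sketch shows the control structure only; the counts, the range schedule, and runModel() are hypothetical stand-ins for the VisualModelReference mechanism.

public class SweepDriver {
    public static void main(String[] args) {
        int runsI = 5;   // number of random node layouts (hypothetical)
        int runsJ = 10;  // number of ranges per layout (hypothetical)
        for (int i = 0; i < runsI; i++) {
            for (int j = 0; j < runsJ; j++) {
                double range = 40.0 + 10.0 * j;            // hypothetical schedule
                boolean resetOnEachRun = (j == 0);         // re-randomize per layout
                String fileName = "histogram_" + i + "_" + j + ".txt";
                runModel(range, resetOnEachRun, fileName); // one SmallWorld run
            }
        }
    }

    private static void runModel(double range, boolean reset, String file) {
        // Placeholder: in the actual models, a VisualModelReference firing
        // sets the top-level parameters and executes the submodel.
    }
}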

Figure 5.4: Small World in Ptolemy II.

Figure 5.5: ParameterSweep version of Small World in Ptolemy II.

Figure 5.6: Modal model for changing parameter values of Small World model in Ptolemy II.

For example, the user can feed output from the SmallWorld model back into the modal model, which can then automatically select new parameter settings on the basis of noise level or network connectivity. In other words, the application developer can choose the most appropriate domain-specific language to specify the metaprogram.

5.4.4 Higher-order actors

Since most of the nodes in the SmallWorld application have the same implementation, one might also consider using a higher-order actor to specify the nodes. This section considers two different methods, the first using a MultiInstanceComposite actor, and the second using a PtalonActor.

MultiInstanceComposite

In his dissertation [74], Neuendorffer introduces higher-order components (actors) (emphasis mine):

In many cases it is useful to build parameterized structures in actor-oriented models. Such programmatically generated structures are called higher-order components to emphasize their similarity to higher-order functions in functional languages. A parameter which is used to determine the structure of a higher-order component is a structural parameter. The MultiInstanceComposite actor in Ptolemy II is one example of a simple higher-order component. Just before a model is executed, this actor replicates itself a number of times determined by a structural parameter.

This actor is often used in situations where a model contains repetitive structures that are awkward to build by hand, or when the number of repetitions is specified by a parameter. Figure 5.8 shows the SmallWorld application in Ptolemy II, where a MultiInstanceComposite creates all of the nodes, each of which has an implementation identical to that in Figure 5.4(d).

A similar feature also existed in Ptolemy Classic. As described by Lee and Parks [62], Ptolemy Classic took advantage of higher-order functions by allowing a user to specify the number of instances of an actor by modifying the parameters of a bus icon (a line connecting the boxes representing the actors). That is, a user could graphically specify the number of instances of an actor in Ptolemy Classic, either by implication (by graphically specifying the number of instances of upstream actors), or directly (by graphically instantiating the desired number). Ptolemy Classic also allowed the user to visually represent the replacement function in a way that is conceptually similar to using a box inside of the icon for a higher-order function.

Figure 5.7: SDF model for changing parameter values of Small World model in Ptolemy II.

Figure 5.8: ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.

The model shown in Figure 5.8 has the same behavior as that in Figure 5.5. MultiInstanceComposite generates the nodes in its container, which means that the location parameters of the generated nodes are easily accessible and remain in reference to the Initiator actor in the container. No other changes to the model are required.

Ptalon

Ptalon is a natural fit for specifying model parameters programmatically, since it can specify the structure of the model itself, not just values of actor parameters. One can use Ptalon to generate the SmallWorld application shown in Figure 5.5. Figure 5.9 shows the required Ptalon code. The first section of code declares all of the actor types needed in the model. The second section declares four parameters: channelName, reportChannelName, range, and n. The third section declares

an output port named output, through which the actor transmits the data to be recorded. The parameter range specifies the radio range of the nodes. The parameter n specifies the number of nodes to create. The remainder of the file instantiates the components: wireless channels, NodeRandomizer, and WirelessToWired converter. In VisualSense and Viptos, wireless ports are parameterized by the name of the wireless channel on which they receive or transmit; the Ptalon file shown in Figure 5.9 uses the parameters channelName and reportChannelName to specify concrete names for the channels. I explicitly declare these variables because they are useful for visualization, e.g., to verify visually that the ranges are correct.

Figure 5.10(a) shows a Ptolemy II model containing a PtalonActor named SmallWorld that refers to the Ptalon code in Figure 5.9. Figure 5.10(b) shows the values of the PtalonActor parameters. The PtalonActor also contains an output port, as shown in Figure 5.10(c). Note that this model is similar to the model shown in Figure 5.5, except that Ptalon generates the nodes. Note that all of the parameters, except the number of nodes, refer to model parameters with the same name.

5.4.5 Discussion

A user can control both the MultiInstanceComposite (Figure 5.8) and PtalonActor (Figure 5.10) versions of SmallWorld with either the modal model or the SDF model discussed previously. One advantage of using higher-order actors such as MultiInstanceComposite and PtalonActor is that they enable run-time reconfiguration (e.g., the number of nodes in the model can be controlled programmatically). However, parameters in Ptolemy II use a form of lazy evaluation (changes to parameter values may not be propagated until they are used at run time). So, before running the model, the user must create a Ptalon parameter as a mirror of any Ptolemy parameters that should be evaluated before run time. Also note that the resetOnEachRun parameter in Figure 5.9 is not explicitly declared as a Ptalon parameter. The Ptalon model will still run correctly, even if the range parameter is not declared as a Ptalon parameter.

Another advantage of higher-order actors is that they require fewer bytes to express the model. Table 5.1 shows a comparison of the three different ways presented for implementing the SmallWorld application. For all files, I removed all extra white space (tabs, spaces, and extra linefeeds), in addition to annotations and comments that were not constant across all models.

SmallWorld is {
  /* Actor types */
  actor node = ptolemy.domains.wireless.demo.SmallWorld.RelayNode;
  actor channel = ptolemy.domains.wireless.lib.LimitedRangeChannel;
  actor nodeRandomizer = ptolemy.domains.wireless.lib.NodeRandomizer;
  actor initiator = ptolemy.actor.ptalon.demo.SmallWorld.Initiator;
  actor wirelessToWired = ptolemy.domains.wireless.lib.WirelessToWired;
  /* Ptalon parameters */
  parameter channelName;
  parameter reportChannelName;
  parameter range;
  parameter n;
  /* Port declaration */
  outport output;
  /* Instantiation of components */
  channel(
    defaultProperties := [[ {range=range} ]],
    seed := [[ 1L ]],
    name := [[ channelName ]] );
  channel(
    seed := [[ 1L ]],
    lossProbability := [[ 1.0 - probability ]],
    name := [[ reportChannelName ]] );
  nodeRandomizer(
    maxPrecision := [[ 3 ]],
    randomizeInInitialize := [[ true ]],
    range := [[ {{100.0, 500.0}, {200.0, 400.0}} ]],
    seed := [[ 1L ]] );
  initiator(
    _location := [[ [230.0, 345.0] ]] );
  wirelessToWired(
    inputChannelName := [[ reportChannelName ]],
    payload := output,
    _location := [[ [10.0, 10.0] ]] );
  for i initially [[ 1 ]] [[ i <= n ]] {
    node(
      nodePropagationDelay := [[ nodePropagationDelay ]],
      range := [[ range ]],
      randomize := [[ randomize ]],
      resetOnEachRun := [[ resetOnEachRun ]],
      haloColor := [[ {0.0, 0.0, 1.0, probability*visualDensity} ]],
      _location := [[ [0.0 * i, 0.0 * i] ]] );
  } next [[ i + 1 ]]
}

Figure 5.9: Ptalon code for SmallWorld (SmallWorld.ptln).

Figure 5.10: Ptalon version of Small World in Ptolemy II.

The first column is the ParameterSweep version of SmallWorld as shown in Figure 5.5. The second column is the MultiInstanceComposite implementation, as shown in Figure 5.8. The third column is the Ptalon implementation, as shown in Figure 5.10.

...
<entity name="SmallWorld" class="ptolemy.actor.ptalon.PtalonActor">
  <property name="_location" class="ptolemy.kernel.util.Location" value="[240.0, 210.0]">
  </property>
  <configure>
    <ptalon file="ptolemy.actor.ptalon.demo.SmallWorld.SmallWorld">
      <ptalonExpressionParameter name="n" value="49"/>
      <ptalonExpressionParameter name="channelName" value="channelName"/>
      <ptalonExpressionParameter name="reportChannelName" value="reportChannelName"/>
      <ptalonExpressionParameter name="range" value="range"/>
    </ptalon>
  </configure>
</entity>
...

Figure 5.11: Excerpt of MoML code for Ptalon version of Small World.

Note that in all of the non-Ptalon versions of the SmallWorld application (Figures 5.4, 5.5, and 5.8), the code for the Initiator actor is stored in the model itself. For the Ptalon model, however, the MoML code for the Initiator actor must be stored externally so that the actor can be referenced in the Ptalon file. Also note that the code for the RelayNode actor in the ParameterSweep version is stored externally, whereas the code for the RelayNode actor in the MultiInstanceComposite version must be stored in the MultiInstanceComposite itself. Even though the RelayNode code is stored external to the ParameterSweep version, the parameter values for each node must still be stored internally. The difference in the number of bytes between the ParameterSweep and MultiInstanceComposite versions would be even greater if there were more nodes, since the ParameterSweep version requires 705 bytes for each additional node to store the parameter values and the instance declaration. Increasing the number of nodes in the implementations in the MultiInstanceComposite and PtalonActor versions requires no extra bytes (except if the number of digits in the number of nodes exceeds two, in which case there is an extra byte for each digit).

The main difference between using MultiInstanceComposite and PtalonActor for this particular application is that one cannot visualize the generated components using the MultiInstanceComposite actor. Additionally, the MultiInstanceComposite actor must be opaque, i.e., have a director, so that its Actor interface methods (preinitialize(), ..., wrapup()) are invoked during model

initialization. Ptalon makes no such constraints on the PtalonActor component. With MultiInstanceComposite, a user would need to create a new instance of the actor for each type of duplicated node in the network. Ptalon also allows the user to specify heterogeneous networks more easily. For example, in order to test the behavior of a routing algorithm under different channel assumptions, the application developer can modify the Ptalon code to cycle through a number of different types of radio channel models. This is not possible with MultiInstanceComposite alone (one would need to use a Case actor or other similar actor to achieve the same results). In general, Ptalon has the advantage over the other methods in that it is easier to express model structure.

Table 5.1: Comparison of number of bytes between different implementations of SmallWorld.

  ParameterSweep:               SmallWorld.xml    55320
                                RelayNode.xml     28228
                                Total             83548
  with MultiInstanceComposite:  SmallWorld.xml    48212
                                Total             48212
  with Ptalon:                  SmallWorld.xml    16882
                                RelayNode.xml     28314
                                Initiator.xml      5292
                                SmallWorld.ptln    1151
                                Total             51639

5.5 Summary

In this chapter, I demonstrated how higher-order components provide a powerful way to build wireless sensor network applications. In Leung's survey of the 70 full-length papers from prominent wireless sensor networking conferences, IPSN/SPOTS 2007¹ and SenSys 2006², over one hundred different simulation parameters were used, with very few repeated counts [63]. These results show that parameter choices are largely application-dependent, and that there are few standard benchmarks. In general, flexibility in specifying simulation parameters is extremely important. Combined with generative programming and metaprogramming techniques, sensor network developers can easily specify experimental simulation setups programmatically using a variety of techniques, including modal models, dataflow, and higher-order actors. Developers can choose the method that best fits the particular application. They can then refine these simulations to real-world implementations using a technology such as Viptos (presented in Chapter 4).

¹ International Conference on Information Processing in Sensor Networks and Track on Sensor Platforms, Tools and Design Methods.
² Conference on Embedded Networked Sensor Systems.




Chapter 6

Related Work
This chapter details information on work related to TinyGALS and galsC, as well as work related to Viptos and the metaprogramming techniques for wireless sensor networks discussed in earlier chapters.

6.1 TinyGALS and galsC
This section summarizes the features of several related operating systems and software architectures, and discusses how they relate to TinyGALS and galsC. Herlihy's method for building non-blocking operations, as well as the message passing interface (MPI), offer concurrency and communication alternatives to those used in TinyGALS. The SVAR (state variable) mechanism of PBOs (port-based objects) and FPBOs (featherweight port-based objects) influenced the design of TinyGUYS. The Click Modular Router project has interesting parallels to the TinyGALS model of computation, as do Ptolemy II, the CI (component interaction) domain, and the TM (Timed Multitasking) domain.

6.1.1 Non-blocking

Herlihy proposes a methodology in [45] for constructing non-blocking and wait-free implementations of concurrent objects. Programmers implement data objects as stylized sequential programs, with no explicit synchronization. Each sequential operation is automatically transformed into a non-blocking or wait-free operation via a collection of synchronization and memory management techniques. However, operations may not have any side-effects other than modifying the memory block occupied by the object. Unlike TinyGALS, this technique does not address the need

for inter-object communication when composing components. Additionally, this methodology requires additional copying of memory, which may become expensive for large objects.
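The flavor of such non-blocking updates can be conveyed with a short Java sketch. This is not Herlihy's actual construction (which includes general synchronization and memory management for arbitrary objects); it shows only the core copy-and-compare-and-swap retry loop.

import java.util.concurrent.atomic.AtomicReference;

public class NonBlockingCounter {
    private final AtomicReference<Integer> value = new AtomicReference<>(0);

    public void increment() {
        while (true) {
            Integer current = value.get();
            Integer updated = current + 1; // operate on a private copy
            if (value.compareAndSet(current, updated)) {
                return; // installed atomically; no thread ever held a lock
            }
            // Another thread won the race; retry against the new value.
        }
    }
}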

6.1.2 MPI

MPI (Message Passing Interface) is the de facto standard library interface for writing message passing programs on high-performance parallel computing platforms [104]. MPI provides virtual topology, synchronization, and communication functionality between a set of processes that have been mapped to processing nodes. Interface functions include point-to-point, rendezvous-type send/receive operations (including synchronous, asynchronous, buffered, and ready forms); choosing between a Cartesian or graph-like logical process topology; exchanging data between process pairs (send/receive operations); combining partial results of computations (gather and reduce operations); synchronizing nodes (barrier operation); as well as obtaining network-related information such as the number of processes in the computing session, identity of the current processor to which a process is mapped, and neighboring processes accessible in a logical topology. MPI was originally targeted for distributed memory systems, though implementations for shared memory systems have appeared as these platforms have become more popular.

In MPI, all parallelism is explicit; the programmer is responsible for correctly identifying parallelism and implementing parallel algorithms using MPI constructs. The number of tasks dedicated to run a parallel program is static. New tasks cannot be dynamically spawned during run time, though the new MPI-2 standard addresses this issue. The advantages of MPI over older message passing libraries are portability (because MPI has been implemented for almost every distributed memory architecture) and speed (because each implementation is in principle optimized for the hardware on which it runs). Hempel and Walker [43] summarize MPI and its alternatives:

The main function of MPI is to communicate data from one process to another. Other mechanisms, such as TCP/IP and CORBA, do essentially the same thing. MPI provides a level of abstraction appropriate for communication of data in scientific computing, whereas TCP/IP is geared to low-level network transport, and CORBA to client-server interactions... The idea of communicating sequential processes as a model for parallel execution was developed by C.A.R. Hoare in the 1970s, and is the basis of the message passing paradigm. This paradigm assumes a distributed process memory model, i.e., each process has its own local address space. Processes co-operate to perform a task by independently computing with their local data and communicating data with other processes by explicitly exchanging messages. Technically, this message passing is normally realized by calls to library functions, for example, to send or receive a message, or broadcast some data to a whole group of processes. Message passing provides the most explicit way of programming a parallel computer with physically distributed memory, and is well-suited to this type of machine since there is a good match between the distributed memory model and the distributed hardware... The Message-Passing Interface follows in the footsteps of the Unix threads library: both extend a sequential programming language with subroutines for parallel execution and data communication.

[Although regarded as competing standards], MPI and PVM [Parallel Virtual Machine] were designed for different uses. PVM was originally intended for use on networks of workstations (NOWs) and addresses issues such as heterogeneity, interoperability, fault tolerance, and resource management—its message passing capabilities are not very sophisticated. The design of MPI focused on message passing capabilities, and it is intended to attain high performance on tightly-coupled, homogeneous parallel architectures.

MPI has its detractors, however, such as Per Brinch Hansen, inventor of Concurrent Pascal, the first concurrent programming language. In his evaluation of MPI [41], he states:

The MPI routines for synchronous message passing work as expected. However, asynchronous communication is dangerously insecure. It is possible to call a user procedure that inputs a message in a local variable and returns before the input has been completed. This time-dependent error may change a variable, which (conceptually) no longer exists, and therefore may be reused by unrelated procedure calls! Twenty years ago, Concurrent Pascal proved that nontrivial parallel programs can be written exclusively in a secure programming language. Personally, I regard the attempt to replace a parallel programming language and its compiler with insecure procedures as a step backwards in programming technology.

MPI has had considerable impact on the development of middleware and other tools for wireless sensor networks. OMNeT++ [84, 98], discussed later in Section 6.2, supports parallel distributed simulation using one of various communication mechanisms, including MPI, named pipes, or the file system. There are some constraints, however: (1) modules can only communicate by sending messages (no direct method call or member access) unless they are mapped to the same processor; (2) no global variables are allowed; (3) a module may not send directly to a submodule of another module, unless the modules are mapped to the same processor; (4) lookahead must be present in the form of link delays; and (5) currently only static topologies are supported.
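The message passing paradigm itself is easy to demonstrate. The following Java sketch imitates an MPI-style blocking send/receive between two "processes" with disjoint state; it is an analogy built on plain Java threads, not an MPI binding.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PingPong {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<int[]> channel = new ArrayBlockingQueue<>(1);

        Thread sender = new Thread(() -> {
            int[] localData = { 1, 2, 3 };      // private address space
            try {
                channel.put(localData.clone()); // "send": a copy crosses the channel
            } catch (InterruptedException ignored) { }
        });

        Thread receiver = new Thread(() -> {
            try {
                int[] received = channel.take(); // "receive": blocks until data arrives
                System.out.println("sum = " + (received[0] + received[1] + received[2]));
            } catch (InterruptedException ignored) { }
        });

        sender.start();
        receiver.start();
        sender.join();
        receiver.join();
    }
}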

Welsh and Mainland take inspiration from MPI in their approach to abstract regions [101], a family of spatial operators that capture local communication within regions of a wireless sensor network. They state that

[MPI] provides a unified interface for message passing across a large family of parallel machines... MPI hides the details of the communication hardware and provides efficient implementations of common collective operations, such as broadcast and reduction. We wish to provide communication interfaces that serve a similar role for sensor networks.

It is more comprehensive than MPI, yet low-level enough to permit extensive application-specific optimizations. All operations take place on regions, which users can create with specific primitives, and which may be defined in terms of radio connectivity, geographic location, or other node properties. Barrier synchronization is also supported for the sensor nodes that lie within a region.

Bakshi and Prasanna [6] have a similar goal in their library of structured communication primitives. In their system, “structured communication” refers to a routing problem where the communication pattern is known in advance, with example patterns including one-to-all (broadcast), all-to-one (data gather), all-to-all, many-to-many, and permutation.

UW-API (University of Wisconsin-Madison's Application Programmer's Interface) [5, 81] for sensor network communication is motivated by MPI. Some of the UW-API primitives are to be invoked by a single sensor node; others are for collective communication, to be invoked simultaneously by a group of nodes in a geographic region.

The Open Source Cluster Application Resources (OSCAR) package [29, 66] is an integrated software bundle designed for high performance cluster computing. OSCAR provides the standard Message Passing Interface (MPI) for communication between the parallel computing processes. It has been used in sensor network applications to parallelize data fusion processes, where the sensor network sends its data to the computing cluster through a gateway node.

The actor model used by TinyGALS/galsC, Viptos, and Ptolemy II uses message passing. However, it differs from these approaches in that the actor model specifies scheduling and execution semantics, in addition to communication primitives.

6.1.3 Port-Based Objects

The port-based object (PBO) [92] is a software abstraction for designing and implementing dynamically reconfigurable real-time software. The software framework was developed for the Chimera multiprocessor real-time operating system (RTOS).

A PBO is an independent concurrent process. PBOs may execute either periodically or aperiodically. A PBO communicates with other PBOs only through its input ports and output ports. PBOs may also have resource ports that connect to sensors and actuators via I/O device drivers. Configuration constants are used to reconfigure generic components for use with specific hardware or applications.

PBOs communicate with each other via state variables stored in global and local tables. Every input and output port and configuration constant is defined as a state variable (SVAR) in the global table, which is stored in shared memory. A PBO can only access its local table, which contains only the subset of data from the global table that is needed by the PBO. Consistency between the global and local tables is maintained by the SVAR mechanism. The system updates the state variables corresponding to input ports prior to the execution of each cycle of a periodic PBO, or before the processing of each event for an aperiodic PBO. During its cycle, a PBO may update the state variables corresponding to the PBO's output ports at any time. The system updates these values in the global table only after the PBO completes its processing for that cycle or event. The system updates configuration constants only during initialization of the PBO.

The Chimera PBO implementation uses data replication to maintain data integrity and avoid race conditions, and it assumes that the amount of data communicated via the ports on each cycle of a PBO is relatively small. Since every PBO has its own local table, no explicit synchronization is needed to read from or write to a state variable. Although there is no explicit synchronization or communication among processes, data integrity is maintained: the system performs all transfers between the local and global tables as critical sections, and updates to the tables only occur at predetermined times. The system uses spin-locks to lock the global table; multiple accesses to the same SVAR in the global table are mutually exclusive. A task busy-waits with the local processor locked until it obtains the lock and goes through its critical section. It is guaranteed that the task holding the global lock is on a different processor and will not be preempted, thus it will release the lock shortly. Since there is only one lock, there is no possibility of deadlock. If the total time that a CPU is locked to transfer a state variable is small compared to the resolution of the system clock, then there is negligible effect on the predictability of the system due to this mechanism locking the local CPU.

Echidna [9] is a related real-time operating system designed for smaller, single-processor, embedded microcontrollers. The design is based on the featherweight port-based object (FPBO) [91]. The application programmer interface (API) for the FPBO is identical to that of the PBO. PBOs are separate processes, whereas FPBOs all share the same context. The Echidna FPBO implementation takes advantage of context sharing to eliminate the need for local tables, which is especially important since memory in embedded processors is a limited resource. However, context sharing creates potential implicit blocking. Access to global data must still be performed as a critical section to maintain data integrity. However, instead of using semaphores, Echidna constrains when preemption can occur.

To summarize, in both the PBO and FPBO models, software components only communicate with other components via SVARs, which are similar to global variables. Updates to an SVAR are made atomically, and the components always read the latest value of the SVAR. The SVAR concept is the motivation behind the TinyGALS strategy of always reading the latest value of a TinyGUYS parameter. However, in TinyGALS, updates to TinyGUYS are buffered until a module has completed execution, since components within a module may be tightly coupled in terms of data dependency. This is more closely related to the local tables in the Chimera PBO implementation than the global tables in the Echidna FPBO implementation. Additionally, there is no possibility of blocking when using the TinyGUYS mechanism.
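The SVAR discipline of copying between global and local tables at fixed points can be sketched compactly. The following Java code is an analogy only (Chimera and Echidna are C-based systems); it shows the read-once-at-cycle-start, publish-once-at-cycle-end pattern that also motivates TinyGUYS.

public class StateVar<T> {
    private T globalValue;                    // the "global table" entry
    private final Object lock = new Object(); // stand-in for the spin-lock

    public StateVar(T initial) { globalValue = initial; }

    /** Copy global to local before a task's cycle begins. */
    public T refreshLocal() {
        synchronized (lock) { return globalValue; }
    }

    /** Copy local to global after the task finishes its cycle. */
    public void publish(T localValue) {
        synchronized (lock) { globalValue = localValue; }
    }
}

A task calls refreshLocal() once before its cycle, computes using only the local copy, and calls publish() afterward, so table updates occur only at predetermined times, as in the PBO model.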

1. as well as the element’s initialization procedure and data layout.4 Click Click [54. A Click router configuration consists of a directed graph. and agnostic ports with a double outline. Each element supports one or more method interfaces. At initialization time. 55] is a flexible. which specifies the code that should be executed when the element processes a packet. An element can have any number of input and output ports. through which they communicate at runtime. modular software architecture for creating routers. However. . push ports are drawn in black. but elements can create and export arbitrary additional interfaces. since components within a module may be tightly coupled in terms of data dependency. Each element belongs to a single element class. and the components always read the latest value of the SVAR. in TinyGALS. each use of a compound element is compiled into the corresponding collection of simple elements. This section provides a detailed description of the constructs and processing in Click and compares it to TinyGALS. However. An element may also have an optional configuration string which contains additional arguments to pass to the element at router initialization time.110 to global data must still be performed as a critical section to maintain data integrity. Echidna constrains when preemption can occur. This is more closely related to the local tables in the Chimera PBO implementation than the global tables in the Echidna FPBO implementation. The Click configuration language allows users to define compound elements. and agnostic. pull. which are router configuration fragments that behave like element classes. Every element supports the simple packet-transfer interface. pull ports in white. An element is implemented as a C++ object that may maintain private state. However. To summarize. which are similar to global variables. 6. The SVAR concept is the motivation behind the TinyGALS strategy of always reading the latest value of a TinyGUYS parameter. There are three types of ports: push. instead of using semaphores. updates to TinyGUYS are buffered until a module has completed execution. Elements in Click A Click element is a software module which usually performs a simple computation as a step in packet processing. where the vertices are called elements and the edges are called connections. Updates to an SVAR are made atomically. In Click diagrams. in both the PBO and FPBO models. there is no possibility of blocking when using the TinyGUYS mechanism. software components only communicate with other components via SVARs.

Figure 6.1 shows an example Click element that belongs to the Tee element class, which sends a copy of each incoming packet to each output port. The element has one input port. The element is initialized with the configuration string "2", which in this case configures the element to have two output ports.

[Figure 6.1: An example Click element. Source: Eddie Kohler.]

Connections in Click  A Click connection represents a possible path for packet handoff and attaches the output port of an element to the input port of another element. A connection is implemented as a single virtual function call. A connection between two push ports is a push connection, where packet handoff along the connection is initiated by the source element (or source end, in the case of a chain of push connections). A connection between two pull ports is a pull connection, where packet handoff along the connection is initiated by the destination element (or destination end, in the case of a chain of pull connections). A connection between a push port and a pull port is illegal. An agnostic port behaves as a push port when connected to push ports and as a pull port when connected to pull ports, but each agnostic port must be used exclusively as either push or pull. In addition, if packets arriving on an agnostic input might be emitted immediately on an agnostic output, then both input and output must be used in the same way (either push or pull). When a Click router is initialized, the system propagates constraints until every agnostic port has been assigned to either push or pull. Every push output and every pull input must be connected exactly once. However, push inputs and pull outputs can be connected more than once.

There are no implicit queues on input and output ports, which means that they do not carry the associated performance and complexity costs. Queues in Click must be defined explicitly and appear as Queue elements. A Queue has a push input port (it responds to pushed packets by enqueuing them) and a pull output port (it responds to pull requests by dequeuing packets and returning them).

Another type of element is the Click packet scheduler. This is an element with multiple pull inputs and one pull output. The element reacts to requests for packets by choosing one of its inputs, pulling a packet from it, and returning the packet. If the chosen input has no packets ready, the scheduler usually tries other inputs.
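To make the element abstraction concrete, the following self-contained C++ sketch mimics simplified Tee and Queue elements. This is only an illustration of the pattern, not Click's actual class hierarchy: real Click elements subclass its Element base class and use its Packet, port, and configuration-string machinery.

#include <cstddef>
#include <deque>
#include <vector>

struct Packet {};  // payload omitted in this sketch

class Element {
public:
    virtual ~Element() = default;
    // Push input: an upstream element hands this element a packet.
    virtual void push(int port, Packet* p) { (void)port; (void)p; }
    // Pull output: a downstream element requests a packet.
    virtual Packet* pull(int port) { (void)port; return nullptr; }
    std::vector<Element*> out;  // downstream neighbor per output port
    std::vector<Element*> in;   // upstream neighbor per input port
};

// Tee(n): sends a copy of each incoming packet to each of its n outputs.
class Tee : public Element {
public:
    explicit Tee(std::size_t n) { out.resize(n); }  // n from "config string"
    void push(int, Packet* p) override {
        for (Element* e : out)
            if (e) e->push(0, new Packet(*p));  // hand a copy downstream
        delete p;
    }
};

// Queue: push input (enqueue) and pull output (dequeue); the only kind
// of element that stores packets.
class Queue : public Element {
public:
    explicit Queue(std::size_t capacity) : capacity_(capacity) {}
    void push(int, Packet* p) override {
        if (q_.size() < capacity_) q_.push_back(p);
        else delete p;  // Queue drop when full
    }
    Packet* pull(int) override {
        if (q_.empty()) return nullptr;
        Packet* p = q_.front();
        q_.pop_front();
        return p;
    }
private:
    std::size_t capacity_;
    std::deque<Packet*> q_;
};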

Both Queue elements and scheduling elements have a single pull output, so to an element downstream, the elements are indistinguishable. This leads to an ability to create virtual queues, which are compound elements that act like queues but implement behavior more complex than FIFO (first in, first out) queuing.

Click runtime system  Click runs as a kernel thread inside the Linux 2.2 kernel. The kernel thread runs the Click router driver, which loops over the task queue and runs each task using stride scheduling [99]. A task is an element that needs special access to CPU time. An element should place itself on the task queue if the element frequently initiates push or pull requests without receiving a corresponding request. Most elements are never placed on the task queue; they are implicitly scheduled when their push() or pull() methods are called. For example, device-handling elements such as FromDevice and ToDevice place themselves on Click's task queue. When activated, FromDevice polls the device's receive DMA (direct memory access) queue for newly arrived packets and pushes them through the configuration graph. The router continues to process each pushed packet, following it from element to element along a path in the router graph (a chain of push() calls, or a chain of pull() calls), until the packet is explicitly stored or dropped (and similarly for pull requests). ToDevice examines the device's transmit DMA queue for empty slots and pulls packets from its input. Since Click runs in a single thread, a call to push() or pull() must return to its caller before another task can begin. The placement of Queues in the configuration graph determines how CPU scheduling may be performed. Click is a pure polling system; the device never interrupts the processor.

Timers are another way of activating an element besides tasks. An element can have any number of active timers, where each timer calls an arbitrary method when it fires. Click timers are implemented using Linux timer queues.
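The driver loop and its stride scheduler can be sketched as follows. This is a hypothetical reimplementation of the idea behind stride scheduling [99], not Click's actual scheduler code: each task receives a stride inversely proportional to its ticket allocation, and the driver repeatedly runs the task with the smallest pass value.

#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

constexpr std::uint64_t STRIDE1 = 1 << 20;  // scaling constant

struct Task {
    std::function<void()> run;  // e.g., FromDevice polling its DMA ring
    std::uint64_t stride;       // STRIDE1 / tickets
    std::uint64_t pass = 0;     // virtual time of this task
};

Task make_task(std::function<void()> work, std::uint64_t tickets) {
    return Task{std::move(work), STRIDE1 / tickets, 0};
}

// The router driver: loop over the (non-empty) task queue, always running
// the task with the smallest pass; run() returns before the next begins.
void driver_loop(std::vector<Task>& tasks, int iterations) {
    for (int i = 0; i < iterations; ++i) {
        auto next = std::min_element(
            tasks.begin(), tasks.end(),
            [](const Task& a, const Task& b) { return a.pass < b.pass; });
        next->run();
        next->pass += next->stride;  // more tickets => smaller stride
    }
}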

[Figure 6.2: A simple Click configuration with sequence diagram. Source: Eddie Kohler.]

[Figure 6.3: Flowchart for Click configuration shown in Figure 6.2. Source: Eddie Kohler.]

Figure 6.2 shows a simple Click router configuration with a push chain (FromDevice and Null) and a pull chain (Null and ToDevice). The two chains are separated by a Queue element. The Null element simply passes a packet from its input port to its output port; it performs no processing on the packet. Figure 6.3 illustrates the basic execution sequence of Figure 6.2. When the task corresponding to FromDevice is activated, the element polls the receive DMA ring for a packet. FromDevice calls push() on its output port, which calls the push() method of Null. The push() method of Null calls push() on its output port, which calls the push() method of the Queue. The Queue element enqueues the packet if its queue is not full; otherwise it drops the packet. The calls to push() then return in the reverse order. Later, the task corresponding to ToDevice is activated. If there is an empty slot in its transmit DMA ring, ToDevice calls pull() on its input port, which calls the pull() method of Null. The pull() method of Null calls pull() on its input port, which calls the pull() method of the Queue. The Queue element dequeues the packet and returns it through the return of the pull() calls. Note that in the sequence diagram in Figure 6.2, time moves downwards. Data flow (in this case, the packet p) always moves forwards, while control flow moves forward during a push sequence and moves backward during a pull sequence.

Overhead in Click  Modularity in Click results in two main sources of overhead. The first source of overhead comes from passing packets between elements: each handoff leads to one or two virtual function calls, each of which involves loading the relevant function pointer from a virtual function table, as well as an indirect jump through that function pointer. This overhead is avoidable—the Click distribution contains a tool to eliminate all virtual function calls from a Click configuration. The second source of overhead comes from unnecessarily general element code. Kohler et al. found that element generality had a relatively small effect on Click's performance, since not many elements in a particular configuration offered much opportunity for specialization [55].

Comparison of Click to TinyGALS  An element in Click is comparable to a component in TinyGALS in the sense that both are objects with private state. Both types of objects (Click elements and TinyGALS components) communicate with other objects via method calls. Rules in Click on connecting elements together are similar to those for connecting components in TinyGALS: push outputs must be connected exactly once, but push inputs may be connected more than once (see Sections 3.2 and 3.3). In Click, there is no fundamental difference between push processing and pull processing at the method-call level.
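Reusing the Element, Packet, and Queue definitions sketched earlier, the configuration of Figure 6.2 can be imitated with a pass-through Null element and two hand-written "tasks". FromDevice and ToDevice are reduced here to plain function calls, so this shows only the method-call sequence, not real Click code.

#include <cstdio>

// Null: passes each packet from its input port to its output port.
class Null : public Element {
public:
    Null() { out.resize(1); in.resize(1); }
    void push(int, Packet* p) override { out[0]->push(0, p); }  // push chain
    Packet* pull(int) override { return in[0]->pull(0); }       // pull chain
};

int main() {
    Null n1, n2;
    Queue q(64);
    n1.out[0] = &q;  // push chain: (FromDevice ->) n1 -> q
    n2.in[0] = &q;   // pull chain: q -> n2 (-> ToDevice)

    // "FromDevice" task: push a received packet through the graph; each
    // push() returns to its caller after the Queue stores the packet.
    n1.push(0, new Packet);

    // "ToDevice" task: a pull() request travels upstream to the Queue,
    // and the packet returns through the chain of pull() calls.
    if (Packet* p = n2.pull(0)) {
        std::puts("packet ready to transmit");
        delete p;
    }
    return 0;
}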

Figure 6. Data (a packet) flows to the right until it reaches the Queue. TinyGALS. Visualizing this configuration as a TinyGALS model.4: Click vs. where control and data flow downstream in response to an upstream event. Data flow between components in an actor .115 Actor A C1 C2 Actor B C3 C4 Actor C C5 C6 Click router thread task invocation task invocation TinyGALS thread event scheduler invokes actor from event queue ??? Click push Control flow Data flow Click pull TinyGALS Figure 6. Pull processing can be thought of as demand-driven computation. In Click. Note that a compound element in Click does not form the boundary of control flow. where control flows upstream in order to compute data needed downstream.4 provides a more detailed analysis of the difference in control and data flow between Click and TinyGALS. Push processing can be thought of as event-driven computation (if one ignores the polling aspect of Click). control begins at element C1 and flows to the right and returns after it reaches the Queue. data flow within an actor is not represented explicitly. shows that a TinyGALS actor forms a boundary for control flow. differ only in name. if an element inside of a compound element calls a method on its output. control flows to the connected element (recall that a compound element is compiled to a chain of simple elements). However.4 shows a push processing chain of four elements connected to a queue. which is connected to a pull processing chain of two elements. In TinyGALS. In Click. the direction of control flow with respect to data flow in the two types of processing are opposite of each other. Figure 6. with elements C1 and C2 grouped into an actor A and elements C3 and C4 grouped into an actor B.

Data flow between actors, on the other hand, always has the same direction as the connection arrow direction, although TinyGUYS provides a possible hidden avenue for data flow between actors. In Figure 6.4, elements C5 and C6 are grouped into an actor C. From this global point of view, control flow in this new TinyGALS model is the same as in Click, where data flow has the same direction as the connection. However, if one reverses the arrow directions inside of actor C, elements C5 and C6 may have to be rewritten to reflect the fact that C6 is now a source object, rather than a sink object.

Aside from the polling/interrupt-driven difference, the execution model of Click is quite similar to the globally asynchronous, locally synchronous execution model of TinyGALS. In Click, execution is synchronous within each push (or pull) chain, but execution is asynchronous between chains, which are separated by a Queue element. Push processing in Click is equivalent to synchronous communication between components in a TinyGALS actor. Pull processing in Click, however, does not have a natural equivalent in TinyGALS. Also note that the Click Queue element is not equivalent to the queue on a TinyGALS actor input port. In Click, arrival of data in a queue does not cause downstream objects to be scheduled, as in TinyGALS. This highlights the fact that Click configurations cannot have two push chains (where the end elements are activated as tasks) separated by a Queue.

Unlike TinyGALS, elements in Click have no way of sharing global data. The only way of passing data between Click elements is to add annotations to a packet (information attached to the packet header, but which is not part of the packet data). Also unlike Click, the TinyGALS model does not contain a task queue.1 Additionally, TinyGALS does not contain timers associated with elements, although this can be emulated by linking a CLOCK component with an arbitrary component. Since Click is a pure polling system, it does not respond to events immediately, unlike TinyGALS, which is interrupt-driven and allows preemption to occur in order to process events. Unlike Click, a TinyGALS system goes to sleep when there are no external events to which to respond. Much of this is because Click's design is motivated by high-throughput routers, whereas TinyGALS is motivated by power- and resource-constrained hardware platforms.

1 Although, for backwards compatibility with TinyOS, the TinyGALS runtime system implementation supports TinyOS tasks, which are long-running computations placed in the task queue by a TinyOS component method. The scheduler runs tasks in the task queue only after processing all events in the event queue, and tasks can be preempted by hardware interrupts. See Section 3.5 for more information.

Pull processing in sensor networks  Although TinyGALS does not currently use pull processing, the following example by Jie Liu, given in Yang Zhao's paper [109], illustrates a situation in which pull processing is desirable for eliminating unnecessary computation. Figure 6.5 shows a sensor network application in which four nodes cooperate to detect intruders. Each node is only capable of detecting intruders within a limited range and has a limited battery life. Node A has more power and functionality than other nodes in the system. It is known that an intruder is most likely to come from the west, somewhat likely to come from the south, but very unlikely to come from the east or north. Communication with other nodes consumes more power than performing local computations, so nodes should send data only when necessary. Under these assumptions, node A may want to pull data from other nodes only when needed.

[Figure 6.5: A sensor network application.]

Figure 6.6 shows one possible configuration for this kind of pull processing. The center component is similar to the Click scheduler element. Node D (and others) may be free to perform other computations while node A performs most of the intrusion detection. This example also demonstrates a way to perform distributed multitasking. This could be an extension to the current single-node architecture of TinyGALS.

[Figure 6.6: Pull processing across multiple nodes.]
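The following C++ sketch suggests one way the scheduler-like center component on node A could behave. The interface here is entirely hypothetical (it is not from the dissertation or from any TinyGALS code); it only illustrates the demand-driven control flow, with pull requests tried in order of expected intrusion likelihood.

#include <functional>
#include <optional>
#include <utility>
#include <vector>

using Reading = double;                                    // a sensor sample
using PullPort = std::function<std::optional<Reading>()>;  // remote pull request

class PullScheduler {
public:
    // Ports ordered by expected usefulness, e.g. {west, south, east, north}.
    explicit PullScheduler(std::vector<PullPort> inputs)
        : inputs_(std::move(inputs)) {}

    // Like the Click scheduler element: try inputs until one has data.
    std::optional<Reading> pull() {
        for (auto& in : inputs_)
            if (auto r = in())
                return r;
        return std::nullopt;  // no neighbor had a detection ready
    }

private:
    std::vector<PullPort> inputs_;
};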

6.1.5 Click and Ptolemy II

The MESCAL project has created a tool called Teepee [71], which is based on Ptolemy II and implements the Click model of computation. The CI (component interaction) domain [16] in Ptolemy II models systems that contain both event-driven and demand-driven styles of computation. CI is motivated by the push/pull interaction between data producers and consumers in middleware services such as the CORBA event service. CI actors can be active (i.e., have their own thread of execution) or passive (triggered by an active actor). There is a natural correlation between the CI domain and Click. Therefore, CI and Click could be leveraged to implement TinyGALS in Ptolemy II, possibly using the ClassWrapper actor to model TinyGALS components.

6.1.6 Timed Multitasking

Timed multitasking (TM) [69] is an event-triggered programming model that takes a time-centric approach to real-time programming but controls timing properties through deadlines and events rather than time triggers. Software components in TM are called actors. Actors in a TM model declare their computing functionality and also specify their execution requirements in terms of trigger conditions, execution time, and deadlines. An actor represents a sequence of reactions, where a reaction is a finite piece of computation. Actors have state, which carries from one reaction to another. Actors can only communicate with other actors and the physical world through ports. Unlike method calls in object-oriented models, interaction with the ports of an actor may not directly transfer the flow of control to another actor.

The system activates an actor when its trigger condition is satisfied. A trigger condition can be built using real-time physical events, communication packets, and/or messages from other actors. Triggers must be responsible, which means that once triggered, an actor should not need any additional data to complete its finite computation. If there are enough resources at run time, then the system grants the actor at least the declared execution time before it reaches its deadline. The system makes the results of the execution available to other actors and the physical world only at the deadline time. In cases where an actor cannot finish by its deadline, the TM model includes an overrun handler to preserve the timing determinism of all other actors and allow an actor that violates the deadline to come to a quiescent state.

The communication among the actors has event semantics, unlike state semantics: in a TM model, every piece of data is produced and consumed exactly once, the sender of a communication is never blocked on writing, and actors are never blocked on reading. The event semantics can be implemented by FIFO queues. TM is implemented in Ptolemy II, and Liu and Lee [69] describe a method for generating the interfaces and interactions among TM actors into an imperative language like C. In this generated code, events on a connection between two actors are represented by a global data structure, which contains the communicating data, a mutual-exclusion lock to guard the access to the variable if necessary, and a flag indicating whether the event has been consumed.
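A minimal C++ sketch of such a per-connection event record is given below. The field and method names are hypothetical (the code generated by the TM implementation differs); it only illustrates how the lock and the consumed flag let every piece of data be produced and consumed exactly once without blocking the reader.

#include <mutex>

template <typename T>
struct TMEvent {
    T data{};               // the communicated value
    std::mutex lock;        // guards access to the record if needed
    bool consumed = true;   // true until a producer writes fresh data

    // Called when the producing actor's results are released at its
    // deadline; the dispatcher may then trigger the downstream task.
    void produce(const T& value) {
        std::lock_guard<std::mutex> guard(lock);
        data = value;
        consumed = false;
    }

    // Called by the triggered task; each event is consumed exactly once.
    bool consume(T* out) {
        std::lock_guard<std::mutex> guard(lock);
        if (consumed)
            return false;   // nothing new; the reader is never blocked
        *out = data;
        consumed = true;
        return true;
    }
};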

The TM runtime system uses an event dispatcher to trigger a task when a new event is received at its port. There are two types of actors: interrupt service routines (ISRs) and tasks. ISRs respond to external events, and tasks are triggered entirely by events produced by peer actors. These two types do not intersect. ISRs do not have triggering rules; conceptually, an ISR usually appears as a source actor or a port that transfers events into the model. An ISR is synthesized as an independent thread. Tasks have a much richer set of interfaces than ISRs and have a set of methods that define the split-phase reaction of a task. Outputs, once produced, are made immediately available as trigger events to downstream actors.

Section 3.2 suggested that a partial method of reducing non-determinacy in TinyGALS programs due to one or more interrupts during an actor iteration is to delay producing outputs from an actor until the end of its iteration. This is similar to the TM method of only producing outputs at the end of an actor's deadline.

6.2 Design, Simulation, and Deployment Environments

A number of frameworks for designing, simulating, and deploying wireless systems exist, though none include all of the capabilities of Viptos. Some information presented in this section is excerpted from papers on VisualSense [8] and Viptos [21, 22].

6.2.1 Design and simulation environments

ns-2 [77] is a well-established, open-source network simulator. It is a discrete-event simulator with extensive support for simulating TCP/IP, routing, and multicast protocols over wired and wireless (local and satellite) networks. Wireless and mobility support in ns-2 comes from the Monarch project, which provides channel models and wireless network layer components in the physical, link, and routing layers [14].

SensorSim [79] builds on ns-2 and claims power models and sensor channel models. A power model consists of an energy provider (the battery) and a set of energy consumers (CPU, radio, and sensors). An energy consumer can have several modes, each corresponding to a different trade-off between performance and power. The sensor channels model the dynamic interaction between the physical environment and the sensor nodes. SensorSim also claims hybrid simulation in which real sensor nodes can participate. Unfortunately, SensorSim is no longer under development and will not be publicly released.

OPNET Modeler [78] is a commercial tool that offers sophisticated modeling and simulation of communication networks. An OPNET model is hierarchical, where the top level contains the communication nodes and the topology of the network. Each node can be constructed from software components, called processes, and each process can be constructed using finite state machine (FSM) models. In conventional OPNET models, nodes are connected by static links. The OPNET Wireless Module provides support for wireless and mobile communications. It uses a 13-stage "transceiver pipeline" to dynamically determine the connectivity and propagation effects among nodes. Users can specify transceiver frequency, bandwidth, power, and other characteristics. The transceiver pipeline stages use these characteristics to calculate the average power level of the received signals to determine whether the receiver can receive this signal. OPNET also supports antenna gain patterns and terrain models.

OMNeT++ [98] is an open source tool for discrete-event modeling. It shares many concepts, solutions, and features with OPNET. But instead of using FSM models for processes, OMNeT++ defines a component interface for the basic module, with an object-oriented approach similar to the abstract semantics of Ptolemy II [28]. With the Mobility Framework extension, OMNeT++ can also model wireless and mobile networks. The NesCT tool of the EYES WSN project allows users to run TinyOS applications directly in OMNeT++ simulations.

J-Sim [97] is an open-source, component-based, compositional network simulation environment developed entirely in Java. It uses a discrete-event simulator to execute the entire model. A new wireless sensor framework [90] builds upon the autonomous component architecture (ACA) and the extensible internetworking framework (INET) of J-Sim, and provides an object-oriented definition of (1) target, sensor, and sink nodes, (2) sensor and wireless communication channels, and (3) physical media such as seismic channels, as well as mobility models and power models (both energy-producing and energy-consuming components). Application-specific models can be defined by sub-classing classes in the simulation framework and customizing their behaviors. This new framework also includes a set of classes and mechanisms to realize network emulation, extending the notion of network emulation to Berkeley Mica mote-based wireless sensor networks.

for example. TinyViz [65] is a Java-based graphical user interface for TOSSIM. is a scalable environment for parallel simulation of wireless systems [106]. from the application to the physical communication layer. The EmTOS wrapper library is similar to the TOSSIM simulated device library. Bagrodia founded Scalable Network Technologies. simulation. Thus. This means that the minimum granularity of a timer is 10 milliseconds. a C-based simulation language for sequential and parallel execution of discrete-event simulation models. EmTOS modules are restricted to using the Linux scheduler as the main programming model. Although Prowler provides a generic simulation environment. Inc. Prowler is an event-driven simulator that can be set to operate in either deterministic mode (to produce replicable results while testing the application) or in probabilistic mode (to simulate the nondeterministic nature of the communication channel and the low-level communication protocol of the motes).. corresponding to the Linux jiffy clock that is part of the scheduler in the Linux 2. Em* modules are implemented as user-space processes that communicate through message passing via device files.4 kernel. GloMoSim is designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers. both real and simulated. It relies on Parsec. which supports both wired and wireless networks. similar to the OSI (Open Systems Interconnection) seven-layer network architecture. on arbitrary (possibly dynamic) topology. or actuating the simulation itself. by setting the sensor values that simulated motes read. which expanded and further developed GloMoSim into a commercial tool called QualNet. Prowler [86] is a probabilistic wireless network simulator running under MATLAB and can simulate wireless distributed systems. Physical environment data from the network is extracted with SerialForwarder. and it was designed to be easily embedded into optimization algorithms. GloMoSim (Global Mobile system Simulator). from UCLA. a utility distributed with TinyOS that collects TinyOS packets sent to a mote base station attached to a PC and forwards them through the serial port. TinyViz supports software plugins that watch for events coming from the simulation—such as debug messages and radio messages—and react by drawing information on the display. its current target platform is the Berkeley Mica mote running TinyOS. It supports deployment. . each with its own API.121 mote-based wireless sensor networks. and visualization of live systems. It can incorporate an arbitrary number of motes. EmTOS [35] is an extension to Em* that enables an entire nesC/TinyOS application to run as a single module in an Em* system. setting simulation parameters. Em* [34] is a toolsuite for developing sensor network applications on Linux-based hardware platforms called microservers. emulation.

TinyViz includes a radio model plugin with two built-in models: "Empirical" (based on an outdoor trace of packet connectivity with the RFM1000 radios) and "Fixed radius" (all motes within a given fixed distance of each other have perfect connectivity, and no connectivity to other motes).

DyMND-EE [110] is a wireless sensor network simulator based on Ptolemy II that uses Em* to run nesC code in a Linux environment, using the FUSD kernel module to provide connections between simulated nodes and the DyMND-EE simulation manager. DyMND-EE is similar to Viptos, except that it requires modification to the nesC source code in order to use simulated sensor and other devices. One interesting part of this project is the DyMND Execution Sequencer (DES) user interface, which allows users to graphically specify the deployment topology of a sensor network, including target positions and trajectories, to model the physical environment, and to generate an XML configuration file, along with any properties files required by external runtime environments like Em*. DES generates a model encompassing the full sensor network from this XML configuration and existing Ptolemy II descriptions of the required actors. This generative technique is similar to the metaprogramming techniques presented in Chapter 5.

Other simulators used in the TinyOS community for cycle-accurate simulation/emulation of the Atmel AVR (the processor used in the Mica mote series) instruction set include ATEMU [80] and Avrora [96]. ATEMU simulates a byte-oriented interface to the radio and its transmissions at the bit level with precise timing. Avrora works at the byte level with precise timing, and its simulation speed scales much better than ATEMU for large numbers of nodes. Both support simulation of heterogeneous networks.

All of these systems provide extension points where model builders can define functionality by adding code. All except Em* provide some form of discrete-event simulation. Some are also open-source software. But none provide the ability that Viptos inherits from Ptolemy II to integrate diverse models of computation, such as continuous-time, dataflow, synchronous/reactive, and time-triggered. This capability can be used, for example, to model the physical environment, the physical dynamics of mobility of sensor nodes, their digital circuits, energy consumption and production, signal processing, or real-time software behavior. In these other frameworks, such models would have to be built with low-level code. None provide the ability to transition from high-level modeling to real code simulation and deployment. Viptos and Ptolemy II support hierarchical nesting of heterogeneous models of computation [28]. They also appear to be the only frameworks to provide a modern type system at the actor level (vs. the code level) [105]. They also appear to be unique among these modeling environments in that FSM models can be arbitrarily nested with other models, i.e., they are not restricted to be leaf nodes [33].

6.2.2 TinyOS development and editing environments

GRATIS II (Graphical Development Environment for TinyOS) is built on top of GME 3 (Generic Modeling Environment). The TinyOS component library is available as graphical blocks within GRATIS II. GRATIS II was developed mainly for static analysis of TinyOS component graphs and does not support simulation. However, given a valid model, the GRATIS II code generator can transform all the interface and wiring information into a set of nesC target files, which nc2moml can then import into Viptos for simulation.

TinyDT is a TinyOS 1.x plugin for the Eclipse platform that implements an IDE (integrated development environment) for TinyOS/nesC development. This open source project features syntax highlighting of nesC code, code completion for interface members, support for multiple target platforms and sensor boards, team development support (through Eclipse-CVS integration), code navigation, and support for multiple TinyOS source trees. TinyDT uses a Java-based nesC parser implemented using ANTLR to build an in-memory representation of the actual nesC application, which includes the component hierarchy, wirings, interfaces, and the JavaDoc-style nesC documentation.

TinyOS IDE is another Eclipse plugin that supports TinyOS project development and provides nesC syntax highlighting and automatic build support. Both TinyDT and TinyOS IDE complement Viptos in that they can be used to create and edit the source code for new TinyOS library components.

6.2.3 Programming and deployment environments

Sun Microsystems Laboratories has created a Java-based wireless sensor network platform called Sun SPOT (Small Programmable Object Technology) [88]. The Sun SPOT is based on a 32-bit 180 MHz ARM920T core with 512 KB of RAM and 4 MB of flash memory, and has a CC2420 802.15.4 radio with an effective range of about 80 meters. It runs the Squawk VM [85], a small J2ME-compliant Java virtual machine. The Sun SPOT supports multiple concurrently running applications.

SPOTWorld is "an integrated management, deployment, debugging and programming tool" for Sun SPOTs [87, 89]. This graphical tool can run stand-alone or be integrated with NetBeans. SPOTWorld depicts each automatically discovered Sun SPOT, and users can manage each device, e.g., to get device status information, set a persistent name property, deploy code, reset the device, or start any of the available applications. Users can graphically address individual running applications in order to pause, resume, or exit each one. SPOTWorld also has an experimental feature that allows users to drag an application from one SPOT to the next, even as the application runs.

SPOTWorld enables the user to compile a collection of applications and deploy the resulting file over the air to selected Sun SPOTs. The developers of the Sun SPOT and SPOTWorld have expressed interest in integrating features of Viptos with SPOTWorld.

6.3 Summary

This chapter presented a number of frameworks related to TinyGALS/galsC, Viptos, and the metaprogramming techniques presented in earlier chapters. Future versions of these tools can benefit from cross-fertilization of the techniques presented in this dissertation.

Chapter 7

Conclusion

Developing software for wireless sensor networks today is an error-prone and tedious process that involves patching together many different tools and techniques, usually using very low-level, node-centric code. This dissertation discussed raising the conceptual level of designing, simulating, and deploying wireless sensor network applications by using actor-oriented programming tools and techniques. Actor-oriented programming provides a way to unify the layers and stages of application development—between the operating system, middleware, and macroprogramming layers, and between the deployment, simulation, and design stages.

TinyGALS provides a globally asynchronous, locally synchronous programming model that combines an actor-oriented (message-oriented) model with an object-oriented (procedure-oriented) model and allows application developers to use high-level actors as a first-order programming concept. This combination balances fast response with an easy-to-understand programming model that puts application tasks first, but still allows developers to use a low-level programming model when needed. galsC is a language that implements the TinyGALS programming model, and this dissertation described its syntax and the high-level type checking, concurrency error detection, and scheduling and communication code generation facilities provided by the galsC compiler.

Viptos provides an integrated, actor-oriented design, simulation, and deployment environment for wireless sensor network applications. Application developers can use Viptos to create abstract models of their intended systems and refine them down to low-level code that can be transferred to target hardware.

Various metaprogramming and generative programming techniques described in this dissertation, using higher-order actors in Ptalon, VisualSense, Ptolemy II, and/or Viptos, enable wireless sensor network application developers to create high-level descriptions or models and automatically generate sensor network simulation scenarios.

The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks. All of the tools I developed and described in this dissertation are open-source and freely available on the web.

In Proceedings of the First European Workshop on Wireless Sensor Networks (EWSN 2004). Talcott. ACTORS: A Model of Concurrent Computation in Distributed Systems. In ICPP ’04: Proceedings of the 2004 International Conference on Parallel Processing (ICPP’04). A foundation for actor computation. Luciano Lavagno. June 1999. July 2006. Paolo Giusto. [5] Amol Bakshi and Viktor K. Sentovich. 7(1):1–72. Alberto Sangiovanni-Vincentelli. Lee. Sanjeev Kohli. Smith. Abstraction and modularity mechanisms for concurrent computing. Structured communication in single hop sensor networks. 1993. and Kei Suzuki. The MIT Press Series in Artificial Intelligence. Mason. 39(7):48–54. Andel and Alec Yasinsac. and Carolyn L. and Yang Zhao. [8] Philip Baldwin. Xiaojun Liu. Anna Patterson. Journal of Functional Programming. [6] Amol Bakshi and Viktor K. Cambridge. Modeling of sensor nets in Ptolemy II. [2] Gul A. Rajendra Panwar. [4] Todd R. USA. Agha. 1(2):3–14. Attila Jurecska. [3] Gul A. On the credibility of manet simulations. Prasanna. WooYoung Kim. pages 423–430. Agha. In IPSN’04: Proceedings of the Third International Symposium . Algorithm design and synthesis for wireless sensor networks. [7] Felice Balarin. 2004. IEEE Transactions on ComputerAided Design of Integrated Circuits and Systems. Massimiliano Chiodo.127 Bibliography [1] Gul Agha. IEEE Parallel and Distributed Technology: Systems and Applications. Scott F. and Daniel Sturman. 1997. Computer. 1986. IEEE Computer Society. MIT Press. Washington. 18(6):834–849. Svend Frølund. pages 138–153. Ellen M. Ian A. DC. Synthesis of software programs for embedded control applications. Harry Hsieh. January 2004. Prasanna. Edward A.

10(4):563–579. pages 359–368. Eric Fiterman. pages 85–97. James Carlson. 2004. 11 January 2007. Lee. NY. Johnson. Jeff Rose.). Xiaojun Liu. 1999.). Berkeley. ACM Press. New York. Jing Deng. 1998. Stephen Neuendorffer. Adam Torgerson. 2006. and Paul Le Guernic. Fast-prototyping using the BTnode platform. Charles Gruenwald. Star Galaxy Publishing. Tiebing Zhang. A SystemC Primer. and Richard Han. Edward A. In ı CONCUR ’99: Proceedings of the 10th International Conference on Concurrency Theory. [10] Albert Benveniste. Mobile Networks and Applications. [14] Josh Broch. NY. [15] Christopher Brooks. UK. Belgium. The performance and energy consumption of embedded real-time operating systems. Springer-Verlag. 2003. Jul 2005. Belgium. Anmol Sheth. EECS Department. Technical Report UCB/ERL M05/23. Brinda Ganesh. Yang Zhao. Paul Kohout. Automation and Test in Europe. European Design and Automation Association. and Haiyang Zheng (eds. Berkeley. 2004. Yang Zhao. 3001 Leuven. and Bruce Jacob. [13] Shah Bhatti. pages 162–177. USA. From synchrony to asynchrony. MANTIS OS: an embedded multithreaded operating system for wireless micro sensor platforms. and Jorjeta Jetcheva. ACM Press. IEEE Transactions on Computers. Benoˆt Caillaud. Lee.128 on Information Processing in Sensor Networks. Chris Collins. University of California. 52(11):1454–1469. [12] J. Heterogeneous concurrent modeling and design in Java (Volume 1: Introduction to Ptolemy II). pages 977–982. Xiaojun Liu. Brian Shucker. USA. In DATE ’06: Proceedings of the Conference on Design. Second Edition. [16] Christopher Brooks. New York. Yih-Chun Hu. Christine Smit. University of California. Edward A. EECS Department. 2005. A performance comparison of multi-hop wireless ad hoc network routing protocols. [9] Kathleen Baynes. Technical Report UCB/EECS-2007-7. David B. Maltz. [11] Jan Beutel. . and Haiyang Zheng (eds. Heterogeneous concurrent modeling and design in Java (Volume 3: Ptolemy II domains). Hui Dai. David A. London. Bhasker. In MobiCom ’98: Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking. Stephen Neuendorffer.

EECS Department. University of California. [22] Elaine Cheong. Addison Wesley Longman. In Proceedings of Design. Berkeley. Elaine Cheong. EECS Department. [24] Elaine Cheong and Jie Liu. pages 326–341. November 2006. and Andrew Christopher Mihal. A formalism for higher-order composition languages that satisfies the ChurchRosser property. March 2003. 7–11 March 2005. University of California. and Yang Zhao. Berkeley. The Power of Higher-Order Composition Languages in System Design. Berkeley. University of California. [19] James Adam Cataldo. TinyGALS: A programming model for event-driven embedded systems. Lee. Jie Liu. [25] Elaine Cheong and Jie Liu. Overview of generative software development.. Inc. February 2006. PhD thesis. Automation and Test in Europe (DATE05). Viptos: A graphical development and simulation environment for TinyOS-based wireless sensor networks. Berkeley. [26] Krzysztof Czarnecki. CA. Lee. Brooks. Berkeley. galsC: A language for event-driven embedded systems. Lee.129 [17] Frederick P. and Feng Zhao. . 2005. Edward A. galsC: A language for event-driven embedded systems. Published as Technical Memorandum UCB/ERL M03/14. [23] Elaine Cheong. Judy Liebman. Edward A. EECS Department. University of California. Technical Report UCB/EECS-2006-150. Joint modeling and design of wireless networks and sensor node software. [21] Elaine Cheong. May 2003. Jr. USA 94720. In Unconventional Programming Paradigms (UPP) 2004. Technical Report UCB/EECS-2006-15. Berkeley. Memorandum UCB/ERL M04/7. volume 3566/2005 of Lecture Notes in Computer Science. and Yang Zhao. [20] Elaine Cheong. April 2004. Technical Report UCB/EECS-2006-48. 9 May 2006. Master’s thesis. University of California. In Proceedings of the Eighteenth Annual ACM Symposium on Applied Computing. The Mythical Man-Month: Essays on Software Engineering. Edward A. Thomas Huining Feng. pages 698–704. University of California. 20th Anniversary Edition. Design and implementation of TinyGALS: A programming model for eventdriven embedded systems. Springer Berlin / Heidelberg. 1995. EECS Department. 18 December 2006. [18] Adam Cataldo. Berkeley.

Journal of Applied Probability. Contiki . USA. Florida. Dantas. IEEE Computer Society. In ATEC’04: Proceedings of the USENIX Annual Technical Conference 2004. Edward A. A system for simulation. 18(6):742–760. Thanos Stathopoulos. Thanos Stathopoulos.130 [27] Adam Dunkels. Stephen o Neuendorffer. Alberto Cerpa. A middleware for OSCAR and wireless sensor network environments. Sonia Sachs. June 2003. and Deborah Estrin. 2004. and Yuhong Xiong. Washington. and Tom Schoellhammer. and deployment of heterogeneous sensor networks. CA. Proceedings of the IEEE. In SenSys ’04: Proceedings of the 2nd International . Eric Osterweil. [32] David Gay. USA. [29] D. Lee. November 2004. In Proceedings of the First IEEE Workshop on Embedded Networked Sensors (EmNetS-I). J. R. A. and David Culler. pages 283–296. and Edward A. Montez. [28] Johan Eker. Lee. Navigation in small world networks: a scalefree continuum model. Jeremy Elson. Nithya Ramanathan. Ferreira. Matt Welsh. Deborah Estrin. Bilung Lee. In Proceedings of the 4th International Conference on Information Processing in Sensor Networks (IPSN’05). Gruia-Catalin Roman. 2006. Taming heterogeneity—the Ptolemy approach. [30] Chien-Liang Fok. Rob von Behren. and Martius Rodriguez. [33] Alain Girault. Xiaojun Liu.a lightweight and flexible o o operating system for tiny networked sensors. Phil Levis. 91(1):127–144. IEEE. 43(4):1173–1180. 2007. Bj¨ rn Gr¨ nvall. and Thiemo Voigt. [31] Massimo Franceschetti and Ronald Meester. emulation. [34] Lewis Girod. M. J¨ rn W. IEEE Transactions On Computer-Aided Design Of Integrated Circuits and Systems. Jie Liu. A. In HPCS ’07: Proceedings of the 21st International Symposium on High Performance Computing Systems and Applications. Berkeley. DC. Nithya Ramanathan. USENIX Association. and Chenyang Lu. R. Eric Brewer. In Proceedings of Programming Language Design and Implementation (PLDI) 2003. EmStar: A software environment for developing and deploying wireless sensor networks. [35] Lewis Girod. Jeremy Elson. April 2005. Hierarchical finite state machines with multiple concurrency models. C. The nesC language: A holistic approach to networked embedded systems. USA. January 2003. Jozsef Ludvig. June 1999. Pinto. pages 382–387. Janneck. Tampa. Mobile agent middleware for sensor networks: An application case study.

1999. Computer Standards & Interfaces. Ramesh Govindan. Walker. USA. April 1990. pages 163–176. pages 69–80. A dynamic operating system for sensor nodes. Ram Kumar. In SenSys ’04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems. Ki-Young Jang. Amir Pnueli. USA. New York. In MobiSys ’05: Proceedings of the 3rd International Conference on Mobile Systems. NY. 2005. IEEE Transactions on Software Engineering. 2006. Amnon Naamad. The Tenet architecture for tiered sensor networks. Rivi Sherman. [37] Ben Greenstein. Hagi Lachover. pages 126–140. 2004. ACM SIGPLAN Notices. Jeongyeup Paek. USA. pages 153–166. Roy Shea. New York. ACM Press. Applications. Michal Politi. and Mani Srivastava. Ben Greenstein. New York. 2005. [42] David Harel. ACM Press. volume 3560/2005 of Lecture Notes in Computer Science. A sensor network application construction kit (SNACK). NY. Deborah Estrin. [38] Ramakrishna Gummadi. and Mark Trakhtenbrot. [43] Rolf Hempel and David W. and Deborah Estrin. Eddie Kohler. [36] Omprakash Gnawali. STATEMATE: A working environment for the development of complex reactive systems. [41] Per Brinch Hansen. The emergence of the MPI message passing standard for parallel computing. Kluwer Academic Publishers. 21(1):51–62. [40] Chih-Chieh Han. August Joki. and Services. Omprakash Gnawali. ACM Press. 1993. Eddie Kohler. 1998. 33(3):65–72. and Eddie Kohler. NY. ACM Press. Macro-programming wireless sensor networks using Kairos. In Proceedings of the International Conference on Distributed Computing in Sensor Systems (DCOSS). Synchronous Programming of Reactive Systems. Marcos Vieira. Aharon Shtull-Trauring. 16(4):403–414. . [39] Nicolas Halbwachs. and Ramesh Govindan. NY. pages 201–213. New York. 2004. Springer Berlin / Heidelberg. USA.131 Conference on Embedded Networked Sensor Systems. An evaluation of the message-passing interface. In SenSys ’06: Proceedings of the 4th International Conference on Embedded Networked Sensor Systems.

In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems. A software architecture supporting networked sensors. and Kristofer Pister. New York. pages 471–475. Xiaojun Liu. ACM Press. EECS Department. Overview of the Ptolemy project. Journal of Artificial Intelligence. [52] Gilles Kahn. pages 64–72. France. and Christoph Meyer Kirsch. 1977. Seth Hollar. In IPSN ’05: Proceedings of the 4th International Symposium on . In Proceedings of the IFIP Congress 74. 8(3):323–364. Alec Woo. NY. [48] Jason Hill. Master’s thesis. [47] Jason Hill. Power and performance evaluation of globally asynchronous locally synchronous processors. USA. [45] Maurice Herlihy. Viewing control structures as patterns of passing messages. A methodology for implementing highly concurrent data objects. 2001. USA. In MobiCom ’00: Proceedings of the 6th Annual International Conference on Mobile Computing and Networking. ACM Press. Directed diffusion: a scalable and robust communication paradigm for sensor networks. New York. Jie Liu. [46] Carl Hewitt. Henzinger. and Deborah Estrin. November 1993. Stephen Neuendorffer. University of California. In Proceedings of the 29th Annual International Symposium on Computer Architecture. July 2003.132 [44] Thomas A. Yuhong Xiong. NY. 2000. Ramesh Govindan. Yang Zhao. Beyond event handlers: programming wireless sensors with o attributed state machines. ACM Transactions on Programming Languages and Systems. Berkeley. pages 56–67. 2000. System architecture directions for networked sensors. The semantics of a simple language for parallel programming. Compilers and Tools for Embedded Systems (LCTES’01). 15(5):745–770. In Proceedings of the ACM SIGPLAN Workshop on Languages. North-Holland Publishing Company. [49] Christopher Hylands. 2002. University of California. IEEE Computer Society. ACM Press. pages 158–168. 1974. Technical Report UCB/ERL M03/25. David Culler. Robert Szewczyk. [50] Chalermek Intanagonwiwat. Berkeley. [51] Anoop Iyer and Diana Marculescu. Edward Lee. International Federation for Information Processing. and Haiyang Zheng. Benjamin Horowitz. [53] Oliver Kasten and Kay R¨ mer. 2000. pages 93–104. Embedded control systems development with Giotto. Paris.

[63] Man-Kit Leung. Springer-Verlag. 56. John Jannotti. [62] Edward A. 2002. The Click Modular Router. ActorNet: an actor platform for wireless sensor networks. Reprinted in Operating Systems Review. 2006. Lauer and Roger M. [56] YoungMin Kwon. Frans Kaashoek. 18(3):263–297. Needham. Technical Report UCB/ERL M00/12. PhD thesis. Berkeley. IEEE Press. 7(1-4):25–45.133 Information Processing in Sensor Networks. 2005. [55] Eddie Kohler. The Click modular router. pages 1297–1300. Lee. NY. pages 45–52. Benjie Chen. University of California. Spring 2007 EE290Q (Wireless Sensor Networks) Class Project Report. pages 191–227. MoML – a modeling markup language in XML – version 0. [61] Edward A. [57] William W. New York. Dataflow process networks. 13. Advances in Petri Nets. [58] Hugh C. 9 May 2007. Modeling concurrent real-time processes using discrete events. and Gul Agha. 2000. USA. USA. LaRue. Robert Morris. In Proc. May 1995. pp. IRIA. London. NJ. Kirill Mechitov. Piscataway. Lee. In Concurrency and Hardware Design. EECS Department. 2002. [60] Edward A. and M. 1999. On the duality of operating system structures. Oct 1978. Sameer Sundresh. November 2000. Proceedings of the IEEE. Lee and Steve Neuendorffer. Parks. Sherry Solden. . ACM Press. Massachusetts Institute of Technology.2 April 1979. In AAMAS ’06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems. Reviving the value of WSN simulation results through Viptos extensions. 2000. 3–19. Advances in Computers. Lee and Thomas M. 83(5):773–801. [54] Eddie Kohler. Functional and performance modeling of concurrency in VCC. [59] Edward A.4. Embedded software. Annals of Software Engineering. UK. Second International Symposium on Operating Systems. and Bishnupriya Bhattacharya. ACM Transactions on Computer Systems (TOCS).

[67] Jie Liu. Edward A. Hellerstein. Timed multitasking for real-time embedded software. pages 85–95. Networks on Chip. [68] Jie Liu. [71] Andrew Mihal and Kurt Keutzer. The design of an acquisitional query processor for sensor networks. [70] Samuel Madden. John Rushing. USA. 1997. ACM Press. pages 65–75. Real time target tracking with binary sensor networks and parallel computing. editors. Mat´ : a tiny virtual machine for sensor networks. Michael J. In ASPLOSe X: Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems. pages 112–117. 10-12 May 2006. In Axel Jantsch and Hannu Tenhunen. Sara J. [69] Jie Liu and Edward A. Inside CORBA: Distributed Object Standards and Applications. [72] Thomas J. 2003. Franklin. James Reich. New York. and Guang R. Elaine Cheong. pages 126–137. 2005. . 2002. In SIGMOD ’03: Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data. Steve Tanner. pages 491–502. and Evans Criswell. In Proceedings of 2006 IEEE International Conference on Granular Computing. Joseph M. Lee. Addison-Wesley. Najjar. Nelson Lee. New York. and Feng Zhao. Maurice Chu. 2003. February 2003. Juan Liu. Mowbray. Mapping concurrent applications onto architectural platforms. Semantics-based optimization across uncoordinated tasks in networked embedded systems. Lee. pages 39–59. pages 273–281. State-centric programming for sensor-actuator network systems. In EMSOFT ’05: Proceedings of the 5th ACM International Conference on Embedded Software. Ruh. NY. Graves. and Richard M. New York. Advances in the dataflow computational model. Soley. Matt Welsh. TOSSIM: accurate and scalable simulation of entire TinyOS applications. [73] Walid A. Kluwer Academic Publishers. William A. 2003. [65] Philip Levis. IEEE Pervasive Computing. 25(1):1907–1929. and Feng Zhao. NY. chapter 3. USA. ACM Press.134 [64] Philip Levis and David Culler. [66] Hong Lin. January 1999. and Wei Hong. 2(4):50–62. 2003. IEEE Control Systems Magazine. NY. Gao. USA. and David Culler. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (SenSys 2003). ACM Press. Parallel Computing. ACM Press.

In MSWIM ’00: Proceedings of the 3rd ACM International Workshop on Modeling. pages 37–44. PhD thesis. http://www. ATEMU: A fine-grained sensor network simulator. Kuang-Ching Wang. Dan Rusk. UW-API: A network routing application programmer’s interface (draft version 1. 29 October 2001. Opnet modeler. University of Technology at Sydney. and Mani B. NY. [79] Sung Park. Berkeley. In IPSN ’05: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks. .edu/nsnam/ns. PhD thesis. University of California. pages 145–152. Realtime Signal Processing: Dataflow. and Manish Karir. http://www. The Regiment macroprogramming system. 2004.isi. IEEE Press. In IPSN ’07: Proceedings of the 6th International Conference on Information Processing in Sensor Networks. 2003. [75] Ryan Newton. In Proceedings of the First IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (SECON’04). Visual. Springer-Verlag New York. [76] Ryan Newton. Building up to macroprogramming: an intermediate language for sensor networks. Actor-Oriented Metaprogramming.opnet. The Design and Analysis of Computer Experiments. pages 489–498. Inc. Santner. and Functional Programming. [80] Jonathan Polley. and William I. Neuendorffer.135 [74] Stephen A. Inc. New York. [81] Parmesh Ramanathan. 2007.ns-2. [78] OPNET Technologies. Springer Series in Statistics. [77] The network simulator . Analysis and Simulation of Wireless and Mobile Systems. USA. USA. and Thomas Clouqueur. Notz. John S. Srivastava. ACM Press. and Matt Welsh. pages 104–111. [82] Hideki John Reekie. 2000. and Matt Welsh.. Arvind. [83] Thomas J. Baras. Jonathan McGee. NY. 2005. USA. Andreas Savvides. University of Wisconsin-Madison.com. Williams. ACM Press. SensorSim: a simulation framework for sensor networks. Technical report. Dionysys Blazakis. 2005. Kewal Saluja. Greg Morrisett.2). 1995. EECS Department. Brian J. New York. Department of Electrical and Computer Engineering. NJ. Piscataway.

[89] Randall B. Enabling JavaTM for small wireless devices with Squawk and SpotWorld. 8-15 March 2003. ACM Press. Hou. SPOTWorld and the Sun SPOT. pages 706–707. 16 October 2005. In IPSN ’07: Proceedings of the 6th International Conference on Information Processing in Sensor Networks. Languages. Hung-Ying Tyan. USA. October 2003. USA. [88] Randall B. and Applications. November 1999. DC. pages 175–187. Grand challenges in mission-critical systems: Dynamically reconfigurable real-time software for flight control systems. IEEE Computer Society. 2007. Cristina Cifuentes. Brown. ´ [86] Gyula Simon. Lu-Chuan Kung. In Proceedings 2003 IEEE Aerospace Conference. New York. John Daniels. and Akos L´ deczi. Smith. and Honghai Zhang. JavaTM on the bare metal of wireless sensor devices: The Squawk Java virtual machine. P´ ter V¨ lgyesi. New York. John Daniels. Dave Cleal. pages 3 1339–3 1346. and Dave Cleal. Ning Li. Washington. In Workshop on RealTime Mission-Critical Systems in conjunction with the 1999 Real-Time Systems Symposium. 2006. NY. In 2nd Workshop on Building Software for Pervasive Computing. 2005. NY. Simulation-based optimizae o o o e tion of communication protocols for large-scale wireless sensor networks. USA. J-Sim: A simulation environment for wireless sensor networks. Smith. Programming the world with Sun SPOTs. and Derek White.136 [84] Y. NY. In Proceedings of the 15th European Simulation Symposium (ESS’03). Stewart and Robert A. pages 493–499. Cristina Cifuentes. In VEE ’06: Proceedings of the 2nd International Conference on Virtual Execution Environments. Egan. . Wei-Peng Chen. pages 565–566. New York. and Doug Simon. [91] David B. pages 78–88. Jennifer C. Parallel simulation made easy a with OMNeT++. ACM Press. In ANSS ’05: Proceedings of the 38th Annual Symposium on Simulation. volume 3. Ahmet Sekercioglu. Bernard Horan. USA. 2006. Hyuk Lim. Andr´ s Varga. Mikl´ s Mar´ ti. In OOPSLA ’06: Companion to the 21st ACM SIGPLAN Conference on Object-Oriented Programming Systems. [87] Randall B. [90] Ahmed Sobeih. [85] Doug Simon. Smith. ACM Press. and Gregory K.

tinyos. NJ. Massachusetts Institute of Technology. Springer-Verlag. . Richard A. Technical Report MIT/LCS/TM-528. 2002. In Proceedings of the Fourth Annual Symposium on Logic in Computer Science. 23(12):759–776. Programming sensor networks using abstract regions. London. Fourth International Conference on Information Processing in Sensor Networks. Khosla. [93] Janos Sztipanovits and Gabor Karsai. Design. USA. and Scott Shenker. The Ohio State University. [94] Arsalan Tavakoli. USA. Daniel Lee. 2004. Berkeley. Philip Levis. [96] Ben Titzer. Stride scheduling: Deterministic proportionalshare resource management. June 1995. David Chu. pages 32–49. In GPCE ’02: Proceedings of the 1st ACM SIGPLAN/SIGSOFT Conference on Generative Programming and Component Engineering. IEEE Transactions on Software Engineering. 6-9 June 2001. [101] Matt Welsh and Geoff Mainland. Volpe. PhD thesis. Piscataway. pages 477–482. pages 92–97. USA. Waldspurger and William E. USENIX Association. In Proceedings of IPSN’05. December 1997. In International Workshop on Wireless Sensor Network Architecture (WWSNA 2007). CA. 2002. In NSDI’04: Proceedings of the 1st Conference on Symposium on Networked Systems Design and Implementation. 25-27 April 2007. The OMNeT++ discrete event simulation system. Type inference for record concatenation and multiple inheritance. Design of dynamically reconfigurable real-time software using port-based objects. Generative programming for embedded systems. In Proceedings of the a European Simulation Multiconference (ESM’2001). [99] Carl A.net. and Jens Palsberg. 2005. IEEE Press. [100] Mitchell Wand.137 [92] David B. Declarative sensornet architecture. Joseph Hellerstein. Cambridge. MA. pages 29–42. and Pradeep K. An open-source OS for the networked sensor regime. [98] Andr´ s Varga. UK. Avrora: Scalable sensor network simulation with precise timing. realization and evaluation of a component-based compositional software architecture for network simulation. Weihl. [97] Hung-Ying Tyan. [95] TinyOS community forum: http://www. 1989. Stewart.

5-12 March 2005. and F. and Services. [110] Andrew L. [106] Xiang Zeng. TinyGALS and CI. University of California. EECS Department. volume 3868 of Lecture Notes in Computer Science. In Proceedings of 2005 IEEE Aerospace Conference. pages 3820–3830. The o Third European Workshop on Wireless Sensor Networks (EWSN). and David Culler. Juan Liu. and Jie Liu. Feng Zhao. Jie Liu. Cory Sharp.pdf. In K. . http://ptolemy. Mattern. [108] Feng Zhao. and James Reich.wikipedia. R¨ mer. New York. James Yang. In Proceedings of the 12th Workshop on Parallel and Distributed Simulation – PADS ’98.edu/˜ellen zh/click tinygals ci. August 2003.eecs. Berkeley. Karl. 91(8):1199–1209. 2004. Elsevier/Morgan-Kaufmann. pages 99–110. Zimdars. In MobiSys ’04: Proceedings of the 2nd International Conference on Mobile Systems. ACM Press. PhD thesis. pages 5–20. GloMoSim: A library for parallel simulation of large-scale wireless networks. http://www. Wireless Sensor Networks: An Information Processing Approach.berkeley. Proceedings of the IEEE. A study of Click. [109] Yang Zhao.org/. [105] Yuhong Xiong.138 [102] Kamin Whitehouse. Eric Brewer. Collaborative signal and information processing: An information directed approach. An Extensible Type System for Component-Based Design. USA. [107] Feng Zhao and Leonidas Guibas. Leonidas Guibas. [104] Wikipedia. Springer-Verlag Berlin Heidelberg. H. NY. April 2003. 2002. 2004. and Prasanta Bose. Hood: a neighborhood abstraction for sensor networks. 2006. and Mario Gerla. Semantic Streams: A framework for composable semantic interpretation of sensor data. pages 154–161. End-to-end prototyping and validation for health management sensor networks. [103] Kamin Whitehouse. 26-29 May 1998. Applications. Rajive Bagrodia. editors.
