Actor-Oriented Programming for Wireless Sensor Networks

Elaine Cheong

Electrical Engineering and Computer Sciences University of California at Berkeley
Technical Report No. UCB/EECS-2007-112 http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-112.html

August 30, 2007

Copyright © 2007, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong

B.S. (University of Maryland, College Park) 2000 M.S. (University of California, Berkeley) 2003

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences in the GRADUATE DIVISION of the UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge:

Professor Edward A. Lee, Chair
Professor Eric A. Brewer
Professor Paul K. Wright

Fall 2007

Actor-Oriented Programming for Wireless Sensor Networks

Copyright 2007 by Elaine Cheong

Abstract

Actor-Oriented Programming for Wireless Sensor Networks

by Elaine Cheong

Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences

University of California, Berkeley

Professor Edward A. Lee, Chair

Wireless sensor networks is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, seismic and structural monitoring, transportation, health care, and home automation. Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process.

In this dissertation, I advocate using an actor-oriented approach to designing, generating, programming, and simulating wireless sensor network applications. Actor-oriented programming provides a common high-level language that unifies the programming interface between the operating system, node-centric, middleware, and macroprogramming layers of a sensor network application.

This dissertation presents the TinyGALS (Globally Asynchronous, Locally Synchronous) programming model, which provides constructs to systematically build concurrent tasks called actors. TinyGALS is implemented in the galsC programming language, which is built on the TinyOS programming model. The galsC compiler toolsuite provides high-level type checking and code generation facilities and allows developers to deploy actor-oriented programs on actual sensor node hardware.

This dissertation then describes Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, an actor-oriented graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for TinyOS networks.

This dissertation also presents methods for using higher-order actors with various metaprogramming and generative programming techniques that enable wireless sensor network application developers to create high-level, parameterizable descriptions and automatically generate sensor network simulation scenarios from these models.

All of the tools I developed and describe in this dissertation are open-source and freely available on the web. The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks.

Professor Edward A. Lee
Dissertation Committee Chair

To the teachers who have challenged, encouraged, and guided me through life.

Café Ptolemy

Executive Chefs: Elaine Cheong and Andrew Mihal

Appetizers
Avocado Smash and Double-Decker Baked Quesadillas
Corn Pancakes with Green Onions, Crème Fraîche, and Cherry Tomatoes

Salads
Fuyu Persimmon, Avocado & Watercress Salad with Discrete-Event Miso Dressing
Corn and Cherry Tomato Salad with Arugula

Soups
Mexican Chicken Soup
Butternut Squash Bisque
Why-the-Chicken-Crossed-the-Model Santa Fe-Tastic Tortilla Soup

Entrees
Chicken Pot Pie
Fettuccine with Concurrent Meyer Lemon Butter Sauce
Chicken Tikka Masala
Farfalle with Ptalon Pesto, Actors, Feta, and Cherry Tomatoes
Butternut Squash Lasagna
Cumin Crusted Chicken with Cotija and Mango-Garlic Sauce
Mashed Motes with Green Onion Pesto
Butternut Squash, Metacabbage & Pancetta Risotto with Basil Oil

Desserts
Marillenknödel: Austrian apricot dumplings
Zwetschgendatschi: Bavarian plum delicacy

Drinks
Plum Granita with Limoncello
Vinho do IOPorto
Lychee-flavoured Ramune (ラムネ)

. . 3. . . . . . . . . . . . . . . . . .1 VisualSense . . . . . . . . . . . . . 3. . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . 2. . . . . . . . 3. . 2. . . . . . . . .4 Previously Published Work . . . . . . . .2. . . . . . . 2. . . . . . . 3. . . . . . . . . . . . . . . Background 2. . . . . . . . . . . . . . . . . . .5 Summary . . . . .1 TinyOS . . . . . .1 NesC syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Code Generation . .3. . . . . . . . . . . . .3 Summary .1. . . .3 Link model within actors . . . . . . . . . . . . . . . . .1 Concurrency . . . . . . . . . . . . . .2. . . . . . . .1 Links and connections . . . . . . . . . 3 TinyGALS and galsC 3. . . . . . . . . . . . . . . . . . . 3. . . . . . . . . . . . . . 2. . .1. . . . . . . . . . . . . . . . . . . . .2 Concurrency and Determinacy Issues . . . 1. . . . 3. . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . 2 . . . .2 TinyOS execution model 2. . . . . . 3. . . . . . . . . . . . . . . . . . . . .iii Contents List of Figures List of Tables 1 Introduction 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Communication . . . . . . . . . . . . . . . . . . . . . . .1.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . .2 Actor-Oriented Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Determinacy . . .1 Programming constructs and language syntax . . . .1. . . . . . 3. . . . . .3 TinyGUYS . . . .3 Actor-Oriented Programming for Wireless Sensor Networks 1. . . 3. . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . . 3. . . . . . . . . . . . . . . . . . 3. . . . .1 Wireless Sensor Networks . . . . . . . . . .2 Execution model and language semantics . . . . . . . . .1. . . . .3 Summary . . . . . . . 3. . . . . . . . . . . 3.1 The TinyGALS Programming Model and galsC Language 3. . . . . . . . . . . . . . . . . . . . . . . . .4 System initialization and start of execution . . vi viii 1 2 3 5 8 9 9 10 10 11 14 17 19 22 22 28 36 38 39 40 40 41 48 49 51 51 53 53 .2 Ptolemy II . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3. .4 Type inference and type checking . . .

. . . . . . . . . . . . . . . . . . . . . . . .1. . . . . .2. . .1 Comparison to TOSSIM . . . . . .1. . .1. . . . . . . . . . . .1 Design and simulation environments . . . . .3 Summary . . . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . .3 Parameter Sweep . .1 Representation of nesC components . . Actors. . . . . 4. . . . 5. . . . . . . . . . .6 Memory usage Example . . . . .5 Discussion . . . .2 Reconfiguration in Ptalon . . . . . 82 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . .2. . . . . . . . . . . . .2 Radio . . . . . . . . . . . . . . 6. . . . . . . . . . . . .4. . . . .4 3. . . . . . . . . . Summary . . . . . . . . . . 6. . . . . . .4 Specifying WSN Applications Programmatically 5. . . . . . . . . . . . . . . . . .2 MPI .2 Performance Evaluation . . . . . . . . .2 Transformation of nesC components .3.3. . . . 5 . . . .1. . . . . 5. . . . . . . . . . . . . . . . . . . . . . . 5. . .3 Ptalon . 6.5 4 Viptos 4. . . . . . . . . . . 6. . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . and Components 5. . . . . . . .1. . . . . . . 4. . . . . . . . . . . . . . . . . . . . 86 . . .3 Generation of code for target deployment 4. . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . . . 4. . . . . . . . . . . . . . . . . . .1 Non-blocking . . . .6 Timed Multitasking . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Summary . . . . . . . . . . . . . . . .4 Generation of code for simulation . . .1. . . . . . . . .4. .1 Generative Programming and Metaprogramming 5. . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . .4. . . . . . . . . . . . .1. . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . 81 . . 4. . . .2 Design. . .2. . . . . . . . . . . . . . . . . Simulation. . 4. . . . . . . . . . . . . . .2 Small World . . . . . . . . . . 81 . . . . . . . . . . . 6. . . . . . . . . . 4. . .5 Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . 84 . . . . . . . . . . . . . . . .4 Higher-order actors . .iv 3. . . . . . . . . .1 TinyGALS and galsC . . . . . . . . . . .4 Click . . . . . . . . . . . . . . . . .1 Motivation . . . . . 89 . . 84 . . . . . . . . . . . . . .5 Summary . . . . . . . . . . . . . . . . . . . . . .1 A simple example . . . 6. . 6. . . . . . . . . . . . . . . . 3. . . . . . .3 Programming and deployment environments . . . 98 . . . . . . . . . . . . . . . . . . . . . .1. . . . . . . . Metaprogramming for Wireless Sensor Networks 5. . . . . . . . . . . . . . .2 TinyOS development and editing environments 6. . . . . . 6. . . . . .1 Design . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . 102 105 105 105 106 108 110 118 118 119 119 123 123 124 6 Related Work 6. and Deployment Environments . . . . . . . . . . . . . . . . . . . 4. . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 . . . . . . . . . . .4. . . . . . . . . . 4. . . . . 90 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Port-Based Objects .5 Simulation of TinyOS in Viptos . . .1. . 89 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . .4. . . . . . . . . . . . . . . . . 5. . . . . . . . . . .2 Higher-order Functions. . . 53 55 55 57 59 62 62 64 66 68 69 74 77 78 79 3. . . . . . . . . . . . . . . .5 Click and Ptolemy II . . . 90 . . . . . . . . . . . . . . . .

v 7 Conclusion 125 127 Bibliography .

List of Figures

1.1 Object-oriented design vs. actor-oriented design. Source: Edward A. Lee.
1.2 WSN landscape.
2.1 Sample nesC source code.
2.2 Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom).
2.3 XML representation of the Sinewave source.
3.1 Graphical representation of the SenseTag application.
3.2 Source code for the SenseTag application.
3.3 Source code for TimerActor and SenseActor.
3.4 Source code for the TimerC and TimerM components.
3.5 TinyGALS scheduling algorithm.
3.6 A single interrupt.
3.7 Active system state after one interrupt.
3.8 One or more interrupts where actors have delayed output.
3.9 Active system state determined by adding the active system state after one non-interleaved interrupt.
3.10 Two events are produced at the same time.
3.11 A self-loop actor triggered by an interrupt.
3.12 A single-output, multiple-input connection.
3.13 Directed acyclic graphs (DAGs) within actors.
3.14 Type checking example.
3.15 Code generation for the SenseTag application.
3.16 Sensor array for object detection and reporting.
3.17 Top-level, per-node view of the object detection application.
4.1 SenseToLeds application in Viptos.
4.2 Send and receive application in Viptos.
4.3 Generated MoML by nc2moml for TimerC.nc
4.4 Generated MoML by ncapp2moml for SenseToLeds.nc
4.5 TOSSIM scheduling algorithm.
4.6 Viptos version of TOSSIM scheduling algorithm.
4.7 Multi-hop routing in Viptos.
4.8 Execution time of the SenseToLeds application as a function of the number of nodes. Each simulation ran for 120.0 virtual seconds.
4.9 Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 300.0 virtual seconds.
5.1 PtalonActor in Ptolemy II.
5.2 MultipleNodesMoML.xml
5.3 MultipleNodesMoML.ptln
5.4 Small World in Ptolemy II.
5.5 ParameterSweep version of Small World in Ptolemy II.
5.6 SDF model for changing parameter values of Small World model in Ptolemy II.
5.7 Modal model for changing parameter values of Small World model in Ptolemy II.
5.8 ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.
5.9 Ptalon code for SmallWorld (SmallWorld.ptln).
5.10 Ptalon version of Small World in Ptolemy II.
5.11 Excerpt of MoML code for Ptalon version of Small World.
6.1 An example Click element. Source: Eddie Kohler.
6.2 A simple Click configuration with sequence diagram. Source: Eddie Kohler.
6.3 Flowchart for Click configuration shown in Figure 6.2. Source: Eddie Kohler.
6.4 Pull processing across multiple nodes. Source: Eddie Kohler.
6.5 A sensor network application.
6.6 Click vs. TinyGALS, a configuration for the application in Figure 6.5.

List of Tables

3.1 Summary of valid types of links in TinyGALS/galsC.
3.2 Generated code for ports in galsC.
3.3 Generated code for parameters (TinyGUYS) in galsC.
4.1 Representation scheme for nesC components in Viptos.
5.1 Comparison of number of bytes between different implementations of SmallWorld.

Acknowledgments

I would like to thank Jie Liu for his invaluable guidance throughout my graduate career, without which this dissertation would not be possible. I would also like to thank Feng Zhao for giving me the opportunity to work with him and Jie at both PARC and Microsoft Research. I would especially like to thank my advisor, Edward A. Lee, for his advice and support throughout all these years. I would like to thank my dissertation committee, Edward, Eric Brewer, and Paul Wright, for their feedback.

For their feedback, advice, and/or help with hardware and software, I would like to thank: Christopher Brooks, Adam Cataldo, Phoebus Chen, Prabal Dutta, David Gay, Jörn Janneck, Eddie Kohler, Jackie Leung, Judy Liebman, Xiaojun Liu, Andrew Mihal, Steve Neuendorffer, Roberto Passerone, John Reekie, Mary Stewart, Rob Szewczyk, Heather Taylor, Kamin Whitehouse, Yang Zhao, and the rest of the Ptolemy Group and NEST Group.

Finally, I would like to thank my undergraduate advisor, David B. Stewart, for introducing me to embedded systems and starting me on this path.

Recipes courtesy of Food Network and Cooking Fresh from the Bay Area, as well as Backen köstlich wie noch nie, Cook's Illustrated, Epicurious, and Greens Restaurant.


Chapter 1

Introduction

In his classic software engineering text, The Mythical Man Month: Essays on Software Engineering [17], Frederick P. Brooks, Jr. discusses high-level languages as an essential tool for increasing programmer productivity:

  Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.

  What does a high-level language accomplish? It frees a program from much of its accidental complexity. An abstract program consists of conceptual constructs: operations, datatypes, sequences, and communication. The concrete machine program is concerned with bits, registers, conditions, branches, channels, disks, and such. To the extent that the high-level language embodies the constructs wanted in the abstract program and avoids all lower ones, it eliminates a whole level of complexity that was never inherent in the program at all.

The above passage was originally published in 1975. In the twentieth-anniversary edition (1995) of the text, Brooks elaborates further:

  Most past progress in software productivity has come from eliminating noninherent difficulties such as awkward machine languages and slow batch turnaround. There are not a lot more of these easy pickings. Radical progress is going to have to come from attacking the essential difficulties of fashioning complex conceptual constructs. The most obvious way to do this recognizes that programs are made up of conceptual chunks much larger than the individual high-level language statement—subroutines, or modules, or classes. If we can limit design and building so that we only do the putting together and parameterization of such chunks from prebuilt collections, we have radically raised the conceptual level, and eliminated the vast amounts of work and the copious opportunities for error that dwell at the individual statement level.

Although twelve years have passed since Brooks wrote the above passage, it still applies today to new high-level programming concepts. This dissertation presents methods for raising the conceptual level of building wireless sensor network applications, using actor-oriented programming concepts. An application area where these concepts are particularly needed is that of wireless sensor networks.

1.1 Wireless Sensor Networks

Wireless sensor networks is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, seismic and structural monitoring, manufacturing and industrial control, business asset management, health care, transportation, and home automation [107]. Wireless sensor networks provide a way to create flexible, tetherless, automated data collection and monitoring systems.

A sensor node in a typical sensor network has a battery, a microprocessor, and a small amount of memory for signal processing and task scheduling. Each node is equipped with one or more sensing devices, such as acoustic microphone arrays, video or still cameras, and/or sensors for visible or infrared light, humidity, electrical resistance, pH, changing magnetic field, acceleration or vibration, or temperature. Each sensor node communicates wirelessly with a few other neighboring nodes within its radio communication range [107]. A wireless sensor network may also be augmented with a higher tier of more powerful, wired nodes with greater network capacity and computation power, as in the Tenet architecture [36]. Nodes in this higher tier are sometimes called masters [36] or microservers [67]. Unlike traditional networked systems, a sensor network is constrained by finite on-board battery power and limited network communication bandwidth. In addition, sensor networks are spatially aware and are more closely linked to geographic location and the physical environment than centralized systems.

Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process. Typical networked embedded system software development may require the design and implementation of device drivers, network stack protocols, scheduler services, application-level tasks, and partitioning of tasks across multiple nodes. Little or no integration exists among the tools necessary to create these software components. In addition, these tools typically have little infrastructure

for building models and interactions that are not part of their original scope or software design paradigms.

1.2 Actor-Oriented Programming

Actor-oriented programming is a high-level programming concept that can increase software productivity, reliability, and simplicity. The actor model was originally proposed by Carl Hewitt, though the meaning of the term has evolved over time. A brief history of actor research up to 1993 is summarized by Agha [3] and excerpted and extended here.

Hewitt proposes actors as an approach to modeling intelligence as a society of communicating knowledge-based problem-solving experts. One can view each of the experts as a society that can be further decomposed until reaching the primitive actors of the system. Hewitt [46] showed that control structures can be represented as patterns of message passing between simple actors with a conditional construct but no local state. Gul Agha extended the notion of actor to include history-sensitive behavior necessary for shared, mutable data objects [1]. In his model, these actors are objects that interact in a purely local way by sending messages to one another. He intended actors to be used as a paradigm for exploiting parallelism on massively parallel architectures and as a suitable language for concurrency [2]. Hewitt and Agha view actors as a universal concept, where everything in the system is an actor that responds to messages. Agha assumes that each actor encapsulates a thread of control.

Edward A. Lee generalized the notion of actors and applied it to software design for concurrent systems [49]. Unlike Agha's actors, Lee's actors are not required to encapsulate a thread of control [60]. Lee distinguishes data tokens, which encapsulate data and do not interact with one another, from actors, which exchange and process data [74]. This dissertation uses Lee's concept of actors.

Like Neuendorffer [74], I view actor-oriented programming as an approach to system-level design. Instead of object-oriented design, which emphasizes inheritance and procedural interfaces, Lee suggests the term actor-oriented design as a refactored software architecture, where instead of objects, software components are parameterized actors with ports. Ports and parameters define the interface of an actor. A port represents an interaction with other actors, but unlike a method, it does not have call-return semantics. The precise semantics depends on the model of computation, but conceptually it represents signaling between components. Actors are concurrent dataflow-oriented components that specify behavior abstractly without relying on low-level implementation constructs

such as function calls, threads, or distributed computing infrastructure [74].

Actor-oriented programming and object-oriented programming are duals of each other, similar to Lauer and Needham's concept of the duality of message-oriented systems and procedure-oriented systems [58]. In traditional object-oriented programming, what flows through an object is sequential control; in other words, things happen to objects. In actor-oriented programming, what flows through an object is evolving data; in other words, actors make things happen (see Figure 1.1).

Lauer and Needham explain that though "no real system precisely agrees with either model in all respects," "most modern operating systems can be usefully classified using them." Some systems are implemented in a style which is very close in spirit to one model or the other. Other systems are able to be partitioned into subsystems, each of which corresponds to one of the models, and which are coupled by explicit interface mechanisms. They conclude that "the considerations for choosing which model to adopt in a given system are not found in the applications which that system is meant to support;" instead, "they lie in the substrate upon which the system is built," i.e., the "machine architecture and/or programming environment—on which the process and synchronization facilities are implemented." "The factors and design decisions of the system upon which the process and synchronization facilities are built are the things which make one or the other style more attractive or more tedious." Other constraints are those "imposed by the machine architecture and hardware," such as the "organization of real and virtual memory," "the size of the stateword which must be saved on every context switch," "the arrangement of peripheral devices and interrupts," "the ease with which scheduling and dispatching can be done," and "the architecture of the instruction set and the programmable registers."

They suggest that a message-oriented (actor-oriented) style is best when it is easy to allocate message blocks and queue messages but difficult to build a protected procedure call mechanism. Actor-oriented programming and other message-oriented systems are therefore well-suited to embedded systems and other highly concurrent systems, where a variety of peripheral devices and interrupts must be accessed frequently, with a fast response rate, and memory space is at a premium (and memory protection is often not provided in the underlying infrastructure). Actor-oriented programming can be combined with object-oriented programming and other procedure-oriented systems in a structured way to achieve the best of both worlds.

Examples include directed diffusion [50].3 Actor-Oriented Programming for Wireless Sensor Networks Wireless sensor networks are highly concurrent systems. return Things happen to objects.5 The established: Object-oriented: class name data methods What flows through an object is sequential control.1 The node-centric approach forms the next layer above the operating system layer. Contiki [27]. Lee. Token Machines [75]. call The alternative: Actor-oriented: actor name data (state) parameters ports What flows through an object is evolving data. 1. actor-oriented design. 1 Many of the examples shown in Figure 1. and the Object State Model [53]. Input data Output data Figure 1. Examples include TinyOS [48]. generating. Instruction-level emulation lies below the operating system approach and is not shown in the figure. programming. SOS [40]. which makes programming easier for the user. Linux. Actors make things happen. I advocate using an actor-oriented approach to designing. Examples include Mat´ [64]. MantisOS [13]. Source: Edward A. Software in the node-centric layer runs on a single node on top of the operating system.2 rely on either simulation with a combination of TOSSIM and gdb. The operating system approach forms the bottom-most layer.1: Object-oriented design vs. whose focus is to provide basic programming abstractions to allow a program to run on the sensor node hardware. as shown in the vertical axis of Figure 1. In this dissertation. and more abstract programming models are used.NET. NutOS [11]. e The middleware approach forms the third layer. . and simulating wireless sensor network applications. or emulation for the Atmel AVR microcontroller instruction set.2. with concurrency at many different levels. which begins to include programming abstractions that allow the user to address multiple nodes. Existing approaches to building wireless sensor networks can be divided into four layers. SNACK [37]. and .

Figure 1.2: WSN landscape.

Examples include directed diffusion [50], IDSQ (information-driven sensor querying) [108], abstract regions [101], Hood [102], DHT (Distributed Hash Table), and Agilla [30]. The macroprogramming approach forms the top layer, which allows the user to create an application by programming the wireless sensor network as a whole, rather than programming individual nodes separately. Macroprogramming is also known as programming the ensemble. Examples include TinyDB [70], Kairos [38], Regiment [76], and actorNet [56].

Unfortunately, wireless sensor networks are often deployed in resource-constrained environments. These constraints dictate that sensor network problems are best approached in a holistic manner, by jointly considering the physical, networking, and application layers and making major design trade-offs across the layers [107].

The process of building a wireless sensor network can be divided into three stages of development: design, simulation, and deployment. However, existing development tools are disjoint and difficult to integrate. Most existing work focuses on only one stage of development, rather than an integrated approach from design to simulation and testing, and to deployment. Many of the tools shown in Figure 1.2 rely on the TOSSIM TinyOS simulator for operating system-level simulation and testing, including Semantics Streams [103], DSN (Declarative Sensornet) [94], and embedded web services [67], and TinyViz for visualization. Simulation tools that fall somewhere between the middleware and node-centric layers include ns-2, OPNET, OMNeT++, Prowler, SensorSim, J-Sim, and Em*.2 These tools, with the exception of Em*, are usually stand-alone and not designed for hardware deployment. PIECES (Programming and Interaction Environment for Collaborative Embedded Systems) [68] is a higher-level simulation tool implemented in a mixed Java-Matlab environment, though it does not translate easily to actual deployment. Other tools are programming models or languages that focus solely on design, and not simulation, such as UML (Unified Modeling Language).

2 Chapter 6 contains a more detailed discussion of these simulation tools.

The goal of this work is to create integrated tools and programming models for networked embedded application developers to model and simulate their algorithms and quickly transition to

testing their software on real hardware in the field, while allowing them to use the model of computation most appropriate for each part of the system. This tool, a Java-based software framework with a graphical user interface, encompasses multiple layers and lies above the operating system approach. I use TinyOS as an interface for the bottom-most layer, the operating system approach.

Chapter 2 introduces the reader to TinyOS, a runtime environment for wireless sensor nodes. Chapter 2 also introduces Ptolemy II, which allows construction of actor-oriented models of computation. Chapter 3 describes an actor-oriented, node-centric model called TinyGALS for programming individual sensor nodes. Chapter 4 introduces an actor-oriented modeling and design environment for wireless sensor networks, called Viptos. Chapter 5 describes various techniques for using higher-order actors to generate multiple simulation scenarios for design and test of wireless sensor network applications. Chapter 6 discusses related work, and Chapter 7 concludes this dissertation.

1.4 Previously Published Work

Some of the material in this dissertation was previously published in technical reports or conference proceedings. A summary of how these papers have been incorporated into this dissertation follows.

TinyGALS: A Programming Model for Event-Driven Embedded Systems [23] was the first paper published on this topic, and it was extended into a master's report, Design and Implementation of TinyGALS: A Programming Model for Event-Driven Embedded Systems [20], which was later condensed, revised, and published under the same title [25]. The language implemented for the programming model described in these two papers was redesigned and reimplemented as part of the nesC compiler and described in galsC: A Language for Event-Driven Embedded Systems [24]. These four publications are combined and updated to form the basis of Chapter 3 and part of Chapter 6.

Viptos: A Graphical Development and Simulation Environment for TinyOS-based Wireless Sensor Networks [22] was the first paper published on this topic, and it was revised, updated, and extended as Joint Modeling and Design of Wireless Networks and Sensor Node Software [21]. These two publications are combined and updated to form the basis of Chapter 4 and part of Chapter 6.


Chapter 2

Background
In this chapter, I present TinyOS, one of the most popular software toolsuites in the wireless sensor network research and development community. I also present Ptolemy II, the current version of one of the most influential actor-oriented design frameworks. Together, these tools form the background knowledge required for understanding the implementation of the tools and techniques presented later in this dissertation.

2.1

TinyOS
TinyOS [47, 48] is an open-source runtime environment designed for sensor network nodes

known as motes. TinyOS has a large user base—over 500 research groups and companies use TinyOS on the Berkeley/Crossbow motes. It has been ported to over a dozen platforms and numerous sensor boards, and new releases see over 10,000 downloads. TinyOS differs from traditional operating system models in that events drive the behavior of the system. Using this type of execution, battery-operated nodes can preserve energy by entering a sleep mode when no interesting events are happening. According to the TinyOS website [95], “TinyOS’s event-driven execution model enables fine-grained power management yet allows the scheduling flexibility made necessary by the unpredictable nature of wireless communication and physical world interfaces.” In this section, I present the details of the nesC syntax and the TinyOS execution model. Note that in this dissertation, I focus on TinyOS 1.x. TinyOS 2.x is a rewritten implementation of TinyOS 1.x that provides users with a cleaner interface. All material presented in this dissertation can easily be transferred to TinyOS 2.x.


2.1.1

NesC syntax

TinyOS provides a library of reusable software components written in nesC, an extension to the C programming language. A TinyOS application connects these components using a wiring specification that is independent of the component implementation. Some TinyOS components are thin wrappers around hardware, though most are software modules which process data. The distinction is invisible to the developer. Decomposing different OS services into separate components allows unused services to be excluded from the application. Figure 2.1(a) shows a TinyOS program called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. The TinyOS component library includes those “wired” together in SenseToLeds: Main, SenseToInt (shown in Figure 2.1(b)), IntToLeds, TimerC, and DemoSensorC. A nesC component may expose a set of interfaces. Each interface is a set of methods. A method may be either an event or a command, where an event is usually called “upwards” from a hardware interrupt handler, and a command is usually called “downwards” from the application code. A nesC component provides methods that it implements, and uses methods that are implemented by other components. A nesC component is either a configuration that contains a wiring of other components, or a module that contains an implementation of its interface methods. NesC interfaces may also be parameterized to provide multiple instances of the same interface. In Figure 2.1(a), SenseToLeds is a configuration that exposes no interface methods. The TimerC.Timer interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC. If another component connects to the TimerC.Timer interface, it connects to a different instance. Each timer can be initialized with different periods.
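A nesC interface itself is just a declaration of the commands and events that cross a component boundary. As a point of reference, the Timer interface used by SenseToInt looks roughly like the following (paraphrased from the TinyOS 1.x library; the actual file contains additional documentation):

    interface Timer {
      // Commands are called "downwards" by application code.
      command result_t start(char type, uint32_t interval);
      command result_t stop();
      // Events are signaled "upwards", ultimately from an interrupt handler.
      event result_t fired();
    }

A component that uses this interface may call start() and stop() and must implement the fired() event; a component that provides the interface implements the commands and signals the event.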

2.1.2

TinyOS execution model

TinyOS contains a single thread of control managed by the scheduler, which may be interrupted by hardware events. Component methods encapsulate hardware interrupt handlers. These methods may transfer the flow of control to another component by calling a uses method. Computation performed in a sequence of method calls must be short, or it may delay the processing of other events. There are two sources of concurrency in TinyOS: tasks and events. Tasks are a deferred computation mechanism. A long-running computation can be encapsulated in a task, which a component method posts to the scheduler task queue. The post operation returns immediately, deferring the computation until the scheduler executes the task later.
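The post operation and the scheduler's FIFO loop, described further below, can be pictured with the following C sketch. This is an illustration only, with hypothetical names: disable_interrupts(), enable_interrupts(), and sleep_until_interrupt() stand for platform primitives, and the real TinyOS 1.x scheduler differs in its details.

    typedef void (*task_fn)(void);

    #define TASK_QUEUE_SIZE 8
    static task_fn task_queue[TASK_QUEUE_SIZE];
    static volatile uint8_t head, count;

    /* Corresponds to the nesC 'post' operation: enqueue and return. */
    bool post_task(task_fn t) {
        bool ok = FALSE;
        disable_interrupts();
        if (count < TASK_QUEUE_SIZE) {
            task_queue[(head + count) % TASK_QUEUE_SIZE] = t;
            count++;
            ok = TRUE;
        }
        enable_interrupts();
        return ok;
    }

    /* Main loop: run tasks in FIFO order; sleep when the queue is empty. */
    void scheduler_loop(void) {
        for (;;) {
            task_fn t = NULL;
            disable_interrupts();
            if (count > 0) {
                t = task_queue[head];
                head = (head + 1) % TASK_QUEUE_SIZE;
                count--;
            }
            enable_interrupts();
            if (t != NULL)
                t();                      /* runs to completion */
            else
                sleep_until_interrupt();  /* power saving when idle */
        }
    }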


configuration SenseToLeds {
}
implementation {
  components Main, SenseToInt, IntToLeds, TimerC,
             DemoSensorC as Sensor;
  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

(a)

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
}
implementation {
  ...
}

(b)

Figure 2.1: Sample nesC source code.

The TinyOS scheduler processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks run to completion and do not preempt each other. Events signify either an event from the environment or the completion of a split-phase operation. Split-phase operations are long-latency operations where operation request and completion are separate functions. Commands are typically requests to execute an operation. If the operation is split-phase, the command returns immediately and completion is signaled later with an event; non-split-phase operations do not have completion events. Events also run to completion, but they may preempt the execution of a task or another event. Resource contention is typically handled through explicit rejection of concurrent requests. Because tasks execute non-preemptively, TinyOS has no blocking operations. TinyOS execution is ultimately driven by events representing hardware interrupts.
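As a concrete illustration of the split-phase style, the following hypothetical module (not from the TinyOS library) receives the completion event of an ADC request and defers the processing of the sample to a task; the initiating call ADC.getData() would be made elsewhere, for example from a timer event.

    module SampleM {
      uses interface ADC;
      uses interface IntOutput;
    }
    implementation {
      uint16_t lastSample;

      task void processSample() {
        // Deferred computation, executed later from the FIFO task queue.
        call IntOutput.output(lastSample);
      }

      event result_t ADC.dataReady(uint16_t data) {
        // Completion half of the split-phase ADC.getData() request.
        lastSample = data;
        post processSample();   // returns immediately
        return SUCCESS;
      }

      event result_t IntOutput.outputComplete(result_t success) {
        return SUCCESS;
      }
    }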

2.2

Ptolemy II
Ptolemy II, a modeling and design framework for concurrent systems, and VisualSense, an

extension to Ptolemy II that supports modeling and simulation of wireless sensor networks, form the basis of the tools described in this dissertation. In this section, I excerpt and summarize information from Hylands, et al. [49] and Baldwin, et al. [8]. The Ptolemy Project conducts foundational and applied research in software-based design

techniques for embedded systems. It studies heterogeneous modeling, simulation, and design of concurrent systems. The focus is on embedded systems, particularly those that mix technologies including, for example, analog and digital electronics, hardware and software, and electronics and mechanical devices. The focus is also on systems that are complex in the sense that they mix widely different operations, such as networking, signal processing, feedback control, mode changes, sequential decision making, and user interfaces. Ptolemy II is the current software infrastructure of the Ptolemy Project and is published freely in open-source form. It serves as a laboratory for experimenting with design techniques.

Executable models are constructed under a model of computation, which is the set of the "laws of physics" that govern the interaction of components in the model. If a model describes a mechanical system, then the model of computation may literally be the laws of physics. More commonly, however, the model of computation is a set of rules that are more abstract, and provide a framework within which a designer builds models. A set of rules that govern the interaction of components is called the semantics of the model of computation. A model of computation may have more than one semantics, in that there might be distinct sets of rules that impose identical constraints on behavior.

Most, but not all, models of computation in Ptolemy II support actor-oriented design. This contrasts with, and complements, object-oriented design by emphasizing concurrency and communication between components. Components called actors execute and communicate with other actors in a model, as illustrated in Figure 2.2. Like objects, actors have a well-defined component interface.1 This interface abstracts the internal state and behavior of an actor, and restricts how an actor interacts with its environment. The interface includes ports that represent points of communication for an actor, and parameters which are used to configure the operation of an actor. Often, but not always, parameter values are part of the a priori configuration of an actor and do not change when a model is executed. The "port/parameters" shown in Figure 2.2 function as both ports and parameters. A director, which is a component specific to the model of computation used, controls the execution of a model.

Whereas with object-oriented design, components interact primarily by transferring control through method calls, in actor-oriented design, they interact by sending messages through channels.2 Central to actor-oriented design are the communication channels that pass data from one port to another according to some messaging scheme.

1 These components are not the same as TinyOS/nesC components, though Chapter 4 explores the relationship between Ptolemy II components and TinyOS/nesC components.
2 These channels may be wired or wireless. The next section discusses wireless channels in more detail.

.2: Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom).13 annotation director port/parameters external port model hierarchical abstraction Figure 2.

The use of channels to mediate communication implies that actors interact only with the channels to which they are connected and not directly with other actors. The model of computation also defines the nature of communication between components.

Models, like actors, may also define an external interface, as illustrated in Figure 2.2. The interface of a model is called its hierarchical abstraction. This interface consists of external ports and external parameters, which are distinct from the ports and parameters of the individual actors in the model. The external ports of a model can be connected by channels to other external ports of the model or to the ports of actors that compose the model. A relation is an object used to represent the (wired) interconnection. External parameters of a model can be used to determine the values of the parameters of actors inside the model.

Taken together, the concepts of models, actors, ports, parameters, and channels describe the abstract syntax of actor-oriented design. This syntax can be represented concretely in several ways: graphically, such as in a bubble-and-arc or block-and-arrow diagram; in XML (Extensible Markup Language), such as in Figure 2.3; or in a program designed to a specific API (Application Programming Interface), such as in SystemC. Ptolemy II offers all three alternatives. It is important to realize that the syntactic structure of an actor-oriented design says little about its semantics. The semantics is largely orthogonal to the syntax and is determined by a model of computation. The model of computation might give operational rules for executing a model. These rules determine when actors perform internal computation, update their internal state, and perform external communication.

    <?xml version="1.0" standalone="no"?>
    <!DOCTYPE class PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
        "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
    <class name="Sinewave" extends="ptolemy.actor.TypedCompositeActor">
      <property name="samplingFrequency" class="ptolemy.data.expr.Parameter"
          value="8000.0"/>
      <property name="SDF Director" class="ptolemy.domains.sdf.kernel.SDFDirector"/>
      <property name="frequency" class="ptolemy.actor.parameters.PortParameter"
          value="440.0"/>
      <property name="phase" class="ptolemy.actor.parameters.PortParameter"
          value="0.0"/>
      <port name="frequency" class="ptolemy.actor.parameters.ParameterPort">
        <property name="input"/>
      </port>
      <port name="phase" class="ptolemy.actor.parameters.ParameterPort">
        <property name="input"/>
      </port>
      <port name="output" class="ptolemy.actor.TypedIOPort">
        <property name="output"/>
      </port>
      <entity name="Ramp" class="ptolemy.actor.lib.Ramp">
        <property name="firingCountLimit" class="ptolemy.data.expr.Parameter"
            value="0"/>
        <property name="init" class="ptolemy.data.expr.Parameter" value="0"/>
        <property name="step" class="ptolemy.actor.parameters.PortParameter"
            value="(frequency*2*PI/samplingFrequency)"/>
      </entity>
      <entity name="TrigFunction" class="ptolemy.actor.lib.TrigFunction"/>
      <entity name="Const" class="ptolemy.actor.lib.Const">
        <property name="value" class="ptolemy.data.expr.Parameter" value="phase"/>
      </entity>
      <entity name="AddSubtract" class="ptolemy.actor.lib.AddSubtract"/>
      <relation name="relation" class="ptolemy.actor.TypedIORelation"/>
      <relation name="relation2" class="ptolemy.actor.TypedIORelation"/>
      <relation name="relation3" class="ptolemy.actor.TypedIORelation"/>
      <relation name="relation4" class="ptolemy.actor.TypedIORelation"/>
      <link port="output" relation="relation3"/>
      <link port="Ramp.output" relation="relation"/>
      <link port="Const.output" relation="relation2"/>
      <link port="AddSubtract.plus" relation="relation"/>
      <link port="AddSubtract.plus" relation="relation2"/>
      <link port="AddSubtract.output" relation="relation4"/>
      <link port="TrigFunction.input" relation="relation4"/>
      <link port="TrigFunction.output" relation="relation3"/>
    </class>

Figure 2.3: XML representation of the Sinewave source.

2.2.1 VisualSense

VisualSense [8] is a modeling and simulation framework for wireless sensor networks that builds on Ptolemy II. This framework supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. The software architecture consists of a set of base classes for defining wireless channels and sensor nodes, a library of subclasses that provide specific wireless channel models and node models, and an extensible visualization framework. Custom wireless channels can be defined by subclassing the WirelessChannel base class and by attaching functionality defined in Ptolemy II models. Custom nodes can be defined by subclassing the base classes and defining the behavior in Java or by creating composite models using any of several Ptolemy II modeling environments.

To support this style of modeling, VisualSense uses a specialization of the discrete-event (DE) domain of Ptolemy II. The DE domain of Ptolemy II [15] provides execution semantics where interaction between components occurs via events with time stamps. A sophisticated calendar-queue

scheduler is used to efficiently process events in chronological order. The DE domain has a formal semantics that ensures determinate execution of deterministic models [59], although stochastic models for Monte Carlo simulation are also well supported. The precision in the semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling frameworks. The results are predictable and consistent.

The most straightforward uses of the DE domain in Ptolemy II are similar to other discrete-event modeling frameworks such as ns [77], OPNET [78], and VHDL. Ptolemy II provides a visual editor for constructing DE models as block diagrams. In particular, sensor nodes themselves can be modeled in Java, or more interestingly, using more conventional DE models (as block diagrams) or other Ptolemy II models (such as dataflow models, finite-state machines, or continuous-time models).

The DE domain in Ptolemy II supports models with dynamically changing interconnection topologies. Changes in connectivity are treated as mutations of the model structure. The software is carefully architected to support multithreaded access to this mutation capability; for example, one thread can be executing a simulation of the model while another changes the structure of the model, for example by adding, deleting, or moving actors, or changing the connectivity between actors. In VisualSense, this removes the need for explicit connections between ports, and instead associates ports with wireless channels by name (e.g., "RadioChannel"). Connectivity can then be determined on the basis of the physical locations of the components. The algorithm for determining connectivity is itself encapsulated in a component as a wireless channel model, and hence can be developed by the model builder.

Ptolemy II and VisualSense permit customized icons for components in a model. For example, a sensor node can have as an icon a translucent circle that represents (roughly or exactly) its transmission range. Visual depictions of systems can help to offset the increased complexity that is introduced by heterogeneous modeling, and to lend insight into the behavior of models.
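Before turning to the type system, it may help to make the discrete-event execution style concrete. The calendar-queue scheduler described above can be reduced to the following C sketch (a deliberately simplified, hypothetical rendering; Ptolemy II's DE director is implemented in Java and additionally handles simultaneous events, model hierarchy, and mutations):

    typedef struct event {
        double time_stamp;            /* model time of the event */
        void (*fire)(void *actor);    /* reaction to invoke */
        void *actor;
    } event_t;

    /* pq_empty() and pq_pop_min() are assumed priority-queue helpers
     * ordered by time stamp. */
    void run_model(double stop_time) {
        while (!pq_empty()) {
            event_t e = pq_pop_min();            /* earliest event first */
            if (e.time_stamp > stop_time) break;
            e.fire(e.actor);                     /* may post future events */
        }
    }

Processing events in nondecreasing time-stamp order is what yields the determinate execution described above; a calendar queue makes the pop-min operation run in nearly constant time for typical event distributions.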

3 Summary This chapter summarized background information on TinyOS and Ptolemy II. VisualSense imposes this constraint in the WirelessChannel base class. . so that the reader can understand the underlying implementation of the tools and techniques presented in the following chapters. the model builder does not need to specify particular data types in the model.17 accept that data type. so unless a particular model builder needs more sophisticated constraints. They are inferred from the ultimate sources of the data and propagated throughout the model. 2.


Chapter 3

TinyGALS and galsC

Networked embedded software designers face issues such as managing computation as well as communication, handling irregular interrupts, avoiding concurrency errors, maintaining consistent state across multiple tasks, and conserving power. These tasks become even more challenging when the resources of the hardware platforms are too limited, in terms of CPU speed and memory size, to host a full-scale modern operating system. Traditional technologies for developing embedded software, inherited from writing device drivers and optimizing assembly code to achieve a fast response and small memory footprint, do not scale with the growing complexity of today's applications. Despite the fact that "high-level" languages such as C and C++ have recently replaced assembly language as the dominant embedded software programming languages, most of these high-level languages are designed for writing sequential programs to run on an operating system and fail to handle concurrency intrinsically.

Event-driven embedded software is similar to hardware, where conceptually concurrent components are activated by incoming signals (or events). Event-driven execution is particularly suitable for untethered devices such as sensor network nodes, since a node can go into a sleep mode to preserve energy when no interesting events are happening. For many networked embedded systems, there is a fundamental gap between this event-driven execution model and sequential programming languages.

The TinyGALS (Globally Asynchronous and Locally Synchronous) programming model [23] aims to fill this gap by providing language constructs to systematically build concurrent tasks called actors. At the application level, these actors communicate with each other asynchronously via message passing. Within each actor, components communicate synchronously via method calls, as in most imperative languages.
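The flavor of this two-level model can be sketched in C. An event sent from one actor to another is buffered in a FIFO queue attached to the receiving port, roughly as follows (hypothetical names and sizes; the code that the galsC compiler actually generates is described later in this chapter):

    #define QUEUE_SIZE 64

    typedef struct {
        uint16_t data[QUEUE_SIZE];
        uint8_t head, count;
    } port_queue_t;

    /* Called synchronously inside the producing actor; the consuming
     * actor runs later, when the scheduler dequeues the event, so the
     * two actors are asynchronous with respect to each other. */
    bool port_put(port_queue_t *q, uint16_t token) {
        bool ok = FALSE;
        disable_interrupts();    /* assumed platform primitive */
        if (q->count < QUEUE_SIZE) {
            q->data[(q->head + q->count) % QUEUE_SIZE] = token;
            q->count++;
            ok = TRUE;           /* scheduler will later fire the consumer */
        }
        enable_interrupts();
        return ok;
    }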

The terms "synchronous," "asynchronous," and "globally asynchronous, locally synchronous (GALS)" mean different things to different communities, thus causing confusion. The circuit and processor design communities use these terms for synchronous and asynchronous circuits, where synchronous refers to circuits that are driven by a common clock [51]. In the system modeling community, synchronous often refers to computational steps and communication (propagation of computed signal values) that take no time (or, in practice, very little time compared to the intervals between successive arrivals of input signals). Steps do not take infinite time. GALS then refers to a modeling paradigm that uses events and handshaking to integrate subsystems that share a common tick (an abstract notion of an instant in time) [10].

In this programming model, synchronous means that the software flow of control transfers immediately to another component and the calling code blocks awaiting return; however, control eventually returns to the calling code. Asynchronous means that the software flow of control does not transfer immediately to another component; execution of the other component is decoupled. Thus, at a high level, the TinyGALS programming model is globally asynchronous and locally synchronous in terms of transfer of the flow of control. The TinyGALS notion of synchronous and asynchronous is consistent with the usage of these terms in distributed programming paradigms [72]. In order to incorporate shared variable semantics where only the latest value matters, a set of guarded yet synchronous variables (called TinyGUYS) is provided at the system level for actors to exchange global information "lazily." Access to these variables is thread-safe, yet components can quickly read their values.

galsC [24, 25] is a language that implements the TinyGALS programming model. galsC takes advantage of the nesC specification for TinyOS 1.x. TinyOS/nesC components provide an interface abstraction that is consistent with synchronous communication via method calls. However, lack of explicit management of concurrency forces TinyOS component developers to manage concurrency by themselves (locking and unlocking semaphores), which makes TinyOS applications difficult to develop. The galsC language provides basic concurrency constructs; concurrent tasks in TinyOS are not exposed as part of the galsC component interface. This language has a type system that spans synchronous and asynchronous communication boundaries. With galsC, application developers have precise control over the concurrency in the system, and they can develop software components without the burden of thinking about multiple threads. The galsC compiler generates executable code, including an application-specific operating system scheduler, from high-level specifications. This generative approach allows further analysis of concurrency problems, such as race conditions.
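For example, the last-value-wins semantics of a TinyGUYS global variable can be rendered as the following C sketch (hypothetical code; the generated functions and their synchronization are described later in this chapter):

    static uint16_t count;          /* stable copy, read without blocking */
    static uint16_t count_buffer;   /* pending write */
    static bool     count_dirty;

    uint16_t PARAM_GET_count(void) {
        return count;               /* fast, thread-safe read */
    }

    void PARAM_PUT_count(uint16_t v) {
        disable_interrupts();       /* assumed platform primitive */
        count_buffer = v;           /* only the latest value is kept */
        count_dirty = TRUE;
        enable_interrupts();
    }

    /* Invoked by the generated scheduler between reactions, so a value
     * never changes in the middle of a computation that reads it. */
    void params_update(void) {
        disable_interrupts();
        if (count_dirty) { count = count_buffer; count_dirty = FALSE; }
        enable_interrupts();
    }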

functions.. Section 3.g. queues.2 discusses concurrency and determinacy issues in TinyGALS programs. Instead. The TinyGALS approach differs from coordination models like those discussed above.1. most of the processor time is spent waiting for an external trigger or event. Section 3. Section 3. it uses a thread-safe global data space to store messages that do not trigger reactions. In particular. At the same time. such as SystemC [12] and VCC [57]. in that it allows designers to directly control the concurrent execution and sizes of buffers between asynchronous actors. high-level programming language. synchronous languages try to compile away concurrent executions based on the synchronous (zero-time execution) assumption [39]. To some extent.3 explains a code generation technique based on the twolevel execution hierarchy and a system-level scheduler. The design of TinyGALS is influenced by the trend of introducing formal concurrency models in embedded software. . and they are easy to develop and backwards compatible with most legacy software. Section 3.1 describes the TinyGALS programming model and galsC language. Components in the TinyGALS model are entirely sequential. than embedded software languages such as nesC. the port-based object (PBO) model [92] has a global shared variable space mediating component interaction. A reasonably small amount of additional code to enhance software modularity will not greatly affect the performance of the system. event-driven system. TinyGALS is closer to system-level hardware/software codesign languages. Section 3. the TinyGALS/galsC framework can greatly improve software productivity and encourage component reuse. The galsC compiler and toolsuite is built on the nesC 1. In a reactive. TinyGALS programs do not rely on the existence of an operating system. since the developer does not need to reimplement standard constructs (e. Various dataflow models [73] use FIFO queues to separate flow of control.21 debugging time.5 summarizes this chapter. The remainder of this chapter is organized as follows. and guards on variables). The POLIS codesign approach [7] uses an event-driven model for both hardware and software execution. As an actor-oriented. communication ports. the galsC compiler generates the scheduling framework as part of the application.4 describes a sample application implemented in galsC.1 compiler and toolsuite for the wireless sensor network nodes known as the Berkeley motes. When it is not possible to compile away concurrency.

1.. .1.4 discusses type inference and checking in galsC. as well as the concrete galsC syntax.1 Programming constructs and language syntax There are three basic constructs in TinyGALS and galsC: components. 3. A downsampled clock signal triggers the system to read the light intensity level from a photoresistor at a lower rate. for each construct. Section 3.. Reading the sensor may take time.1 The TinyGALS Programming Model and galsC Language This section uses a simple sensing application to illustrate the TinyGALS programming model and galsC syntax and semantics. 3. Section 3. Section 3. shown in Figure 3. and Section 3. actors. and applications.1. The system tags the resulting sensor value with the latest value of the counter and sends it downstream for further processing.output trigger ADC ADCControl .1 introduces the basic constructs in the TinyGALS programming model and the syntax of the galsC programming language. output TimerC Photo Figure 3.1. a hardware clock triggers the system to update a time tick counter.1: Graphical representation of the SenseTag application.3 describes valid links in TinyGALS/galsC.output actorControl Counter StdControl Timer StdControl Timer TimerControl Trigger trigger trigger 64 trigger StdControl SenseActor count SenseToInt IntOutput.22 uint16_t count = 0 count TimerActor actorControl IntOutput.1. This section presents the abstract TinyGALS notation. In this example.1.2 explains the semantics of TinyGALS and galsC.

TinyGALS Components

Components are the most basic elements of a TinyGALS program. A TinyGALS component C is a 5-tuple:

    C = (PROVIDES_C, USES_C, COMPONENTS_C, LINKS_C, V_C),        (3.1)

where PROVIDES_C and USES_C are the sets of methods that constitute the interface of C; COMPONENTS_C is the set of components that form C; LINKS_C is the set of relations among the interface methods of the components (including that of C); and V_C is the set of internal variables that carry the state of C from one invocation of an interface method of C to another. A component that provides an interface (in PROVIDES_C) contains an implementation of the interface method(s), whereas a component that uses, or requires, an interface (in USES_C) expects another component to implement the interface. Thus, a component is like an object in most object-oriented programming languages, but with explicit definition of the external methods it uses.

Components in galsC are written in the nesC programming language. Syntactically, a component is defined in two parts—an interface definition and an implementation. A component is either a module or a configuration. The implementation of a module contains executed code, whereas the implementation of a configuration only contains a list of components and the links between their interface methods. Figure 3.2 shows the source code for the TimerC configuration used in Figure 3.1. TimerC contains a module named TimerM that implements the provided interface methods.

configuration TimerC {
  provides interface Timer[uint8_t id];
  provides interface StdControl;
}
implementation {
  components TimerM, ClockC;
  Timer = TimerM.Timer;
  StdControl = TimerM.StdControl;
  TimerM.Clock -> ClockC.Clock;
}

module TimerM {
  provides interface Timer[uint8_t id];
  provides interface StdControl;
  uses interface Clock;
}
implementation {
  uint32_t mState;    // Each bit represents a timer state.
  uint8_t mScale;
  uint32_t mInterval;
  ...
  command result_t StdControl.init() {
    mState = 0;
    mScale = 3;
    mInterval = 230;
    return call Clock.setRate(mInterval, mScale);
  }
  ...
}

Figure 3.2: Source code for the TimerC and TimerM components.

Using the tuple notation given in Equation 3.1, the TimerC component can be defined as¹

    C = (PROVIDES_C = {Timer, StdControl},
         USES_C = ∅,
         COMPONENTS_C = {TimerM, ClockC},
         LINKS_C = {(TimerC.Timer, TimerM.Timer),
                    (TimerC.StdControl, TimerM.StdControl),
                    (TimerM.Clock, ClockC.Clock)},
         V_C = ∅).

NesC allows the shorthand notation of linking two interfaces of the same type, which means that each of the individual methods is linked.

¹The interface keyword in nesC refers to a set of methods. For brevity, the TinyGALS notation used in this chapter only lists the name of a given interface, rather than the individual methods in the interface. So, the Timer interface refers to the set containing Timer.start(char, uint32_t), Timer.stop(), and Timer.fired(), and the StdControl interface refers to the set containing StdControl.init(), StdControl.start(), and StdControl.stop().

TinyGALS Actors

Actors are the major building blocks of a TinyGALS program, encompassing one or more TinyGALS components. Actors are different from components. A TinyGALS actor R is a 6-tuple:

    R = (INPORTS_R, OUTPORTS_R, PARAMETERS_R, COMPONENTS_R, LINKS_R, INIT_R),        (3.2)

where INPORTS_R and OUTPORTS_R are the sets that specify the input ports and output ports of R, respectively; PARAMETERS_R is the set of external variables—they are global variables that can be both read and written²; COMPONENTS_R is the set of components that form the actor; LINKS_R is the set that specifies the relations among the interface methods of the components (PROVIDES_C and USES_C in Equation 3.1), the input and output ports of R (INPORTS_R and OUTPORTS_R), and the parameters of R (PARAMETERS_R); and INIT_R is the list of initialization methods that belong to the components in COMPONENTS_R.

The interface of an actor consists of a set of input and/or output ports and a set of parameters. Note that INPORTS_R and OUTPORTS_R of an actor R are not the same as PROVIDES_C and USES_C of a component C: PROVIDES_C and USES_C are executable, whereas INPORTS_R and OUTPORTS_R are not. However, PROVIDES_C and USES_C refer to component methods and may be linked to actor ports in INPORTS_R and OUTPORTS_R.

LINKS_R of an actor R is similar to LINKS_C of a component C. The only difference is that the relations in LINKS_R may also include actor ports and parameters. A link can join a component interface method to one of four types of endpoints: (1) another component interface method, (2) a port, (3) a parameter, or (4) some combination of these.³

The galsC syntax for an actor is similar to that of a galsC (or nesC) configuration component. An actor implementation contains a list of components and links. An actor may also contain an actorControl section which exports the StdControl interface of any of its components to the application level for system initialization (e.g., for initializing hardware components).

Figure 3.3 shows the source code for TimerActor, which contains the TimerC component, whose source code was shown in Figure 3.2. TimerActor has an output port named trigger, which is linked to a component interface method, Trigger.trigger. A different component interface method, Counter.IntOutput.output, writes to the count parameter. TimerActor exports the StdControl interfaces of Counter and Trigger for system initialization. Figure 3.3 also shows the source code for SenseActor.

²Refer to information on TinyGUYS in Section 3.1.2.
³Sections 3.1.2 and 3.1.3 describe links in more detail, including which configurations of components within an actor are valid.

Using the tuple notation given in Equation 3.2, TimerActor can be defined as⁴

    R = (INPORTS_R = ∅,
         OUTPORTS_R = {trigger},
         PARAMETERS_R = {count},
         COMPONENTS_R = {Counter, Trigger, TimerC},
         LINKS_R = {(Counter.Timer, TimerC.Timer[0]),
                    (Trigger.Timer, TimerC.Timer[1]),
                    (Trigger.TimerControl, TimerC.StdControl),
                    (Counter.IntOutput.output, count),
                    (Trigger.trigger, trigger)},
         INIT_R = [Counter.StdControl, Trigger.StdControl]).

SenseActor can be defined similarly. Its output port output is connected to the concatenation of the component interface method SenseToInt.IntOutput.output and the value read from the count parameter. The semantics of the execution of components within an actor are discussed in more detail in Section 3.1.2.

⁴The unique() function in nesC is a constant function that evaluates to a constant at compile time. If the program contains n calls to unique() with the same identifier string (in this example, "Timer"), each call returns a different unsigned integer in the range {0, ..., n − 1}.

TinyGALS Application

At the top level of a TinyGALS program, actors are connected to form a complete application. A TinyGALS application A is a 5-tuple:

    A = (GLOBALS_A, VARMAPS_A, ACTORS_A, CONNECTIONS_A, START_A),        (3.3)

where GLOBALS_A is the set of global variables; VARMAPS_A is a set of mappings, each of which maps a global variable in GLOBALS_A to a parameter (PARAMETERS_R in Equation 3.2) of an actor in ACTORS_A⁵; ACTORS_A is the list of actors that form A; CONNECTIONS_A is the set of the relations between actor input and output ports; and START_A is the list of input ports of actors in the application. CONNECTIONS_A of an application A differs from LINKS_R of an actor R in that connections between actors contain an implicit queue, whereas links inside an actor (between components) do not.

⁵Refer to information on TinyGUYS in Section 3.1.2.

actor TimerActor {
  port {
    out trigger;
  }
  parameter {
    uint16_t count;
  }
  implementation {
    components Counter, Trigger, TimerC;
    Counter.Timer -> TimerC.Timer[unique("Timer")];
    Trigger.Timer -> TimerC.Timer[unique("Timer")];
    Trigger.TimerControl -> TimerC.StdControl;
    Counter.IntOutput.output -> count;
    Trigger.trigger -> trigger;
    actorControl {
      Counter.StdControl;
      Trigger.StdControl;
    }
  }
}

actor SenseActor {
  port {
    in trigger;
    out output;
  }
  parameter {
    uint16_t count;
  }
  implementation {
    components SenseToInt, Photo;
    SenseToInt.ADC -> Photo.ADC;
    SenseToInt.ADCControl -> Photo.StdControl;
    trigger -> SenseToInt.trigger;
    (SenseToInt.output, count) -> output;
    actorControl {
      SenseToInt.StdControl;
    }
  }
}

Figure 3.3: Source code for TimerActor and SenseActor.

A connection connects actor output ports with actor input ports, with an optional declaration of the port queue size (defaults to size one). Note that arguments (initial data) may also be passed to the port.

A galsC program is created by writing a galsC application file that contains zero or more parameters (global variables) and an implementation containing a list of actors, mappings, and connections, as well as an application start section. A mapping associates application parameters (global names) with actor parameters (local names).

Figure 3.4 shows the source code for the SenseTag application, which contains TimerActor, SenseActor, and some downstream actors. The application contains a parameter (global variable) named count, which is initialized to zero and connected to the corresponding parameters of TimerActor and SenseActor. The output port trigger of TimerActor is connected to the corresponding input port of SenseActor, with a queue size of 64. The appstart section declares that an initial token is to be placed in the input port trigger of SenseActor.

Using the tuple notation given in Equation 3.3, the example application can be defined as

    A = (GLOBALS_A = {count},
         VARMAPS_A = {(count, TimerActor.count), (count, SenseActor.count)},
         ACTORS_A = [TimerActor, SenseActor, ...],
         CONNECTIONS_A = {(TimerActor.trigger, SenseActor.trigger), (SenseActor.output, ...)},
         START_A = [SenseActor.trigger]).

3.1.2 Execution model and language semantics

This section discusses the semantics of execution within a component, between components within an actor, and between actors within an application. It also describes which configurations of actors within an application are valid, and includes a discussion of the conditions for well-formedness of an application.

application SenseTag {
  parameter {
    uint16_t count = 0;
  }
  implementation {
    actor TimerActor, SenseActor, ...;
    count = TimerActor.count;
    count = SenseActor.count;
    TimerActor.trigger =[64]=> SenseActor.trigger;
    SenseActor.output => ...;
    appstart {
      SenseActor.trigger();
    }
  }
}

Figure 3.4: Source code for the SenseTag application.

Assumptions

The TinyGALS architecture is intended for a platform with a single processor. A TinyGALS program runs in a single thread of execution (single stack), which may be interrupted by the hardware. There are no other sources of preemption other than hardware interrupts. This section assumes that interrupt handlers are not reentrant, but that interrupts are masked while servicing them (interleaved invocations of the same interrupt handler are disabled). However, other (different) interrupts may occur while servicing an interrupt. This discussion also assumes the existence of a clock, which is used to order events. All memory is statically allocated; there is no dynamic memory allocation.

A piece of code is reentrant if multiple simultaneous, interleaved, or nested invocations do not interfere with each other. Methods that do not access component state will not suffer from race conditions, but may suffer from reentrancy problems. To simplify the discussion, this section assumes that all methods may potentially access component state, which may lead to race conditions and other nondeterminacy issues. This section discusses constraints on what constitutes a valid configuration of components within an actor when using components that contain interrupt handlers in which interrupts are enabled. These constraints are necessary for avoiding unexpected reentrancy.

TinyGALS Components

There are three cases in which a component C may begin execution: (1) an interrupt arrives from the hardware that C encapsulates; (2) an event arrives on the actor input port linked to one of the interface methods of C; or (3) another component calls one of the interface methods of C.

In the first case, the component is a source component; when activated by a hardware interrupt, the corresponding interrupt service routine runs. In the second case, the component is a triggered component; an event on a linked actor input port triggers the execution of a provided method. Both source components and triggered components may call other components via required methods. This results in the third case, where the component is a called component.

Once activated, a component executes to completion. While the method runs, an interrupt may arrive, leading to possible race conditions if the interrupt modifies internal variables (internal state) of the same component. Reentrancy problems may arise if a component is both a source component and a triggered component. Therefore, to improve the ease of analyzability of the system and eliminate the need to make components reentrant, source components must not also be triggered components. The same argument also applies to source components and called components. Therefore, it is necessary that source components only have outputs (required methods) and no inputs (provided methods); source components do not connect to any actor input ports.

In Figure 3.2, mState, mScale, and mInterval are internal variables of component TimerM. When the StdControl.init() method of TimerM is called, the component calls the Clock.setRate() method with the values of mInterval and mScale as its arguments. The call keyword indicates that the Clock.setRate() method is called synchronously (explained further in the next section). The component only needs to know the type signature of Clock.setRate(); it does not matter to which component the method is linked.

TinyGALS Actors

The flow of control between components within a TinyGALS actor occurs on links. A link is a relation within an actor between its port(s), parameter(s), and component method(s). Links represent synchronous communication via method calls. Section 3.1.3 discusses the exact specifics of what types of links are valid; additional rules for linking components together are detailed in the next section. When a component calls a required method with the call keyword, the flow of control in the actor is immediately transferred to the callee component or port.

The external method can return a value through the call just as in a normal method call.⁶

As discussed in the previous section, preemption of the normal thread of execution by an interrupt may lead to reentrancy problems if interrupts are not masked (interrupts are enabled). Therefore, TinyGALS places some restrictions on what configurations of components within an actor are allowed; otherwise, reentrant components are required. Cycles within actors (between components) are not allowed.⁷ Therefore, any valid configuration of components within an actor can be modeled as a directed acyclic graph (DAG). The graph of the components and the links between them is an abstraction of the call graph of the methods within an actor, where the methods associated with a single component are grouped together. One can relax the restriction on cycles between components and only disallow cycles in method call chains between components by first separating the methods within a component into separate source and triggered components.

A source DAG is formed by starting with a source component and following all forward links between it and other components in the actor. A triggered DAG is similar to a source DAG but starts with a triggered component instead. If all interrupts are masked during interrupt handling (interrupts are disabled), then additional restrictions on source DAGs are unneeded. However, if interrupts are not masked, a source DAG must not be connected to any other source DAG within the same actor. Race conditions and reentrancy problems may also occur if source DAGs and triggered DAGs are connected within an actor, as in Figure 3.5(c). Notice that in this case, the source DAG (C1, C3) is connected to the triggered DAG (C2, C3); race conditions and reentrancy problems may occur if C3 is running in a scheduled context and an interrupt causes C1 to preempt C3. Triggered DAGs can be connected to other triggered DAGs, as in Figure 3.5(b), since with a single thread of execution, it is not possible for a triggered component to preempt a component in any other triggered DAG.

The execution of actors is controlled by the scheduler in the TinyGALS runtime system. There are two cases in which an actor R may begin execution: (1) a triggered component is activated, or (2) a source component is activated. In the first case, the scheduler activates the component method linked to an input port of R in response to an event sent to R by another actor, as in Figure 3.5(b). In the second case, R contains a source component which has received a hardware interrupt, as in Figure 3.5(a); R may interrupt the execution of another actor. An actor is considered to have finished executing when the components inside of it have finished executing and control has returned to the scheduler.

⁶In TinyOS, the return value indicates whether the command completed successfully or not.
⁷Recursion within components is allowed; however, the recursion must be bounded for the system to be live.

Figure 3.5: Directed acyclic graphs (DAGs) within actors. (a) A source DAG is activated by a hardware interrupt. (b) A triggered DAG is activated by the arrival of an event at the actor input port. (c) When a source DAG is connected to a triggered DAG, race conditions and reentrancy problems may occur.

Recall that once triggered, the components in a triggered DAG execute to completion. As discussed earlier, the configuration of components inside an actor must not contain cycles and must follow the rules above regarding source and triggered DAGs. TinyGALS additionally places restrictions on what connections are allowed between component methods and actor ports, since some configurations may lead to nondeterministic component firing order.

Let us first assume that both actor input ports and actor output ports are totally ordered (using the order of the ports declared in the port section of the actor definition file), but that components are not ordered. Then actor input ports may be associated either with one provided method of a single component C or with one or more actor output ports. Likewise, required component methods may be associated either with one provided method of a single component C or with one or more actor output ports. Provided component methods may be associated with any number or combination of required component methods and actor input ports, but they may not be associated with actor output ports. Likewise, actor output ports may be associated with any number or combination of required component methods and actor input ports.⁸ However, if we assume that neither actor input ports nor actor output ports are ordered, then actor input ports and required component methods may only be associated with either a single method or with a single output port.

In Figure 3.3, the implementation section of the TimerActor declares that whenever component Trigger calls trigger(), an event is produced at the trigger output port. Likewise, the implementation section of the SenseActor definition declares that whenever the trigger input port is triggered (explained in the next section), the trigger() method of component SenseToInt is called.

⁸In the existing TinyOS constructs, one caller (a required component method) can have multiple callees. The interpretation is that when the caller calls, all the callees are called in a possibly non-deterministic order; a combination of the callees' return values is returned to the caller. Although multiple callees are not part of the TinyGALS semantics, this is supported by the galsC software tools for TinyOS compatibility.
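To see the hazard these rules prevent, consider a minimal sketch in plain C of the situation in Figure 3.5(c): a component C3 reachable both from a source component C1 and from a triggered component C2. The names and bodies below are invented for illustration; they are not galsC output.

#include <stdint.h>

/* Internal state of component C3 (see Figure 3.5(c)). */
static uint16_t c3_state = 0;

/* C3's method: a read-modify-write that is not reentrant. */
static void C3_process(uint16_t delta) {
    uint16_t tmp = c3_state;  /* an interrupt arriving here...           */
    tmp = tmp + delta;        /* ...can invoke C3_process() again...     */
    c3_state = tmp;           /* ...and this store then loses an update. */
}

/* C1 is a source component: its interrupt handler reaches C3. */
void C1_interrupt_handler(void) { C3_process(1); }

/* C2 is a triggered component: the scheduler invokes it for each token. */
void C2_trigger(void) { C3_process(2); }

Disallowing a shared component between a source DAG and a triggered DAG removes exactly this interleaving.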

TinyGALS Application

Each input port of an actor has a FIFO (first-in, first-out) queue. The queue separates the flow of control between actors; communication between actors occurs asynchronously through these queues. When a component within an actor calls a method that is linked to an output port, the arguments of the call are converted into events called tokens. A copy of the token is placed in the event queue of each input port connected to the output port. The call to the output port returns immediately, and the component within the actor can proceed. Later, the TinyGALS scheduler removes the token from the event queue of the input port and calls the method that is linked to the input port with the contents of the token as its arguments. Note that since each input port of an actor R is linked to a component method, each token that arrives on any input port of R corresponds to a future invocation of the component(s) in R. Tokens are placed in input port queues atomically, so other source components cannot interrupt this operation. Tokens are dropped if the input port queue is full; currently, the programmer is responsible for selecting the correct queue size. Communication between actors is also possible without the transfer of data. In this case, an empty message (token) transferred between ports acts as a trigger for activation of the receiving actor.

The execution of a TinyGALS system begins with the initialization of all methods specified in INIT_Ri for all actors Ri. The order in which actors are initialized is the same as the order in which they are listed in the application configuration file. The order in which methods are initialized for a single actor is the same as the order in which they are listed in the actor configuration file. After actor initialization, the TinyGALS runtime system places an initial token at each system start port, which are the input port(s) declared in the appstart section of the application configuration file. If initial arguments to the port were declared in the application configuration file, these are stored in the token. For example, in Figure 3.4, the application starts when the runtime system places an initial token at the input port trigger of SenseActor, and the components in the triggered DAG of the starting actor execute to completion. They may generate one or more events at the output port(s) of the actor.

The scheduler processes tokens in the order in which they are generated. When the system is not responding to interrupts or events on input ports, the system does nothing (i.e., sleeps). During execution of a TinyGALS application, interrupts may occur and preempt the normal thread of execution; after the interrupt service routine completes, control eventually returns to the normal thread of execution.
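The mechanics just described can be sketched in plain C. The following is only an illustration of the queueing and dispatch behavior — the names, the fixed uint16_t payload, and the single hard-wired port are assumptions, not the code that the galsC compiler actually generates:

#include <stdint.h>
#include <stdbool.h>

#define QUEUE_SIZE 64            /* from TimerActor.trigger =[64]=> ... */

typedef struct {
    uint16_t tokens[QUEUE_SIZE]; /* token payloads for one input port */
    uint8_t head;                /* index of the oldest token         */
    uint8_t count;               /* number of queued tokens           */
} PortQueue;

static PortQueue sense_trigger;  /* queue of SenseActor.trigger       */

extern void SenseToInt_trigger(uint16_t arg);  /* linked method (stub) */

/* Called when an upstream output port produces a token; in the real
   runtime this insertion is atomic. A full queue drops the token.    */
bool port_put(PortQueue *q, uint16_t token) {
    if (q->count == QUEUE_SIZE) return false;        /* token dropped */
    q->tokens[(q->head + q->count) % QUEUE_SIZE] = token;
    q->count++;
    return true;
}

/* One scheduling step: remove the oldest token and invoke the method
   linked to the input port, which then runs to completion.           */
bool scheduler_step(void) {
    if (sense_trigger.count == 0) return false;   /* nothing to do: sleep */
    uint16_t token = sense_trigger.tokens[sense_trigger.head];
    sense_trigger.head = (sense_trigger.head + 1) % QUEUE_SIZE;
    sense_trigger.count--;
    SenseToInt_trigger(token);
    return true;
}

A full queue silently drops the token, which is why the programmer must size each connection's queue (here 64, as in Figure 3.4) for the worst-case event burst.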

Figure 3.6: A single-output, multiple-input connection. The output port A_out of Actor A is connected to the input ports B_in of Actor B and C_in of Actor C.

The runtime system maintains a global event queue which keeps track of the tokens in all actor input port queues in the system. During execution, the runtime system activates the actors corresponding to the tokens in the global event queue using FIFO scheduling. More sophisticated scheduling algorithms can be substituted, such as ones that take care of timing and energy concerns. The TinyGALS semantics do not define exactly when an input port is triggered. The current galsC implementation processes the tokens in the order that they are generated, as defined by the hardware clock. Tokens that are produced at the same "time" are processed with respect to the global input port ordering, which the next paragraph discusses. See Section 3.2 for a discussion of interrupts and their effect on the order of events in the global event queue. Section 3.2 also discusses the ramifications of token generation order on the determinacy of the system.

Tokens generated at the same logical time are ordered according to the global ordering of actor input ports. Input ports are first ordered by actor order, as the actors appear in the application configuration file, then in the order in which they are declared in the actor configuration file.

The previous section discussed limitations on the configuration of links between components within an actor. Connections between actors are much less restrictive. Actor output ports may be connected to one or more actor input ports, and actor input ports may be connected to one or more actor output ports. Cycles are allowed between actors; this does not lead to reentrancy problems because the queue on an actor input port acts as a delay in the loop. A single-output, multiple-input connection acts as a fork. For example, in Figure 3.6, every token produced by A_out is duplicated and triggers both B_in and C_in. A multiple-output, single-input connection has merge semantics, such that tokens from multiple sources are merged into a single stream in the order that the tokens are produced. This type of merge does not introduce any additional sources of nondeterminacy.
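A fork connection like the one in Figure 3.6 can be sketched the same way, in plain C with invented names and a toy queue: the put operation for A_out simply copies the token into each destination queue in a fixed order.

#include <stdint.h>

/* Stand-ins for the input port queues of actors B and C. */
typedef struct { uint16_t buf[4]; uint8_t n; } Q;
static Q B_in, C_in;

static void put(Q *q, uint16_t tok) {   /* enqueue; drop on overflow */
    if (q->n < 4) q->buf[q->n++] = tok;
}

/* Fork: each token produced at A_out is duplicated into every
   connected input port queue, in a fixed order.                 */
void A_out_put(uint16_t tok) {
    put(&B_in, tok);   /* copy for Actor B */
    put(&C_in, tok);   /* copy for Actor C */
}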

TinyGUYS

The TinyGALS programming model has the advantages that actors become decoupled through message passing and are easy to develop independently. However, each message passed triggers the scheduler and activates a receiving actor, which may quickly become inefficient if there is global state that must be updated frequently. The TinyGUYS (Guarded Yet Synchronous) mechanism provides a way for actors to share global data safely. This is implemented as the parameter feature in the galsC programming language. TinyGUYS have global names that are mapped to the local parameter names of each actor. This design does not require parameter names to appear inside the component name space; one can develop components in their own scope, independent of the connected parameters.

One must be very careful when implementing global data spaces in concurrent programs. Several actors may access the same global variables at the same time. It is possible that while an actor is reading the variables, an interrupt may occur and preempt the read. The interrupt service routine may modify the global variables. When the actor resumes reading the remaining variables after handling the interrupt, it may see an inconsistent state.

In the TinyGUYS mechanism, global variables (parameters) are guarded. Actors may read a parameter synchronously (i.e., without delay). However, writes to the parameter are asynchronous in the sense that all writes are delayed. A write to a TinyGUYS global variable is actually a write to a copy of the global variable; one can think of this as a write buffer of size one. Because there is only one buffer per global variable, the last actor to write to the variable "wins," i.e., the last value written will be the new value of the global variable. Parameters are updated atomically by the scheduler only when it is safe (i.e., after an actor finishes and before the scheduler triggers the next actor). One can think of this as a way of formalizing race conditions; Section 3.2.2 discusses how to eliminate race conditions.

A component interface method or an actor port can write to a parameter by calling a connected function with a single argument. A component interface method or an actor port can read a parameter when the method or port is invoked, by passing the parameter value as one of the arguments. In TimerActor in Figure 3.3, the Counter.IntOutput.output method has a single argument which is written to the count parameter whenever the method is called. In SenseActor in Figure 3.3, the count parameter is passed as the last argument to the output port.
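The following plain C sketch captures the write-buffer-of-size-one scheme described above for a single parameter such as count. The names are invented and the real generated accessors differ, but the division of labor is the same: reads are immediate, writes go to the buffer, and the scheduler commits between actor iterations.

#include <stdint.h>
#include <stdbool.h>

static uint16_t count_value;    /* committed copy; reads are immediate  */
static uint16_t count_buffer;   /* write buffer of size one             */
static bool     count_dirty;    /* is a write pending?                  */

uint16_t count_get(void) {                /* fast, "synchronous" read   */
    return count_value;
}

void count_put(uint16_t v) {              /* delayed write; last wins   */
    count_buffer = v;
    count_dirty  = true;
}

/* Run by the scheduler after an actor finishes and before the next
   actor is triggered, so readers never see a half-updated value.       */
void tinyguys_commit(void) {
    if (count_dirty) {
        count_value = count_buffer;
        count_dirty = false;
    }
}

Because the commit happens only between iterations, a parameter read several times within one iteration always returns the same value.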

3.1.3 Link model within actors

A link x → y inside an actor consists of a source x and a target y. The equations below use regular expressions to describe possible entities of x and y, where l is the local name of a parameter, p is an actor port name, and f is a component interface function (method):

    source = (l)* (p | f) (l)*        (3.4)
    target = l | p | f                (3.5)

A trigger is a port or function that appears as the source of a link. A port is triggered when the scheduler invokes it with the first token in its queue. A function is triggered when it is called by another function.

A link x → y is valid if the number of arguments and the types of the arguments of the source match those of the target when the arguments on each side of the arrow are concatenated separately, similar to the notion of record types [100]. The return type of a trigger must also match that of the target. For example, suppose f1 is a required method with exactly two arguments. The link (f1, l1) → p1 is valid if p1 is an output port that has exactly three arguments whose types match those of the left hand side (i.e., the types of the first two arguments of p1 must match those of f1, and the type of the last argument of p1 must match that of l1) and if the return type of f1 matches that of p1.

Additionally, a source port must be an input port and a target port must be an output port, and a source function must be a required method and a target function must be a provided method.⁹

Using the regular expression model, the following enumerates the valid types of links, where l in (t, l) is an abbreviation for any number of parameters appearing before or after the trigger t:

• Without parameters
  – p1 → p2 [When the input port p1 is triggered, transfer the token directly from p1 to the output port p2.]
  – p1 → f1 [When the input port p1 is triggered, trigger a function f1.]
  – f1 → p1 [When the function f1 is triggered, create a token from the arguments of the function f1 and send it to the output port p1.]
  – f1 → f2 [When the function f1 is triggered, trigger another function f2.]

⁹This model also applies to connections at the application level. However, the port directions must be reversed: a source port must be an output port and a target port must be an input port. Also, note that functions do not appear at the application level, and global parameter names should be used instead of local parameter names.

• With parameters
  – Parameter GET
    * (p1, l) → p2 [When the input port p1 is triggered, concatenate the arguments of p1 with the current value of the parameter(s) l, and send the resulting token directly to the output port p2.]
    * (p1, l) → f1 [When the input port p1 is triggered, concatenate the arguments of p1 with the current value of the parameter(s) l, and trigger a function f1 with the corresponding arguments.]
    * (f1, l) → p1 [When the function f1 is triggered, concatenate the arguments of f1 with the current value of the parameter(s) l, and send the resulting token to the output port p1.]
    * (f1, l) → f2 [When the function f1 is triggered, concatenate the arguments of f1 with the current value of the parameter(s) l, and trigger another function f2 with the corresponding arguments.]
  – Parameter PUT
    * p → l [When the input port p is triggered, write its argument to a parameter l.]
    * f → l [When the function f is triggered, write its argument to a parameter l.]
  – Parameter GET/PUT
    * (p, l1) → l2 [When the input port p is triggered, read the current value of the source parameter l1 and write it to the target parameter l2.]
    * (f, l1) → l2 [When the function f is triggered, read the current value of the source parameter l1 and write it to the target parameter l2.]

Table 3.1: Summary of valid types of links in TinyGALS/galsC.

  No parameters:      p1 → p2    p1 → f1    f1 → p1    f1 → f2
  Parameter GET:      (p1, l) → p2    (p1, l) → f1    (f1, l) → p1    (f1, l) → f2
  Parameter PUT:      p → l    f → l
  Parameter GET/PUT:  (p, l1) → l2    (f, l1) → l2

For links with no parameters, the trigger either (a) triggers the connected function or (b) passes a token to the connected output port. In a parameter GET (read) link, the parameter value(s) are appended to the trigger's argument list and passed to the connected function or port. In a parameter PUT (write) link, the trigger writes its argument to the parameter. In a parameter GET/PUT (read/write) link, the trigger causes the source parameter to be read and its value stored in the target parameter. Note that for the number of arguments to match, the trigger in a parameter PUT link must have only one argument, and the trigger in a parameter GET/PUT link must have no arguments.
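These forms can be matched against the concrete syntax already shown in Figure 3.3: trigger -> SenseToInt.trigger in SenseActor is an instance of p1 → f1; Trigger.trigger -> trigger in TimerActor is an instance of f1 → p1; Counter.IntOutput.output -> count is a parameter PUT link of the form f → l; and (SenseToInt.output, count) -> output is a parameter GET link of the form (f1, l) → p1.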

What are the semantics of multiple links (i.e., fanout from a function)? For example, what is the order of computation if one has f1 → l1 and f1 → f2? Or if one has f1 → l1 and f1 → p? In TinyGALS, the write to the parameter occurs first, before any additional computation or transfer of control. The buffered parameter value may then get overwritten in the later computation. This policy provides a consistent view of ordering in the system.

3.1.4 Type inference and type checking

The galsC compiler performs high level type inferencing on the connection graph of an application. There are two parts to the type inference system: connections with ports, and connections with parameters but no ports.¹⁰

Ports

In galsC, ports are untyped. The actual types of ports are inferred from the connection graph of a galsC program. In Figure 3.7, actor A contains a component which has a call to function f with type signature τ1. The input port of actor B is the target of the concatenation of the output port of A with a parameter with type τ3. The output port of B is the target of the concatenation of the input port of B and a parameter with type τ5. The output port of B is directly connected to the input port of actor C. The input port of C is a trigger for a function with type signature τ8. The known types (τ1, τ3, τ5, τ8) are shown in bold.

Figure 3.7: Type checking example. Actor A (call f(), types τ1, τ2) feeds actor B (parameter τ3, input τ4, parameter τ5, output τ6), which feeds actor C (input τ7, function f() with type τ8).

¹⁰Connections containing only functions are checked with the nesC type checker.

One can write a type equation for each connection in the system:

    τ1 = τ2
    τ2 × τ3 = τ4
    τ4 × τ5 = τ6
    τ6 = τ7
    τ7 = τ8

One can then solve the set of equations to determine the types of the ports. For example, if the argument lists denoted by τ1, τ3, and τ5 were each a single uint16_t, then τ2 would be a single uint16_t, τ4 a pair of uint16_t, and τ6 = τ7 = τ8 a triple of uint16_t; any other signature for the function in actor C would be a type error. A valid system has a unique solution to the set of equations. The galsC compiler detects a type error when the set of equations conflicts with itself or is unsolvable. In this way, the galsC compiler derives types for all ports in the system by matching the return type and the argument types of all connected upstream and downstream functions.

Parameters

The type system for parameter connections without ports is straightforward, since there are only two types of connections: (1) mappings between a global name and a local name, and (2) links between a function and a local name. Since the types of all of these sources and targets are known, the type checker merely verifies that all the types in a connection match each other.

3.1.5 Summary

In TinyOS, many components that are wrappers for device drivers are "split phase," which means that they are actually both source and triggered components. A higher level component can call the device driver component to ask for data. This call returns immediately. Later, the device driver component interrupts with the ready data. The hidden source aspect of these types of components may lead to TinyOS configurations with race conditions or other synchronization problems. Although the TinyOS architecture allows components to reject concurrent requests, it is up to the software developer to write thread-safe code. This job is quite difficult, especially after components are wired together and may have interleaved events. The previous sections showed how the TinyGALS component model enables users to analyze potential sources of concurrency problems more easily, by identifying source, triggered, and called components, and by defining what kinds of links and connections between components, ports, and parameters are valid.
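The split-phase pattern described above can be sketched in plain C (invented names; real TinyOS drivers are nesC components using command/event pairs):

#include <stdint.h>

/* Split-phase device driver. Phase one: the request returns immediately.
   Phase two: the data arrives later via an interrupt, which makes the
   driver a source component in addition to a called one -- the "hidden
   source" that can race with code running in the scheduled context.     */

static void (*data_ready)(uint16_t);   /* registered completion callback */

/* Phase one: a higher-level component asks for data; no blocking here. */
void adc_get_data(void (*callback)(uint16_t)) {
    data_ready = callback;
    /* ...start the hardware conversion here... */
}

/* Phase two: the conversion-complete interrupt delivers the sample. */
void adc_interrupt_handler(uint16_t sample) {
    if (data_ready) data_ready(sample);
}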

3.2 Concurrency and Determinacy Issues

Concurrency management is a significant concern in event-driven systems. Poorly implemented systems may suffer from deadlock (i.e., where no tasks can proceed due to blocking on a shared resource), livelock (i.e., where the system falls into a dead loop and responds to no further interrupts), and race conditions (i.e., where shared variables are accessed by multiple threads at the same time). This section only considers concurrency issues on single processor platforms.

3.2.1 Concurrency

There are two mechanisms for actors to communicate in TinyGALS: event queues (ports) and guarded global variables (parameters). A TinyGALS program runs in a single thread of execution (single stack), which may be interrupted by the hardware. The execution activated by the scheduler is called the scheduled context, and the execution triggered by interrupts is called the interrupt context. Since all scheduled executions of actors are in the scheduled context and controlled sequentially by the scheduler, the only possibility for cross-actor concurrent execution is when one actor is in the scheduled context and one or more other actors are in an interrupt context. In TinyGALS, there is no dynamic memory allocation; all memory is statically allocated.

Theorem 1. Deadlock is not possible across actors.

Blocking on shared resources (e.g., a blocking read) is not part of the semantics across actors, so no actor can block waiting for another.

Interestingly, in event-driven systems it is possible for a scheduler to retain control and disable interrupts indefinitely, since there are critical system operations, such as enqueuing and dequeuing events, which are atomic. In Figure 3.8, the Loop actor is first triggered by an internal interrupt. There is a direct link between the input port and the output port inside the actor, so the triggered execution produces an event (token) at the output port, and the event loops back to the input port, where it is inserted into the event queue. Can this self-loop prevent further interrupts from entering the system? Once the event is enqueued, the scheduler first dequeues the event with interrupts disabled, then calls the function connected to the inside of the input port (in this case the put() function of the output port). Within the put() function, the code that inserts the event back into the event queue is also atomic.

Figure 3.8: A self-loop actor triggered by an interrupt.

Thus, without a careful implementation of the scheduler, there is a risk of livelock. However, in the galsC scheduler, interrupts are enabled between dequeuing the event and enqueuing the event, so future interrupts will not be blocked.

Theorem 2. Livelock is not possible across actors.

Race conditions are another major concurrency concern. Since there are shared data between actors, an actor may be in the midst of writing the data when another actor tries to read the data. Two actors may also try to write to a shared variable at the same time.

Theorem 3. Race conditions are not possible across actors.

There are two forms of shared data across actors: tokens and parameters. Tokens are stored in event queues, and access to them is atomic and controlled by the scheduler. Parameters are always guarded, and their value updates are again controlled by the scheduler (where the last value written wins). These issues were discussed in Section 3.1.

As a result of these claims, concurrency errors will not happen at the application level across actors. Thus, programmers can focus on concurrency issues within each actor, which is a problem with a much smaller scope.
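The scheduler structure that makes this work can be sketched in plain C. Everything below is illustrative — the masking functions stand in for platform-specific instructions, and the queue is a toy — but it shows why the self-loop of Figure 3.8 cannot starve interrupts: interrupts are re-enabled in the window between the atomic dequeue and the atomic re-enqueue.

#include <stdbool.h>

/* Placeholder interrupt masking (the real ones are platform-specific). */
static void disable_interrupts(void) { /* e.g., cli() */ }
static void enable_interrupts(void)  { /* e.g., sei() */ }

static int queue[8];                 /* stand-in global event queue */
static int head, count;

static bool dequeue_next(int *tok) {         /* runs with interrupts off */
    if (count == 0) return false;
    *tok = queue[head];
    head = (head + 1) % 8;
    count--;
    return true;
}

static void put(int tok) {                   /* inside of the output port */
    disable_interrupts();                    /* insertion itself is atomic */
    if (count < 8) queue[(head + count++) % 8] = tok;
    enable_interrupts();
}

/* The self-loop of Figure 3.8: the linked function immediately re-enqueues. */
static void loop_input_trigger(int tok) { put(tok); }

void scheduler_loop(void) {
    for (;;) {
        int tok;
        disable_interrupts();
        bool have = dequeue_next(&tok);
        enable_interrupts();      /* interrupts are served in this window, */
        if (have)                 /* so the self-loop cannot starve them   */
            loop_input_trigger(tok);
    }
}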

3.2.2 Determinacy

Notice that the lack of concurrency errors does not mean TinyGALS programs are deterministic. The system state of a TinyGALS program consists of (1) the internal state of all components, (2) the contents of the global event queue,¹¹ and (3) the values of all global parameters. The question of determinacy is: given a unique initial state of a TinyGALS program and a set of known interrupts (in terms of both interrupt time and value), will the program have a unique state trajectory, independent of the execution/CPU speed? Note that single thread sequential programs, where all inputs are read into the system, are determinate. Concurrent models, such as Kahn process networks, can also be determinate [52], although this sacrifices real-time properties. However, for event-driven systems, determinacy may be sacrificed for reactiveness.

¹¹The global event queue is defined as the ordered sequence of tokens in the event queues of all actor ports.

This section analyzes the determinacy property of TinyGALS programs, beginning with definitions for a TinyGALS system, system state (including quiescent system state and active system state), actor iteration (in response to an interrupt and in response to an event), and system execution. This section also reviews the conditions for well-formedness of a TinyGALS system.

Definition 1 (System). A system consists of an application and a global event queue.

Recall from Equation 3.3 that an application is defined as A = (GLOBALS_A, VARMAPS_A, ACTORS_A, CONNECTIONS_A, START_A). Recall also that the input port associated with a connection between actors has a FIFO queue for ordering and storing events destined for the input port. The global event queue provides an ordering for tokens in all input port queues: whenever a token is stored in an input port queue, a representation of this event is also inserted into the global event queue. Thus, events that are produced earlier in time with respect to the system clock appear in the global event queue before events that are produced later in time. Events that are produced at the same time (e.g., as in Figures 3.9 or 3.6) are ordered first by order of appearance in the application actors list (ACTORS_A), then by order of appearance in the actor's input ports list (an ordered list created from the actor's input ports set INPORTS_R).

Figure 3.9: Two events, (event, t0) and (event, t0), are produced at the same time by actor R.

Definition 2 (System state). The system state consists of four main items:

1. The values of all internal variables of all components (V_Ci).

2. The contents of all of the queues associated with actor input ports in the application.

3. The contents of the global event queue.

4. The values of all TinyGUYS (GLOBALS_A).

The system state is either quiescent or active:

Definition 2.1 (Quiescent system state). A system state is quiescent if there are no events in the global event queue, and hence, no events in any of the actor input port queues in the system.

Definition 2.2 (Active system state). A system state is active if there is at least one event in the global event queue, and hence, at least one event in the queue of at least one actor input port.

Recall that the global event queue contains the events in the system, but the actor input ports contain the data associated with the event, encapsulated as a token. Note that a TinyGALS system starts in an active system state, since execution begins by triggering an actor input port.

Definition 3 (Component execution). Component execution is the execution of the code in the body of the interrupt service routine or method through which the component has been activated.

A source component is activated when the hardware it encapsulates receives an interrupt. A triggered or called component C is activated when one of its provided methods is called. Note that the code executed upon component activation may call other methods in the same component or in a linked component; component execution also includes execution of all external code until control returns and execution of the code body has completed.

Definition 4 (Actor iteration). An iteration of an actor R is the execution of a subset of the components inside of R in response to either an interrupt or an event at an input port.

Execution of the system can be partitioned into actor iterations based on component execution. The following defines these two types of actor iterations in more detail, including what is meant by "subset of components."

Definition 4.1 (Actor iteration in response to an interrupt). Suppose actor R is iterated in response to interrupt I. Let C be the component that contains the interrupt handler of I. Recall from Section 3.1.2 that C therefore must be a source component. Create a source DAG D by starting with C and following all forward links between C and other components in R. Iteration of the actor consists of the execution of the components in D beginning with C.

Note that iteration of the actor may cause it to produce one or more events on its output port(s).

Definition 4.2 (Actor iteration in response to an event). Suppose actor R is iterated in response to an event E stored at the head of one of its input port queues, Q. Let C be the component linked to the input port of Q. Recall from Section 3.1.2 that C therefore must be a triggered component. Create a triggered DAG D by starting with C and following all forward links between C and other components in R. Iteration of the actor consists of the execution of the components in D beginning with C.

As with the interrupt case, iteration of the actor may cause it to produce one or more events on its output port(s).

Definition 5 (System execution). Given a system state and zero or more interrupts, system execution is the iteration of actors until the system reaches a quiescent state.

The following discusses how to choose the actor iteration order: the order in which actors are executed is the same as the order of events in the global event queue.

Conditions for well-formedness

Below is a summary of the conditions that the components within a single TinyGALS actor must satisfy to be well-formed and avoid concurrency problems, as discussed in Sections 3.1.2 and 3.2.1. This assumes that an interrupt whose handler is running is masked, but other interrupts are not masked.

• Source components may neither also be triggered components nor called components.
• Cycles among components within an actor are not allowed, but loops around actors are allowed.
• Component source DAGs must not be connected to other source DAGs.
• Component source DAGs and triggered DAGs must be disconnected, but triggered DAGs may be connected to other triggered DAGs.
• Input ports may be associated with a single method of a single component, or with one or more output ports.
• Outgoing component methods may be associated with a single method of another component, or with one or more output ports.

. one can analyze the determinacy of a TinyGALS system. Recall that a TinyGALS system starts in an active system state. there is only one system execution path. In the intuitive notion of determinacy. rn q1 quiescent state a0. A TinyGALS system is determinate. Components in this triggered DAG execute and may generate events at the output port(s) of the actor.45 an actor iteration interrupt I r0 q0 quiescent state r1 a0. the system always produces the same outputs and ends up in the same state after responding to the interrupts.10: A single interrupt. r1 .n−1 active states Figure 3. this section first discusses determinism of a TinyGALS system in the case of a single interrupt occurring in a quiescent state. the actor selected is determined by the order of events in the global event queue. System execution proceeds until the system reaches a quiescent state. This section then discusses determinism for one or more interrupts during actor iteration in the cases (1) where there are no global variables and (2) where there are global variables... as is usually true in an event-driven system? . and in each of the steps r0 . . for each quiescent state and a single interrupt. rn . Determinacy Given the definitions in the previous section. given an initial quiescent system state and a set of interrupts that occur at known times. Figure 3. The component C is a triggered component. .1 r2 . What if one or more interrupts occur during an actor iteration.0 a0.10 depicts iteration of a TinyGALS system between two quiescent states due to activation by an interrupt I. which is part of a DAG. From this quiescent state. The application start port is an actor input port which is in turn linked to a component C inside the actor. that is. between quiescent states. A system is determinate if. . since the system execution path is the order in which the actors are iterated. Theorem 4 (Determinacy).

. .g... This is illustrated in Figure 3. Figure 3. In Figure 3.k refers to an active system state after an interrupt Ii starting from quiescent state q j and after actor iteration rk .k states. . .. This section first examines the case where there are no TinyGUYS global variables. .11 shows a system execution in which a single actor iteration is interrupted by multiple interrupts.0 iteration of the actor corresponding to interrupt I2 from q0 . the superscript x in ax j. . and hence insertions into the global event queue.. Consider an actor R that contains a component C which produces events on the output ports of R. Since source DAGs must not be connected to triggered DAGs. . In . I2 does not interrupt the handling of I1 ). I2 .0 to interrupt I1 from quiescent state q0 . the interrupt(s) may cause insertion of events into other actor input port queues.46 I1 I2 In I0 .11. This is a source of non-determinacy. one can “add” the combined system j. This section assumes that the handlers for interrupts I1 . Suppose the iteration of actor R is interrupted one or more times.0 ax 0. the interrupt(s) cannot cause the production of events on output ports of R that would be used in the case of a normal uninterrupted iteration.1 ax 0. .k is a shorthand for the sequence of interrupts I0 . Then the system state . I2 . Depending on the¡ relative timing between the interrupts and the production of events by C at the output ports of R. then one can predict the state of the system after a single actor iteration even if it is interrupted one or more times. In the TinyGALS notation. aij. . the order of events in the global event queue may not be consistent between multiple runs of the system if the same interrupts occur during the same actor iteration. Suppose active state a1 would be the next state after an iteration of the actor corresponding 0. Determinacy of a system without global variables.n q1 Figure 3. A partial solution for reducing non-determinacy in the system is to delay producing outputs from the actor being iterated until the end of its iteration. q0 ax 0.12. In execute quickly enough such that they are not interleaved (e.11: One or more interrupts where actors have delayed output. I1 . In order to determine the value of active system state ax . This approach is taken by models of computation such as timed multitasking [69] and Giotto [44]. but at slightly different times. If one knows the order of interrupts. . However. and that active state a2 would be the next state after an 0.

Figure 3.12: Active system state after one interrupt: an interrupt Ii from quiescent state q0 yields active state a^i_{0,0}.

Then the system state after the completion of the interrupt handlers for interrupts I1 and I2, but before the completion of the iteration of actor R in response to interrupt I0, would be a^1_{0,0} + a^2_{0,0}, where the value of this expression is the system state in which the new events produced in active system state a^2_{0,0} are inserted (or "appended") into the corresponding actor input port queues in active system state a^1_{0,0}. One can extend this to any finite number of interrupts, as shown in Figure 3.13. If the interrupts are interleaved, one must add the system state (append actor input port queue contents) in the order in which the interrupt handlers finish. It is necessary that the number of interrupts be finite for liveness of the system. However, it is also necessary that interrupt handling be fast enough that the handling of the first interrupt I0 completes in a reasonable length of time.

Figure 3.13: Active system state determined by adding the active system state after each non-interleaved interrupt: q0 yields a^1_{0,0}, then a^1_{0,0} + a^2_{0,0}, ..., then a^1_{0,0} + a^2_{0,0} + ... + a^n_{0,0} + a^0_{0,0}.

One can also queue interrupts in order to eliminate preemption. Another solution, which leads to greater predictability in the system, is to preschedule actor iterations. That is, if an interrupt occurs, a sequence of actor iterations is scheduled and executed, during which interrupts are masked. From a performance perspective, both of these approaches reduce the reactiveness of the system.

Two other solutions trade reactiveness for predictability. One can queue interrupts in order to eliminate preemption, which leads to greater predictability in the system. Another solution is to preschedule actor iterations; that is, if an interrupt occurs, a sequence of actor iterations is scheduled and executed, during which interrupts are masked. This may require that the processing speed be quick enough to process all triggered execution before the next interrupt occurs. From a performance perspective, both of these approaches reduce the reactiveness of the system. An extreme version of this case is the "synchronous" assumption in synchronous/reactive models, where the processing speed is infinitely fast, there is pure reactive execution, interrupts occur only at quiescent states, and it takes zero time to react to external events [39].

Determinacy of a system with global variables. This section now discusses system determinacy in the case where there are TinyGUYS global variables. Suppose that actor R writes to a global variable. Also suppose that the iteration of actor R is interrupted, and that a component in the interrupting source DAG writes to the same global variable. Then, without timing information, one cannot predict the final value of the global variable at the end of the iteration. In general, the state of the system after the iteration of actor R is interrupted by one or more interrupts is highly dependent on the time at which the components in R write to the global variable(s); a concrete sketch of this race appears after the list of solutions below. The source of non-determinacy is the preemptive handling of interrupts. There are several possible alternatives for eliminating this source of non-determinacy:

Solution 1. Allow only one writer for each TinyGUYS global variable. That is, if a component in a source DAG writes to a TinyGUYS global variable, then no component in any triggered DAG can be a writer. Likewise, if a component in a triggered DAG writes to a TinyGUYS global variable, then no component in any source DAG can be a writer (but components in other triggered DAGs are allowed, since they cannot execute at the same time).

Solution 2. Allow multiple writers, but only if they can never write at the same time. Components in other source DAGs are only allowed to write if all interrupts are masked.

Solution 3. Delay writes to a TinyGUYS global variable by an iterating actor until the end of the iteration. (Note that when read, a global variable then always contains the same value throughout an entire actor iteration.)

Solution 4. Prioritize writes, such that once a high-priority writer has written to the TinyGUYS global variables, lower-priority writes are lost.
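The following C fragment is a minimal sketch of the race described above; the variable and function names are invented for illustration and do not come from the galsC toolset.

    /* An actor iteration and an interrupting source DAG both write the
       same unguarded global.  The final value is timing-dependent. */
    #include <stdint.h>

    volatile uint16_t g;            /* shared global, no TinyGUYS guard */

    void R_iteration(void) {        /* triggered actor R's iteration */
        g = 1;                      /* early write */
        /* ... long computation; the interrupt may fire anywhere here ... */
        g = g + 1;                  /* late write that also reads g */
    }

    void source_dag_handler(void) { /* runs in interrupt context */
        g = 100;
    }

    /* If the interrupt fires between the two writes, the iteration ends
       with g == 101; if it fires after both, g == 100.  Without exact
       timing information the final value cannot be predicted, which is
       the non-determinacy that Solutions 1-4 eliminate. */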

3.2.3 Summary

A TinyGALS program is determinate in a restricted case: as discussed above, when the sequence of non-interleaved interrupts is known, the system state after an actor iteration can be predicted. As currently defined, however, a TinyGALS program is non-determinate. Suppose that while an actor is being iterated, it is interrupted by another actor. If both of these actors produce events at their output ports, the order of events in the global event queue may not be consistent when the system is executed at different speeds. If both of these actors write to a global variable (i.e., a parameter), then without exact timing information, one cannot predict the final value of the global variable at the end of the iteration. However, event-driven systems are usually designed to be reactive; in these cases, interrupts should be considered as high-priority events which should affect the system state as soon as possible.

3.3 Code Generation

The highly structured architecture of the TinyGALS model enables automatic generation of the communication and scheduling code for galsC programs. Given the definitions for the components, actors, parameters, and application, the galsC compiler automatically generates all the code necessary for (1) component links and actor connections, (2) communication between actors, (3) TinyGUYS global variable reads and writes, and (4) system initialization and start of execution. This allows software developers to avoid writing error-prone concurrency control code.

The galsC toolset is an extension of the nesC 1.1 toolset. The galsC compiler uses the link model described in Section 3.1 to check links and connections, and to infer and check types in the system graph of ports, parameters, and functions (methods). The galsC compiler takes advantage of a real compiler backend and can compile both nesC and galsC programs; it uses traditional compiler techniques, including type checking, dead code elimination, and function inlining. The galsC compiler also inherits the data-race detection feature of nesC. The detection feature is modified for galsC, since the decoupling of execution through ports eliminates some possible sources of race conditions. The output of the galsC compiler can be cross-compiled for any platform used with TinyOS, including the Berkeley motes.

The discussion throughout this section uses the example system illustrated in Figure 3.14, an annotated version of the SenseTag application example shown in Figure 3.1 at the beginning of this chapter. Tables 3.2 and 3.3 show a summary of the generated functions and data structures for galsC. This section also gives an overview of the implementation of the TinyGALS scheduler and how it interacts with TinyOS, as well as data on the memory usage of TinyGALS.

Figure 3.14: Code generation for the SenseTag application. (The figure shows the TimerActor and SenseActor actors from the SenseTag example, with their TimerC, Trigger, SenseToInt, and Photo components and the count parameter, annotated with generated data structures and functions such as SenseActor$trigger$arg0[64], SenseActor$trigger$put(), SenseActor$trigger$get(), SenseActor$trigger$head, SenseActor$trigger$count, GALSC_params, GALSC_params_buffer, GALSC_sched_init(), GALSC_sched_start(), and GALSC_eventqueue[].)

Table 3.2: Generated code for ports in galsC.

  Function or variable name   Per port[12]   Function   Description
  GALSC_sched_init()                         X          Initialize scheduler data structures.
  GALSC_sched_start()                        X          Put initial tokens into input port queues.
  GALSC_eventqueue[]                                    Event queue for the TinyGALS scheduler.
  actor$port$put()            X              X          Put token into input port queue.
  actor$port$get()            X              X          Get token out of input port queue.
  actor$port$argi[]           X                         Queue for the ith argument of the input port.[13]
  actor$port$head             X                         Points to the beginning of the input port queue.[13]
  actor$port$count            X                         Number of tokens in the input port queue.[13]

[12] "Per port" indicates that this function or variable is generated for each input port. If not indicated, there is only one instance of the function or variable for the entire galsC program.
[13] This variable is not generated if the port has no arguments (i.e., the token contains no data).

Table 3.3: Generated code for parameters (TinyGUYS) in galsC.

  Function or variable name   Per parameter[14]   Function   Description
  GALSC_params                                               Contains all of the parameters.
  GALSC_params_buffer                                        Copy of GALSC_params.
  parameter$put()             X                   X          Write to parameter buffer.
  parameter$get()             X                   X          Read from parameter.

[14] "Per parameter" indicates that this function or variable is generated for each parameter. If not indicated, there is only one instance of the function or variable for the entire galsC program.

3.3.1 Links and connections

The compiler generates a set of aliases and mapping functions that create the links between components, as well as the connections between actors. The mapping functions for the links between components are the same as in the original nesC compiler; these are intermediate functions that call the destination function. In the example in Figure 3.14, for the links between the TimerControl interfaces of the Trigger and TimerC components, the galsC compiler generates an alias and a mapping function for each method of the interface. For the init() method of the TimerControl interface[15], the alias and destination for the link is TimerM$StdControl$init() (see Figure 3.2 for the source code of the TimerC and TimerM components); the galsC compiler generates a mapping function named Trigger$TimerControl$init(), which calls TimerM$StdControl$init(). The galsC compiler also generates similar aliases and mapping functions for connections between actors, though the called function is a put() or get() function for an actor port, as detailed in the next section.

[15] Here, TimerControl is an alias for StdControl that is explicitly declared in the declaration of the Trigger component using the as keyword in nesC.
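In plain C, a generated mapping function of this kind is just a forwarding call. The sketch below is illustrative only (the actual generated bodies are not shown in the text) and assumes nesC's convention of $-separated names in generated C code, which relies on a GCC extension that permits $ in identifiers:

    typedef unsigned char result_t;        /* nesC-style result type (assumption) */

    result_t TimerM$StdControl$init(void); /* destination of the link */

    /* Generated intermediate (mapping) function: a call through the Trigger
       component's TimerControl alias resolves to TimerM's StdControl.init(). */
    result_t Trigger$TimerControl$init(void) {
        return TimerM$StdControl$init();
    }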

3.3.2 Communication

The compiler automatically generates a set of scheduler data structures and functions for each connection between actors. For each input port of an actor, the compiler generates a queue of width m and length n, where m is the number of arguments in the linked component method, and n is the length specified by the programmer in the application definition file. If the linked component method has no arguments, then as an optimization, the compiler does not generate a queue for the port, but it still reserves space for events in the scheduler event queue. The compiler also generates a pointer and a counter for each input port to keep track of the location and number of tokens in the queue. In the example in Figure 3.14, the galsC compiler generates an input port queue of length 64 called SenseActor$trigger$arg0[][16], as well as the variables SenseActor$trigger$head and SenseActor$trigger$count, for the definition of the trigger input port of SenseActor.

The galsC compiler also generates a put() and a get() function for each input port; in the example, it generates the functions SenseActor$trigger$put() and SenseActor$trigger$get() for the input port trigger of SenseActor. The put() function handles the actual copying of data to the input port queue, and it modifies SenseActor$trigger$head and SenseActor$trigger$count to keep track of the queue contents. The put() function also adds the port identifier to the scheduler event queue so that the scheduler activates the actor at a later time. If the queue is full when attempting to insert data into the queue, one can take one of several strategies. The galsC scheduler currently takes the simple approach of dropping events that occur when the queue is full. An alternate method is to generate a callback function which attempts to re-queue the event at a later time. Yet another approach would be to place a higher priority on more recent events by deleting the oldest event in the queue to make room for the new event.

For each link between a component method and an actor output port, the galsC compiler generates a mapping function, which in turn calls the linked input port put() function. The mapping function is called whenever a method of a component wishes to write to an output port. In the example in Figure 3.14, the galsC compiler generates a mapping function TimerActor$Trigger$trigger() for the trigger method of component Trigger in TimerActor; this mapping function in turn calls SenseActor$trigger$put() to insert data into the queue.

For each link between a component method and an actor input port, the galsC compiler also generates a mapping function, which calls the get() function of the linked input port. When the scheduler activates an actor via an input port, the system first calls this generated function to remove data from the input port queue and pass it to the component method. In the example, the system calls SenseActor$trigger$get() when the scheduler activates SenseActor to remove data queued in SenseActor$trigger$arg0[0]. The scheduler also modifies SenseActor$trigger$head and SenseActor$trigger$count before calling the trigger() method of the SenseToInt component with the newly removed data as the argument.

[16] TimerActor.Trigger.trigger() is a method with one argument.
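A minimal C sketch of what the generated queue functions could look like for this example is shown below. The circular-buffer layout, the scheduler-queue API, and the port identifier are assumptions for illustration; a real implementation would also mask interrupts around the queue updates:

    #include <stdint.h>
    #include <stdbool.h>

    #define TRIGGER_QUEUE_LEN 64

    uint16_t SenseActor$trigger$arg0[TRIGGER_QUEUE_LEN];
    uint8_t  SenseActor$trigger$head;   /* index of the oldest token     */
    uint8_t  SenseActor$trigger$count;  /* number of tokens in the queue */

    bool GALSC_eventqueue_insert(int port_id);  /* assumed scheduler API */
    #define TRIGGER_PORT_ID 0                   /* assumed identifier    */

    bool SenseActor$trigger$put(uint16_t arg0) {
        if (SenseActor$trigger$count >= TRIGGER_QUEUE_LEN) {
            return false;  /* queue full: current galsC policy drops the event */
        }
        uint8_t tail = (uint8_t)((SenseActor$trigger$head +
                                  SenseActor$trigger$count) % TRIGGER_QUEUE_LEN);
        SenseActor$trigger$arg0[tail] = arg0;
        SenseActor$trigger$count++;
        /* Record the event so the scheduler activates SenseActor later. */
        return GALSC_eventqueue_insert(TRIGGER_PORT_ID);
    }

    uint16_t SenseActor$trigger$get(void) {
        uint16_t arg0 = SenseActor$trigger$arg0[SenseActor$trigger$head];
        SenseActor$trigger$head =
            (uint8_t)((SenseActor$trigger$head + 1) % TRIGGER_QUEUE_LEN);
        SenseActor$trigger$count--;
        return arg0;
    }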

3.3.3 TinyGUYS

The compiler generates a pair of data structures and a pair of access functions for each TinyGUYS global variable declared in the application definition. The pair of data structures consists of a data storage location of the type specified in the actor definition that uses the global variable, along with a buffer for the storage location. The pair of access functions consists of a get() function that returns the value of the global variable, and a put() function that stores a new value for the variable in the variable's buffer. A generated flag indicates whether the scheduler needs to update the variables by copying data from their buffers.

For the example in Figure 3.14, the galsC compiler generates a global variable named GALSC_params.count, along with a buffer named GALSC_params_buffer.count. The code generator also creates the functions count$put() and count$get(). The mapping functions generated for the component connections to TinyGUYS parameters call these put() and get() functions.

3.3.4 System initialization and start of execution

The code generator creates a system-level initialization function called GALSC_sched_init(), which initializes the scheduler data structures. The code generator also creates an application start function called GALSC_sched_start(), which performs all of the runtime initialization. This function places initial tokens into the input port queues specified in the appstart section of the application definition. In the source code shown in Figure 3.14, SenseActor.trigger() is listed in the appstart section of the application definition; therefore, the GALSC_sched_start() function calls the SenseActor$trigger$put() function at the start of the system. The code generator also connects the StdControl interfaces listed in the actorControl section of each actor to the Main component used in TinyOS to initialize the system. The order of actors listed in the application definition determines the order in which the interfaces are connected.
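Concretely, the generated TinyGUYS code for the count parameter might look like the following C sketch; the struct layout and flag handling are assumptions, and only the names come from the text:

    #include <stdint.h>
    #include <stdbool.h>

    struct GALSC_params_t { uint16_t count; };

    struct GALSC_params_t GALSC_params;         /* actual storage, read by get() */
    struct GALSC_params_t GALSC_params_buffer;  /* holds delayed writes          */
    bool GALSC_params_modified;                 /* tells the scheduler to copy   */

    uint16_t count$get(void) {
        /* Reads never see the buffer, so the value is constant for the
           duration of an actor iteration. */
        return GALSC_params.count;
    }

    void count$put(uint16_t value) {
        GALSC_params_buffer.count = value;      /* delayed write; last one wins */
        GALSC_params_modified = true;
    }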

3.3.5 Scheduling

Execution of a TinyGALS system begins in the scheduler. There is a single scheduler in TinyGALS, which checks the global event queue for events. Figure 3.15 shows the TinyGALS scheduling algorithm. If the global event queue contains an event, the scheduler first copies buffered values into the actual storage for any modified TinyGUYS global variables. The scheduler then removes the token corresponding to the event from the appropriate actor input port and passes the value of the token to the component method linked to the input port. If the global event queue contains no events, the scheduler runs any posted TinyOS tasks. The algorithm loops until there are no events or TinyOS tasks, at which point the system goes to sleep.

    if there is an event in the global event queue then {
        if any TinyGUYS have been modified {
            Copy buffered values into variables.
        } end if
        Get token corresponding to event out of input port.
        Pass value to the method linked to the input port.
    } else if there is a TinyOS task then {
        Take task out of task queue.
        Run task.
    } end if

Figure 3.15: TinyGALS scheduling algorithm.

The TinyGALS scheduler is a two-level scheduler: TinyGALS actors run at the highest priority, and TinyOS tasks run at the lowest priority. Note that the TinyOS scheduler is included as a subset of the TinyGALS scheduler for backwards compatibility with TinyOS tasks. If TinyOS tasks are not used, the TinyGALS scheduler is about the same size as the original TinyOS scheduler.

Both triggered actors in TinyGALS and tasks in TinyOS provide a method for deferring computation. However, TinyOS tasks are not explicitly defined in the interface of a component, so it is difficult for a developer wiring off-the-shelf components together to predict what non-interrupt-driven computations will run in the system. TinyGALS actors, on the other hand, allow the developer to explicitly define "tasks" at the application level, which is a more natural way to write applications. The asynchronous and synchronous parts of the system are clearly separated to provide a well-defined model of computation.

In TinyOS, tasks must be short; lengthy operations should be spread across multiple tasks. Since there is no communication between tasks, the only way to share data is through the internal state of a component, and the user must write synchronization code to ensure that there are no race conditions when multiple threads of execution access this data. The TinyGALS programming model removes the need for TinyOS tasks, which leads to programs that are easier to debug. The globally asynchronous nature of TinyGALS provides a way for tasks to communicate, and the developer has no need to write synchronization code when using TinyGUYS to share data between tasks.
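The scheduling algorithm of Figure 3.15 can be rendered as a C loop roughly as follows; all of the helper names here are invented for illustration:

    #include <stdbool.h>

    extern bool GALSC_eventqueue_empty(void);
    extern bool GALSC_params_modified;
    extern void GALSC_copy_params(void);         /* commit TinyGUYS buffers     */
    extern void GALSC_dispatch_next_event(void); /* pop event, get token, call
                                                    the linked component method */
    extern bool TOS_run_next_task(void);         /* false if task queue empty   */
    extern void sleep_until_interrupt(void);

    void GALSC_scheduler_run(void) {
        for (;;) {
            if (!GALSC_eventqueue_empty()) {
                if (GALSC_params_modified) {
                    GALSC_copy_params();
                }
                GALSC_dispatch_next_event();     /* actors: highest priority */
            } else if (!TOS_run_next_task()) {   /* tasks: lowest priority   */
                sleep_until_interrupt();         /* no events, no tasks      */
            }
        }
    }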

3.3.6 Memory usage

TinyGALS provides an improved programming model in exchange for a minimal, application-dependent increase in code size for scheduling and communication between actors; the galsC compiler automatically generates this code. For a simple galsC photosensor application, the initialization and scheduling code is 662 bytes, compared to 564 bytes for the original nesC code. The get() and put() functions for a port with one argument of type uint8_t together use 208 bytes; the get() and put() functions for a parameter of type uint16_t use 30 bytes. The TinyGALS communication framework is very lightweight, since event queues are generated as application-specific data structures. The scheduler event queue size is equal to the sum of the user-allocated sizes for each port connection (which depends on the size of the data type). Thus, the memory usage of a TinyGALS application is determined mainly by the user-specified queue sizes and the total number of ports in the system.

3.4 Example

To illustrate the effectiveness of the galsC language, consider a classical sensor network application that detects and monitors point-source targets. A set of sensor nodes (motes) is deployed in a 2-D field. The goal of the sensor network is to detect moving objects modeled as point signal sources and to report the detections to a central base station, located at the lower left corner of the field, as shown in Figure 3.16. Note that the goal here is to illustrate the language, rather than to develop sophisticated algorithms to solve the problem optimally.

To simplify the discussion, assume that the motes are deployed on a perturbed grid, that the motes know their locations on the grid and the grid size, and that no mote has the global topology of the network. All motes run identical code, modulo their locations.

The application primarily consists of two tasks: (1) exchanging local sensor readings to determine the "leader" responsible for reporting a detection, and (2) multi-hop forwarding of the report messages to the base station. For simplicity, the leader election is achieved by having every mote periodically broadcast a packet containing the location of the mote and its sensor reading. These packets also serve as beacons to establish a multi-hop routing structure; the multi-hop routing is implemented as a routing tree rooted at the base station.

Figure 3.16: Sensor array for object detection and reporting. (The base station sits at the lower left corner of the grid; the dashed line illustrates the reachable nodes of one mote's wireless broadcast.)

Every message contains the hop count of the sender, which indicates the level of the sender in the routing tree. For example, the mote directly connected to the base station has hop count 0; whenever it broadcasts a message, every node that can overhear the message notes that it is probably one hop away from the base station. The reachable nodes of a wireless broadcast may have a complicated shape, as illustrated by the dashed line in Figure 3.16. To compensate for the unreliable and sometimes asymmetric wireless communication links, a mote finds out its parent in the tree by eavesdropping on other messages: it maintains a list of senders it has heard in the past T seconds and chooses the most reliable one (measured by, for example, a trade-off between low hop count and message repeatability) as its parent node. It then calculates its own hop count from its parent's hop count. (A sketch of this parent-selection rule follows below.)

Figure 3.17 shows a high-level view of the galsC implementation of the object detection application. Two types of event sources drive the execution of a mote: clock interrupts and received messages. Similar to the example from the beginning of the chapter in Figure 3.1, the TimerActor handles clock interrupts and updates the latest timer count in a parameter named timeCount. Every half second, TimerActor emits a token that triggers the SenseAndSend actor.
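A hedged C sketch of the parent-selection rule follows; the table layout and the particular scoring trade-off are illustrative assumptions, not the thesis's algorithm:

    #include <stdint.h>

    #define MAX_SENDERS 16

    struct sender_entry {
        uint16_t id;         /* sender node ID                         */
        uint8_t  hop_count;  /* sender's level in the routing tree     */
        uint8_t  heard;      /* messages heard from it in the last T s */
    };

    struct sender_entry senders[MAX_SENDERS];
    uint16_t parentNode;     /* TinyGUYS parameters in the real program */
    uint8_t  hopCount;

    void choose_parent(uint8_t num_senders) {
        int best = -1;
        int best_score = -1000;
        for (uint8_t i = 0; i < num_senders; i++) {
            /* Trade off message repeatability against low hop count
               (the weights are arbitrary for this sketch). */
            int score = 4 * senders[i].heard - senders[i].hop_count;
            if (score > best_score) {
                best_score = score;
                best = i;
            }
        }
        if (best >= 0) {
            parentNode = senders[best].id;
            hopCount   = (uint8_t)(senders[best].hop_count + 1);
        }
    }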

The MessageReceiver actor receives messages from the radio and chooses an action based on the message type:

- If the message is a local broadcast, the actor updates the neighborReadings table.

- Also for each broadcast message, the actor updates an internal routing table by looking at the repetition frequency of the sender node. (Note that this requires the timeCount value to determine the rate of the messages heard.) Whenever there is a change of the desired parent node, it updates the parentNode and hopCount parameters.

- If the message is a forwarding message, the actor sends the content of the message to the downstream MessageForwarder actor.

The SenseAndSend actor activates the ADC (analog-to-digital converter) to get a sensor reading. Once the sensor reading is available, the actor queues a local broadcast of the sensor reading. The actor also compares its own reading with the latest values from its neighbors.[17] If this mote has the highest sensor reading (i.e., it is closest to the signal source), SenseAndSend generates a report message and queues it with the MessageForwarder actor. Note that since only the latest neighbor sensor reading matters, the overriding semantics of TinyGUYS variables is a natural fit. The MessageForwarder actor forwards messages toward the base station, with its input queue merging the requests from SenseAndSend and MessageReceiver; it also takes the parentNode ID as part of its input token. Both the LocalBroadcast actor and the MessageForwarder actor send out packets with this mote's hopCount so that other motes can use it to build the multi-hop routing tree.

[17] Here, the neighbors are defined as the motes directly above, below, left, and right of this mote in the grid.

3.5 Summary

This chapter described the TinyGALS programming model for event-driven embedded systems such as sensor networks, and the galsC programming language that implements the programming model. At the local level, software components are linked via synchronous method calls to form actors. At the global level, actors communicate with each other asynchronously via message passing, which separates the flow of control between actors. A complementary model called TinyGUYS is a guarded yet synchronous model designed to allow thread-safe sharing of global state between actors via parameters, without explicitly passing messages. The globally asynchronous, locally synchronous model allows developers to use high-level constructs such as ports and parameters to create thread-safe, multitasking programs based on the actor model.

Figure 3.17: Top-level, per-node view of the object detection application. (The figure shows the TimerActor, SenseAndSend, LocalBroadcast, MessageReceiver, and MessageForwarder actors, connected through the timeCount, neighborReadings, hopCount, and parentNode parameters.)

The language and compiler are implemented for the Berkeley motes and extend TinyOS/nesC by providing a higher programming abstraction level than the TinyOS primitives. The galsC compiler automatically generates communication and scheduling code for programs specified in the galsC language, which allows developers to avoid writing error-prone task synchronization code. The galsC compiler extends the nesC compiler, which allows galsC to have traditional type checking, dead code elimination, and function inlining. This chapter also described a type system for checking connections across synchronous and asynchronous communication boundaries, as well as checking for possible race conditions. Having a well-structured concurrency model at the application level greatly reduces the risk of concurrency errors, such as deadlock and race conditions.

Chapter 4

Viptos

In The Mythical Man Month [17], Frederick P. Brooks, Jr., writes about requirements refinement and rapid prototyping:

    The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is so difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.

    For the truth is, the clients do not know what they want. They usually do not know what questions must be answered, and they almost never have thought of the problem in the detail that must be specified. Even the simple answer—"Make the new software system work like our old manual information-processing system"—is in fact too simple. Clients never want exactly that. Complex software systems are, moreover, things that act, that move, that work. The dynamics of that action are hard to imagine. So in planning any software activity, it is necessary to allow for an extensive iteration between the client and the designer as part of the system definition.

    Therefore the most important function that software builders do for their clients is the iterative extraction and refinement of the product requirements.

Brooks later quotes Harel, author of STATEMATE [42], in the twentieth-anniversary edition of The Mythical Man Month [17]. Harel argues strongly that much of the conceptual construct of software is inherently topological in nature and that these relationships have natural counterparts in spatial/graphical representations:

    Using appropriate visual formalisms can have a spectacular effect on engineers and programmers. Moreover, this effect is not limited to mere accidental issues; the quality and expedition of their very thinking was found to be improved. Successful system development in the future will revolve around

    visual representations. We will first conceptualize, using the "proper" entities and relationships, and then formulate and reformulate our conceptions as a series of increasingly more comprehensive models represented in an appropriate combination of visual languages. A combination it must be, since system models have several facets, each of which conjures up different kinds of mental images.

As discussed in Chapter 1, most existing tools for wireless sensor networks focus on either design, simulation, or deployment. None of these allow extensive iteration between design and implementation, especially in an intuitive, visual manner. This chapter presents Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, a graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for homogeneous TinyOS networks. TinyOS was chosen because of its large and active user base in the wireless sensor network community, and its event-driven execution model, which ties in well with an actor-oriented approach.

A TinyOS program consists of a graph of components that are written in an object-oriented style using nesC [32], an extension to the C programming language. Although a large community uses TinyOS in simulation to develop and test various algorithms and protocols, developers face some key limitations when using the nesC/TinyOS/TOSSIM programming toolsuite. As discussed in Chapter 1, a TinyOS program consists of a graph of mostly pre-existing nesC components, yet users must write their programs in a multi-file, text-based format, even though a graphical block diagram programming environment would be much more intuitive. TinyOS application developers can use TOSSIM [65], a TinyOS simulator for the PC that can execute nesC programs designed for a mote. TOSSIM contains a discrete-event simulation engine, which allows modeling of various hardware and other interrupt events. However, while TOSSIM can efficiently model large homogeneous networks where the same nesC code is run on every simulated node, it does not allow simulation of networks that contain different programs. Users may choose from a few built-in radio connectivity models in TOSSIM, but it is difficult to use other models.

To address these problems, consider VisualSense [8], a Ptolemy II-based graphical modeling and simulation framework for wireless sensor networks that supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. VisualSense, however, does not provide a mechanism for transitioning from a sensor network application developed within the framework to an implementation for real hardware

without rewriting the code from scratch for the target platform. VisualSense mainly provides an abstract, mathematically-based modeling environment, and node models must be created from scratch. Similar barriers to integrated design and deployment exist for other popular wireless sensor network development platforms.

Integrating TinyOS and VisualSense combines the best of both worlds. TinyOS provides a platform that works on real hardware with a library of components that implement low-level routines. VisualSense provides a graphical modeling environment that supports hierarchical, heterogeneous systems. The result, Viptos, allows networked embedded systems developers to construct block and arrow diagrams to create TinyOS programs from any standard library of TinyOS components written in nesC. Viptos automatically transforms the diagram into a nesC program that can be compiled and downloaded from within the graphical environment onto any TinyOS-supported target platform. Viptos also includes the full capabilities of VisualSense, including modeling of communication channels, networks, and non-TinyOS nodes. It presents a major improvement over VisualSense by allowing developers to refine high-level wireless sensor network simulations down to real-code simulation and deployment, and it adds much-needed capabilities to TOSSIM by allowing simulation of heterogeneous networks. Viptos provides a bridge between Ptolemy II and TOSSIM by providing interrupt-level simulation of actual TinyOS programs, with packet-level simulation of the network, while allowing the developer to use other models of computation available in Ptolemy II for modeling the physical environment and other parts of the system. This framework allows application developers to easily transition between high-level design and simulation of algorithms to low-level implementation, simulation, and deployment.

The work presented in this chapter has three main contributions. First, it addresses a need for a unified wireless sensor network development environment that allows abstract modeling and refinement to low-level simulation and deployment. Second, it provides insights into the integration of the semantics of two different simulation systems, with different representations of software components, programming languages, type systems, and schedulers. Third, it shows through evaluation that the implementation of the combined system is linearly scalable in the number of nodes and, even without aggressive performance tuning, can simulate moderately large, heterogeneous sensor networks effectively.

Section 4.1 describes the architecture of the integrated TinyOS and Ptolemy II toolchain and investigates the semantics of this interface. Section 4.2 evaluates the performance of Viptos. Section 4.3 summarizes this chapter. Related work is presented separately, in Chapter 6 (Section 6.2).


4.1 Design
Viptos provides an integrated toolchain for designing, simulating, and deploying sensor network applications by integrating the programming and execution models and the component libraries of two systems: Ptolemy II/VisualSense and TinyOS/TOSSIM. This section describes the architecture of this integrated system in detail, including the representation of nesC components, the transformation of the nesC components into this representation, the generation of deployment and simulation code for TinyOS programs developed in Viptos, and the simulation of sensor network models that include nodes running TinyOS.

4.1.1 Representation of nesC components

Let us review the basics of the nesC programming language used in TinyOS. A nesC component exposes a set of interfaces. An interface consists of a set of methods. A method is known as either a command or an event. A nesC component implements its provides methods and expects other components to implement its uses methods. A nesC component is either a configuration that contains a wiring of other components, or a module that contains an implementation of its interface methods. A TinyOS program consists of a set of nesC components, where the top-level file that describes the application is a nesC component that exposes no interface methods. Figure 4.1(a) shows a TinyOS program called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. SenseToLeds contains a wiring of the components Main, SenseToInt (whose source code is shown in Figure 4.1(b)), IntToLeds, TimerC, and DemoSensorC. These components are just a few of the nesC components that are available in the TinyOS component library. NesC interfaces can also be parameterized to provide multiple instances of the same interface in a single component. In Figure 4.1(a), the TimerC.Timer interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC. If another component connects to the TimerC.Timer interface, it connects to a different instance. Each timer can be initialized with different periods. In Ptolemy II, basic executable code blocks are called actors and may contain input and/or output ports. A port may be a simple port that allows only a single connection, or it may be a multiport that allows multiple connections. Fan-in to, or fan-out from, simple ports may be achieved by placing a relation in the path of the connection. A code block is stored in a class, and an actor is an instance of the class.


    configuration SenseToLeds {
    }
    implementation {
      components Main, SenseToInt, IntToLeds,
          TimerC, DemoSensorC as Sensor;
      Main.StdControl -> SenseToInt;
      Main.StdControl -> IntToLeds;
      SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
      SenseToInt.TimerControl -> TimerC;
      SenseToInt.ADC -> Sensor;
      SenseToInt.ADCControl -> Sensor;
      SenseToInt.IntOutput -> IntToLeds;
    }

    (a)

    module SenseToInt {
      provides {
        interface StdControl;
      }
      uses {
        interface Timer;
        interface StdControl as TimerControl;
        interface ADC;
        interface StdControl as ADCControl;
        interface IntOutput;
      }
    }
    implementation {
      ...
    }

    (b)

Figure 4.1: Sample nesC source code. (a) The SenseToLeds configuration. (b) The SenseToInt module.

Table 4.1: Representation scheme for nesC components in Viptos.

  NesC construct                           Ptolemy II construct   Ptolemy II graphical icon
  component                                class                  block
  uses interface                           output port            outward pointing triangle
  provides interface                       input port             inward pointing triangle
  non-parameterized interface              simple port            black triangle
  single-index parameterized interface[1]  multiport              white triangle
  fan-in or fan-out                        relation               black diamond

Viptos uses the representation scheme shown in Table 4.1 for the various parts of nesC components. Figure 4.2(c) shows a graphical representation in Viptos of the equivalent wiring diagram for the SenseToLeds configuration shown in Figure 4.1(a). Relations are represented by diamond-shaped icons. Note that the TimerC component in Figure 4.2(c) provides a parameterized interface, or input multiport, as indicated by the white triangle pointing into the block. Non-parameterized interfaces, or simple ports, are represented by black triangles. Viptos can serve as a program design and editing environment: users design programs by manipulating the Ptolemy II graphical icons on the screen, then generate code using the automatic process described later in Sections 4.1.3 and 4.1.4.
[1] Although multiple-index parameterized interfaces are allowed in nesC, Viptos does not support them, since they are not used in practice and do not appear in any existing components in the TinyOS component library.

4.1.2 Transformation of nesC components

As the implementation for representing nesC components, Viptos uses MoML (Modeling Markup Language) [61], an XML-based language used in Ptolemy II to specify interconnections of parameterized, hierarchical components. As discussed previously, a nesC component is either a subcomponent of an application if it exposes interface methods, or a top-level application if it does not. Viptos treats subcomponents and top-level applications differently when transforming nesC files into MoML. For nesC subcomponents, Viptos provides a tool called nc2moml; for nesC top-level applications, Viptos provides a tool called ncapp2moml. For both nc2moml and ncapp2moml, Viptos uses the NDReader Java class provided in the nesC 1.2 compiler distribution to parse nesC XML output and create nesC-specific data structures.

The nc2moml tool harvests TinyOS nesC component files and converts them into MoML class files. The initial version of nc2moml was a modification of the source code of the nesC 1.1 compiler. The current version of nc2moml uses the XML output feature of the nesC 1.2 compiler, which decouples nc2moml from nesC compiler version updates. Both versions of nc2moml generate MoML syntax that specifies the name of the component, as well as the name and input/output direction of each port, and whether they are multiports. Figure 4.3 shows the generated MoML code for the TimerC component referenced in Figure 4.1(a). Viptos uses the resulting MoML files to display TinyOS components as a library of graphical blocks. The user may drag and drop components from the library onto the workspace and create connections between component interfaces by clicking and dragging between ports. Figure 4.2(c) shows a TinyOS program created graphically using components from the converted library.

The ncapp2moml tool harvests TinyOS nesC application files and converts them into Viptos MoML model files. Unlike the TinyOS component files examined by nc2moml, TinyOS application files in nesC do not have interfaces. The ncapp2moml tool uses information about the nesC wiring graph and the referenced interfaces in the XML output from the nesC 1.2 compiler to generate MoML syntax that specifies a model containing the class corresponding to each nesC component used, the relations required at each port, and the links between the ports and relations such that the connections in the model correspond to the connections between interfaces in the nesC file. Figure 4.4 shows an example of a portion of the MoML code generated from the SenseToLeds.nc file shown in Figure 4.1(a). ncapp2moml can also automatically embed the converted TinyOS application into a template model containing a representation of the hardware interface of the node and, optionally, a default physical environment.

Figure 4.2: SenseToLeds application in Viptos. (Panels (a)-(f).)

    <?xml version="1.0"?>
    <!DOCTYPE class PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
        "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
    <class name="TimerC" extends="ptolemy.domains.ptinyos.lib.NCComponent">
      <property name="source" value="$CLASSPATH/tos/system/TimerC.nc" />
      <property name="_displayedName" class="..." value="TimerC" />
      <port name="StdControl" class="ptolemy.actor.IOPort">
        <property name="input" />
        <property name="_showName" class="..." />
      </port>
      <port name="Timer" class="ptolemy.actor.IOPort">
        <property name="input" />
        <property name="multiport" />
        <property name="_showName" class="..." />
      </port>
    </class>

Figure 4.3: Generated MoML by nc2moml for TimerC.nc. (The tools use JDOM 1.0 to construct and generate XML output. Viptos does not use XSLT (Extensible Stylesheet Language Transformations) because the generated MoML files are not complex.)

4.1.3 Generation of code for target deployment

When a user compiles a TinyOS program for an actual sensor node, the nesC compiler automatically searches the TinyOS component library paths for included components, including directories containing the components that encapsulate the hardware specific to the target platform, such as the clock, radio, and sensors. The nesC compiler generates a pre-processed C file, which it can send to a cross compiler for the target hardware.

Viptos can transform a model of a TinyOS program (as in Figure 4.2(c)) into a nesC file. Note that this is the opposite of ncapp2moml, which means that it is possible to convert back and forth between Viptos models and nesC files. Viptos performs this transformation by means of a director called PtinyOS Director, which controls code generation, simulation, and deployment to target hardware for a single node. The director also generates a makefile that includes all of the paths necessary for compilation to target hardware. A user can configure the PtinyOS Director (Figure 4.2(d)) to compile the generated nesC code to any target supported by the TinyOS make system, including cross-compilation to target hardware.

    <entity name="MicaCompositeActor"
        class="ptolemy.domains.ptinyos.lib.MicaCompositeActor">
      ...
      <entity name="DemoSensorC" class="tos.sensorboards.micasb.DemoSensorC" />
      <entity name="TimerC" class="tos.system.TimerC" />
      <entity name="Main" class="tos.system.Main" />
      <entity name="SenseToInt" class="tos.lib.Counters.SenseToInt" />
      <entity name="IntToLeds" class="tos.lib.Counters.IntToLeds" />
      <relation name="relation1" class="ptolemy.actor.IORelation" />
      <relation name="relation2" class="ptolemy.actor.IORelation" />
      <relation name="relation3" class="ptolemy.actor.IORelation" />
      <relation name="relation4" class="ptolemy.actor.IORelation" />
      <relation name="relation5" class="ptolemy.actor.IORelation" />
      <link port="Main.StdControl" relation="relation1" />
      <link port="SenseToInt.StdControl" relation="relation2" />
      <link relation1="relation2" relation2="relation1" />
      <link port="IntToLeds.StdControl" relation="relation3" />
      <link relation1="relation3" relation2="relation1" />
      <link port="SenseToInt.Timer" relation="relation4" />
      <link port="TimerC.Timer" relation="relation5" />
      <link relation1="relation5" relation2="relation4" />
      ...
    </entity>

Figure 4.4: Generated MoML by ncapp2moml for SenseToLeds.nc (excerpt).

The user can also download code to the target hardware from within the Viptos interface.

As a template for modeling a real wireless sensor node, Viptos provides a model of the hardware interface of a Mica mote with sensor board. This hardware representation includes ports for the ADC (analog-to-digital converter) channels connected to sensors that include a thermistor, photoresistor, temperature sensor, magnetometer, microphone, and accelerometer, as well as ports for the LEDs and radio communication. Figure 4.2(b) shows this graphically.

In addition to simulating wireless sensor node(s) running TinyOS, Viptos users can model and simulate the physical environment, radio channels, microservers, wired subsystems, and other wireless nodes, including non-TinyOS nodes. The user can take advantage of the hierarchical, heterogeneous nature of Ptolemy II to create detailed models of physical phenomena such as light, temperature, and sound, as well as models of entities such as buildings, servers, and other nodes. Developers may choose from diverse models of computation, such as continuous-time, dataflow, synchronous/reactive, time-triggered, and Kahn process networks. Users may also interface to live data through Ptolemy II library blocks such as those that interface with the microphone or the IP (Internet Protocol) network. A common actor-oriented programming and execution model unifies these modeling capabilities; the Viptos simulation environment thus provides more capabilities than TOSSIM alone. Figure 4.2(a) shows a basic example with models of a light source and a sensor node.

4.1.4 Generation of code for simulation

When a user compiles a TinyOS program for simulation with TOSSIM, the nesC compiler follows the procedure described in the previous section, but with the TinyOS scheduler and device drivers replaced with TOSSIM code. Thus, the TOSSIM executable image depends on the particular TinyOS program specified by the user.

Running the model in Figure 4.2(c) causes the PtinyOS Director to generate a nesC component file for SenseToLeds, equivalent to that shown in Figure 4.1(a). Running the model in Figure 4.2(b) causes the PtinyOS Director to generate a nesC file and a makefile. If the user specified the ptII simulation target as the target compilation platform, the PtinyOS Director then compiles the nesC file against a custom version of TOSSIM, which Viptos uses internally and which users can run externally for external simulation, to create a shared library. The PtinyOS Director also generates a Java wrapper to load the shared library

into Viptos so that the PtinyOS Director can run the shared library via JNI (Java Native Interface) method calls, which Viptos uses to allow calls between the C-based TOSSIM environment and the Java-based Ptolemy II environment.

4.1.5 Simulation of TinyOS in Viptos

This section explains how Viptos simulates TinyOS programs and discusses the integration of the TOSSIM and Ptolemy II frameworks in terms of scheduling, type system, radio and I/O, and support for multiple nodes and multi-hop routing.

Scheduling

Let us review the basics of the TinyOS scheduling model. In TinyOS, there is a single thread of control managed by the scheduler, which may be interrupted by hardware events. NesC component methods encapsulate hardware interrupt handlers. Methods may transfer the flow of control to another component by calling a uses method. Computation performed in a sequence of method calls must be short, or it may block the processing of other events. A long-running computation can be encapsulated in a task, which a method posts to the scheduler task queue. The TinyOS scheduler processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks are atomic with respect to other tasks and do not preempt other tasks. To avoid duplicate functionality, Viptos relies on the nesC compiler to do a complete analysis of the connected nesC interface methods at the TinyOS level to detect incorrect usage of commands or events marked with the async keyword, and hence possible race conditions.

TOSSIM is a discrete-event simulator for TinyOS. Its scheduler contains a task queue similar to the regular TinyOS scheduler, as well as an ordered event queue. An event in this queue has a time stamp implemented as a long long in C (a 64-bit integer on most systems). The smallest time resolution is equal to 1/(4 MHz), the original CPU clock period of the Rene/Mica motes. In TOSSIM, all components call the queue_insert_event() function to insert new events into the event queue. Upon initialization, TOSSIM inserts a boot-up event into the event queue. The TOSSIM scheduler begins its main loop by processing all tasks in the task queue in FIFO order. If there is an event in the event queue, the TOSSIM scheduler updates the simulated system time with the time stamp of the new event and then processes the event. The processing of an event may cause new tasks to be posted to the task queue and new events to be created with time stamps possibly equal to the current time stamp. Figure 4.5 summarizes the scheduling algorithm.

    while (true) {
        while there are TinyOS tasks {
            Process them.
        } end while
        if the event queue is not empty {
            Set the TOSSIM time to the time of next event.
            Handle the event.
        } end if
    } end while

Figure 4.5: TOSSIM scheduling algorithm.
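A C sketch of the main loop of Figure 4.5 follows; the queue functions and field names here are illustrative, not TOSSIM's actual internals:

    #include <stddef.h>
    #include <stdbool.h>

    typedef struct event {
        long long time;                      /* 1 tick = 1/(4 MHz)      */
        void (*handle)(struct event *self);  /* event-specific handler  */
    } event_t;

    extern bool     task_queue_pop_and_run(void); /* false if queue empty */
    extern event_t *event_queue_pop(void);        /* NULL if queue empty  */

    long long tos_state_time;  /* simulated system time */

    void tossim_main_loop(void) {
        for (;;) {
            while (task_queue_pop_and_run()) {
                /* process all posted TinyOS tasks in FIFO order */
            }
            event_t *e = event_queue_pop();
            if (e != NULL) {
                tos_state_time = e->time;  /* advance simulated time      */
                e->handle(e);              /* may post tasks / new events */
            }
        }
    }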

In Viptos, a node model contains an instance of the PtinyOS Director, which compiles and loads a custom copy of TOSSIM that simulates the code for a single node. Viptos uses a specialization of the discrete-event (DE) domain of Ptolemy II [15], created for modeling wireless systems in VisualSense. At the top level of a model, the specialized DE director may control one or more node models. The DE domain provides execution semantics where interactions between components occur via events with time stamps, and it uses a sophisticated calendar-queue scheduler to efficiently process events in chronological order. Formal semantics ensure determinate execution of deterministic models [59], although the DE domain also supports stochastic models for Monte Carlo simulation. The precision in the semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling frameworks.

Viptos controls the execution of TOSSIM by using customized TOSSIM scheduler and device driver functions that notify Viptos of all TOSSIM events, and it uses the same event time stamps as TOSSIM. Viptos uses a modified TOSSIM queue_insert_event() function that also makes a JNI call to insert an event with the TOSSIM time stamp into the event queue of the Ptolemy II discrete-event scheduler (DE director) that controls the PtinyOS Director.[2] At each event time stamp, Viptos calls the custom TOSSIM scheduler to process the event. The main loop updates the TOSSIM system time, processes an event in the TOSSIM event queue, and then processes all tasks in the task queue. If the TOSSIM event queue contains another event with the current TOSSIM system time, the scheduler processes that event along with any tasks that may have been generated.

[2] The JNI call uses fireAt() with the TOSSIM system time as the argument.
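A hedged sketch of what the modified insertion function could look like is shown below. The original function name, the cached JNI handles, and the Java-side fireAt wrapper are all assumptions; only the fireAt()-with-TOSSIM-time behavior comes from the text:

    #include <jni.h>

    /* Original TOSSIM insertion behavior (assumed name). */
    extern void tossim_queue_insert_event(void *queue, void *event,
                                          long long time);

    static JNIEnv   *env;       /* cached on library load (single thread assumed) */
    static jobject   director;  /* the PtinyOS Director's Java peer               */
    static jmethodID fireAtID;  /* Java-side wrapper around fireAt(long)          */

    void queue_insert_event(void *queue, void *event, long long time) {
        tossim_queue_insert_event(queue, event, time);
        /* Mirror the event into the Ptolemy II DE scheduler so both
           event queues stay synchronized on the same time stamps. */
        (*env)->CallVoidMethod(env, director, fireAtID, (jlong) time);
    }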

This last step is repeated until there are no other events with the current TOSSIM system time. Figure 4.6 summarizes the scheduling algorithm.

    do {
        if the event queue of this instance of TOSSIM is not empty {
            Set the TOSSIM time to the time of next event.
            Handle the event.
        } end if
        while there are TinyOS tasks {
            Process them.
        } end while
    } while (the event queue is not empty and the time of the next event
             is the same as the current TOSSIM time)

Figure 4.6: Viptos version of TOSSIM scheduling algorithm.

Note that the order in the main loop of the custom TOSSIM scheduler is opposite that of the original TOSSIM, which processes all tasks before updating the TOSSIM system time and processing an event in the TOSSIM event queue. This change is required in order to guarantee causal execution in Viptos, since tasks may generate events with the current TOSSIM time stamp. Otherwise, new events may have a time stamp that is before the current Ptolemy II system time.

Viptos supports models with dynamically changing interconnection topologies and treats changes in connectivity as mutations of the model structure. The software is carefully architected to support multithreaded access to this mutation capability; e.g., one thread can be executing a simulation of the model while another changes the structure of the model, by adding, deleting, or moving actors, or by changing the connectivity between actors. The results are predictable and consistent.

Type system

NesC components in TinyOS and TOSSIM use the type system provided by the C programming language. Communication between actors in Ptolemy II occurs through typed tokens, and Ptolemy II provides its own type system, in which actors, parameters, and ports may all impose constraints on types; a type resolution algorithm identifies the most specific types that satisfy all the constraints. Viptos composes these two type systems, the C type system and the Ptolemy II type system, so that static type analysis can be performed. A special Java base class created for Viptos, called TypeOpaqueCompositeActor, allows a Ptolemy II actor's ports to have types, but does not require that the actors inside use the Ptolemy II type system. This facilitates the embedding of a different type system within Ptolemy II. A Viptos submodel containing nesC components uses a subclass of this base class, called PtinyOSCompositeActor, so that the components can use the C type system.

Viptos performs automatic type conversion between the two type systems during simulation, using JNI functions in the custom copy of TOSSIM to automatically convert between the C types used in TOSSIM and the token types used in Ptolemy II. Since the data communicated between TOSSIM and Ptolemy II only involve a mote's hardware interface, Viptos can limit type conversion to the data types required by the ADC interface, the LEDs, and the packets sent and received over the radio. TinyOS and TOSSIM use arbitrary data types to represent values with different bit widths; the types provided by C, however, usually do not match the actual data types of the hardware interface. The ADC channels of a mote use 10-bit unsigned values, and TOSSIM represents an ADC value with an unsigned short integer masked for 10-bit usage. Sensor data modeled in Ptolemy II typically use tokens with values of type double. When TOSSIM requests an ADC value, Viptos automatically performs the lossy conversion from a double-valued token in Ptolemy II to a masked unsigned short integer value in TOSSIM. Although LED state is binary, TOSSIM represents an LED value with a char. When TOSSIM updates the state of the LEDs, Viptos automatically converts the char in TOSSIM into a boolean-valued token in Ptolemy II, which Viptos uses to change the animation state of the simulated LEDs. TinyOS packets are represented by a C data structure containing a char array. In order to maintain a standard endian format and enable easy parsing of packets, Viptos represents TinyOS packets using Ptolemy II string tokens, and it automatically converts between the TOSSIM char array representation and the Ptolemy II string token representation whenever a node transmits or receives a packet.

Radio and I/O

TOSSIM has built-in models for per-node ADC values and for radio connectivity between multiple nodes, as well as an interface for manually setting the per-node and per-link values and probabilities. In Viptos and VisualSense, the algorithm for determining radio connectivity is itself encapsulated in a component as a channel model, and hence can be developed by the model builder.

Multiple nodes and multi-hop routing TOSSIM simulates one or more nodes with the same TinyOS program by maintaining a copy of the state of each component for each simulated node. The nesC compiler has built-in support for generating arrays to store these copies, so that users do not need to modify the TinyOS program source code when compiling for TOSSIM. Viptos simultaneously simulates multiple nodes with possibly different programs by embedding multiple node models, with each TinyOS node containing a different PtinyOS Director, into the Wireless domain (the specialized DE domain). To prevent namespace collision between different simulated TinyOS programs, Viptos separately compiles and loads a shared library for each node. Viptos performs this by passing a unique name for each node to the nesC compiler, which the compiler then inserts into the TOSSIM source code by means of macros. Since Viptos models have

74 a global discrete-event scheduler, all nodes operate on the same time reference. Figure 4.7 shows an example model containing two nodes that communicate over a lossless radio channel (AtomicWirelessChannel) with full connectivity. The node on the left contains the CntToLedsAndRfm TinyOS program, which maintains a counter on a 4 Hz timer, displays the counter value on the LEDs, and sends it over the radio in a TinyOS packet. The node on the right contains the RfmToLeds TinyOS program, which listens for radio packets and displays any received counter values on the LEDs. A user can easily replace the radio channel model by deleting it and dragging in a different channel model from the menu in the left-hand pane. Though the application shown in Figure 4.7 uses broadcast, Viptos also supports multi-hop routing. Viptos accomplishes this by passing a node ID to the nesC compiler for each custom copy of TOSSIM. The modified TOSSIM code uses this node ID where it would normally be used in TinyOS, instead of using the default TOSSIM value of the index of the array containing the state of the nodes. Viptos allows users to indicate globally the name of the base station in the PtinyOS Director configuration screen, as shown in 4.2(d). Viptos includes a multi-hop routing demonstration that models a network with multiple TinyOS nodes running the Surge multi-hop routing protocol application, shown in Figure 4.8, where the base station is node 0.

4.2 Performance Evaluation
This section evaluates the scalability of Viptos in terms of execution time as the number of nodes increases. It separately evaluates the execution time of applications without radio usage, and the execution time of applications with radio usage, in order to determine the scalability of communication within the framework.

I collected timing information on an Intel Pentium M 760 processor (2.0 GHz, 2 MB L2 Cache, 533 MHz FSB) with 1024 MB of SDRAM, running Ubuntu 6.06 LTS (Dapper Drake) with Linux kernel 2.6.15-27-386. The tools I used included nesC 1.2.7a, gcc 3.4.3, TinyOS 1.x, and Sun Java VM 1.4.2_13-b06 with a heap size of 512 MB. In order to run large models, I increased the maximum number of open file descriptors allowed in the Bash shell from a default of 1024 to 20000 with the ulimit -n command. To eliminate timing variance due to random boot times, I set all nodes to boot at virtual time 0.0 seconds. I did not set the TOSSIM DBG environment variable, which affects which event debug messages get generated. I sent all printed debug messages (on stdout or stderr) from all copies of TOSSIM to /dev/null, to eliminate timing variance from printing to the screen under X11.


Figure 4.7: Send and receive application in Viptos.

Figure 4.8: Multi-hop routing in Viptos.

4.2.1 Comparison to TOSSIM

This section uses the SenseToLeds application to evaluate the scalability of Viptos as the number of nodes increases and to compare it to TOSSIM. For Viptos, I instrumented the PtinyOS Director with calls to the Java Date().getTime() and Runtime.getRuntime() methods to measure elapsed time while running the SenseToLeds application displayed in Figure 4.2. I eliminated the model of the environment in order to make a fair comparison to TOSSIM, since TOSSIM uses random ADC values by default. For modeling additional nodes, I copied and pasted existing nodes into the graph.

To measure the overhead due to integrating TOSSIM with Ptolemy II, I started timing right before Viptos invoked the internal copy of TOSSIM. For models with multiple nodes, I used the timing information from the last node to start, since nodes must wait until Viptos invokes all internal copies of TOSSIM before simulation can proceed because they all operate on the same time reference. I stopped timing at the beginning of wrapup(), to eliminate timing delay due to waiting for remaining threads to join, since thread joining is only necessary for running the model multiple times within a graphical environment. This section does not present timing overhead in Viptos for opening files; running the nesC, gcc, and Java compilers; or loading shared objects. For a given number of nodes, I collected multiple runs from the same instantiation of Viptos. I discarded the timing measurement for the first run in each experiment to eliminate timing delay due to loading of new Java classes, instantiation of Java objects, and caching. To reduce timing variance due to Java garbage collection, I instrumented Viptos to call System.gc() to perform garbage collection before starting the timing measurement. For models with multiple nodes, I saved the model, restarted Viptos, and took additional measurements.

For TOSSIM, I used the /usr/bin/time command to measure the execution time of the SenseToLeds application from the tinyos-1.x CVS tree. This does not include the overhead of running the nesC compiler and loading the TOSSIM shared object into memory. I discarded the timing measurement for the first run in each experiment to eliminate timing variance due to caching.
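The measurement procedure described above can be summarized by the following sketch, which is illustrative rather than the actual Viptos instrumentation:

    import java.util.Date;

    // Sketch of the elapsed-time measurement: force garbage collection,
    // record wall-clock time around the simulation run, and discard the
    // first (warm-up) run to avoid class-loading and caching effects. The
    // runSimulation hook stands in for invoking the internal copy of TOSSIM.
    public class TimingHarness {
        public static void main(String[] args) {
            int runs = 5;
            for (int run = 0; run < runs; run++) {
                System.gc();  // reduce variance due to Java garbage collection
                long start = new Date().getTime();
                runSimulation();
                long elapsedMillis = new Date().getTime() - start;
                if (run > 0) {  // discard the first measurement
                    System.out.println("run " + run + ": " + elapsedMillis + " ms");
                }
            }
        }

        private static void runSimulation() {
            // Stand-in for starting the simulation and timing until wrapup().
        }
    }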

Figure 4.9 shows the average execution time of the SenseToLeds application with a virtual run time of 300.0 seconds for an increasing number of nodes. Each simulation ran for 300.0 virtual seconds. The figure shows that Viptos has more overhead when compared to TOSSIM, but that both simulators scale linearly in the number of nodes. The overhead scales linearly with the number of nodes, and is on the order of a few seconds for small models, and several minutes for large models. Using a least squares linear regression, the results show that approximately 410 nodes can be simulated in 300.0 real seconds or less, which means that Viptos can simulate networks up to this size in real time. The exact number for any given application depends on the fidelity of simulation required and the complexity of the application. So, in exchange for slightly increased execution time, the user gains increased modeling and simulation capabilities and flexibility, and an interactive, graphical programming environment.

Figure 4.9: Execution time of the SenseToLeds application as a function of the number of nodes.

4.2.2 Radio

This section evaluates the scalability of models that use the radio using the same techniques described in the previous section. I created a model similar to that of the SendAndReceiveCnt application shown in Figure 4.7, with a lossless radio channel model with full connectivity, and a varying number of senders and receivers. Senders send packets at 4 Hz. To eliminate timing variance due to the graphical interface, I disabled animation of the LEDs. This analysis used a virtual run time of 120.0 seconds for all nodes; each simulation ran for 120.0 virtual seconds.

Figure 4.10 shows the average execution time for this model. The plot shows that the main determinant of execution time is the total number of nodes; the number of senders versus receivers has no noticeable effect. The execution time of the model increases linearly with the number of nodes, whether or not the radio is used.

Figure 4.10: Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers.

4.3 Summary

This chapter described an extensible actor-oriented software framework for modeling sensor networks. This tool, called Viptos, builds upon Ptolemy II and TinyOS, and provides an integrated graphical design and simulation environment. Viptos allows users to easily transition from high-level, hierarchical, heterogeneous modeling to low-level implementation, simulation, and deployment. This chapter showed that Viptos simulator performance is scalable: execution time scales linearly as a function of the number of nodes, and even without aggressive performance tuning, Viptos can simulate moderately large sensor networks effectively.


Chapter 5

Metaprogramming for Wireless Sensor Networks

In The Mythical Man Month [17], Frederick P. Brooks, Jr. asserts that “radically better software robustness and productivity are to be had only by moving up a level, and making programs by the composition of modules, or objects.” Chapter 3 explained how to build wireless sensor node programs from pre-existing TinyOS/nesC components, using an actor-oriented framework called galsC. Chapter 4 explained how to build wireless sensor network applications graphically from pre-existing, actor-oriented components and pre-existing TinyOS/nesC components, using an actor-oriented framework called Viptos. This chapter explains how to programmatically specify the wireless sensor network application itself through a variety of techniques that combine higher-order actors or components with generative programming and metaprogramming.

5.1 Generative Programming and Metaprogramming

Generative programming and metaprogramming are very similar concepts. According to Wikipedia [104], metaprogramming is the writing of computer programs that write or manipulate other programs (or themselves) as their data. Like Sztipanovits and Karsai [93], I use the term “generative programming” in a broad sense: systems or components of systems are automatically generated from a specification written in one or more textual or graphical domain-specific languages [26]. The terms “generative programming” and “metaprogramming” are often used interchangeably. In this dissertation, however, I differentiate between them—a metaprogram does not necessarily generate a new program or system, although it may accept other programs or systems as input.

The benefits of metaprogramming are best described by Brooks in The Mythical Man Month [17], where he discusses them in the context of using shrink-wrapped software packages as components:

The metaprogramming concept is not new, only resurgent and renamed. In the early 1960s, computer vendors and many big management information systems (MIS) shops had small groups of specialists who crafted whole application programming languages out of macros in assembly language... Now the chunks offered by the metaprogrammer are many times larger than those macros. In effect, the shrink-wrapped package provides a big module of function, with an elaborate but proper interface, and its internal conceptual structure does not have to be designed at all... Next-level application builders get richness of function, a shorter development time, a tested component, better documentation, and radically lower cost.

In Actor-Oriented Metaprogramming by Neuendorffer [74], actor-oriented models are viewed as descriptions of concurrent software architectures, i.e., structured metaprograms. Neuendorffer describes a metaprogramming system that transforms actor-oriented models in Ptolemy II into self-contained Java code, where partial evaluation is used as a way to generate more efficient programs. It is particularly effective in this use case, since a generic actor specification is specialized to a particular role in the model, and both the generic actor and specialized actor perform the same role and produce the same behavior. He argues that partial evaluation generally requires less explicit specification by a programmer than other metaprogramming techniques.
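As a toy illustration of generative programming in this sense, the following hypothetical Java metaprogram emits a MoML model containing n node entities from a one-number specification; the entity class name is used only for illustration.

    // A toy generative program: a Java metaprogram that writes a MoML model
    // containing n node entities. The specification (n) is the input, and a
    // program (the model) is the output.
    public class GenerateNodes {
        public static void main(String[] args) {
            int n = args.length > 0 ? Integer.parseInt(args[0]) : 3;
            StringBuilder moml = new StringBuilder();
            moml.append("<entity name=\"GeneratedModel\" ")
                .append("class=\"ptolemy.actor.TypedCompositeActor\">\n");
            for (int i = 1; i <= n; i++) {
                // Each iteration generates one node entity.
                moml.append("  <entity name=\"node").append(i)
                    .append("\" class=\"ptolemy.domains.wireless.demo.SmallWorld.RelayNode\"/>\n");
            }
            moml.append("</entity>\n");
            System.out.println(moml);
        }
    }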

5.2 Higher-order Functions, Actors, and Components

Related to metaprogramming is the concept shared by higher-order functions, higher-order actors, and higher-order components. A higher-order function takes a function argument or produces a function result. According to Reekie [82] (emphasis mine):

Higher-order functions are one of the more powerful features of functional programming languages, as they can be used to capture patterns of computation... [S]ome higher-order functions encapsulate common types of processes... Vector iterators are higher-order functions that apply a function across all elements of a vector; each of them captures a particular pattern of iteration... other higher-order functions capture common interconnection patterns, such as serial and parallel connection; yet others represent various linear, mesh, and tree-structured interconnection patterns... allowing the programmer to re-use these patterns without risk of error. This is one of the most persuasive arguments in favour of inclusion of higher-order functions in a programming language.

Reekie then explains how the concept of higher-order functions can be applied to actors [82]:

...the map actor [in Visual Haskell] takes a function as its parameter, which it applies to each element of its input channel. An actor of this kind mimics higher-order functions in functional languages, and could therefore be called a higher-order actor... If f is known, an efficient implementation of map(f) can be generated; if not, the system must support dynamic creation of functions since it will not have knowledge of f until run-time... Further work is required to explore forms of higher-order function mid-way between fully-static and fully-dynamic.

Lee and Parks [62] explain that “dataflow processes with state cover many of the commonly used higher-order functions in Haskell.” Reekie explains higher-order actors in Ptolemy Classic [82]:

Special blocks represent multiple invocations of a “replacement actor.” The Map actor, for example, is a generalised form of mapV [the vector iterator higher-order function]. At compile time, Map is replaced by the specified number of invocations of its replacement actor. Unlike mapV, Map can accept a replacement actor with arity > 1; in this case, the vector of input streams is divided into groups of the appropriate arity (and the number of invocations of the replacement actor reduced accordingly). The requirement that the number of invocations of an actor be known at compile-time ensures that static scheduling and code generation techniques will still be effective. Thus [the system] avoid[s] embedding unevaluated closures in streams... a code generator that produces a loop with an actor as its body, but with number of loop iterations unknown, could still execute very efficiently.

The most basic use of icons in [the Ptolemy Classic] visual syntax may therefore be viewed as implementing a small set of built-in higher-order functions. Higher-order actors gain their power from a key restriction: “the replacement actor is specified by a parameter, not by an input stream.”

Just as functions may serve as arguments to higher-order functions in functional programming languages, components may serve as parameters to higher-order components in composition languages, or languages for constructing networks of components [19]. Like higher-order functions in Visual Haskell [82], higher-order components are the most powerful feature of these types of languages, since they capture patterns of instantiation and interconnection between components. In a higher-order composition language such as Ptalon [19], the structure of a system is effectively parameterizable, and the parameters may be other systems. An interesting aspect of Ptalon is that it is, to quote Reekie, “mid-way between fully-static and fully-dynamic.” The next section investigates Ptalon in more detail.

5.3 Ptalon

Ptalon [18, 19] is a higher-order composition language for constructing higher-order components in Ptolemy II. Ptalon makes it easy to parameterize a component with the number and types of subcomponents that should be generated within the component, which minimizes the amount of input a system designer must provide to create a new system. Cataldo proved mathematically that higher-order components can lead to succinct syntactic descriptions of large systems, thus enabling a form of scalability in system design [19]. A developer can use Ptalon to easily generate sensor network applications and configurations. That is, the specified subcomponents may be different types of wireless sensor nodes running various individual programs, with varying values for the nodes' range and location parameters.

Using the definitions presented in Section 5.1, Ptalon is both a generative programming system and a metaprogramming language, since Ptalon automatically generates components from a specification written in a textual language, and Ptalon accepts components as arguments (inputs) to other components.

In Cataldo's original Ptalon implementation for Ptolemy II [19], a higher-order component is called a PtalonActor. Components passed as parameters to these higher-order components are atomic actors (i.e., they are specified in Java, the underlying programming language of Ptolemy II). This original implementation assumes that models containing higher-order components are static; arguments to a higher-order component cannot change once specified. I have improved the Ptolemy II implementation of Ptalon to allow composite actors in addition to atomic actors; that is, an application developer can specify an actor not only with a Java file, but also with an XML file containing an arbitrary collection of actors. I have also improved the Ptalon system for evaluating parameters such that the values of Ptalon parameters can be changed at run-time. The following sections present an example that uses the improved version of Ptalon and explain the implementation of the parameter reconfiguration capabilities.

5.3.1 A simple example

Ptalon code is written in a simple declarative style. Figure 5.1 shows a sample Ptalon file that specifies a component containing n components of type RelayNode. The value of the local variable i is set by the for loop, whereas the value of the parameter n is specified externally.

Ptalon uses the Ptolemy II expression language to evaluate all values within double brackets ([[ ]]).

To use Ptalon within Ptolemy II, a user places a new PtalonActor in a Ptolemy II graph. The PtalonActor parameter configuration window initially shows a blank value for the ptalonCodeLocation parameter. Once the user sets this parameter to reference a Ptalon file, the PtalonActor then reconfigures its parameter configuration window to show the parameters declared in the Ptalon file, for which the user can then give values. Figure 5.2(a) shows a Ptolemy II model containing an instance of a PtalonActor called MultipleNodesMoML that references the Ptalon file in Figure 5.1. A user specifies the value of n as a parameter of the PtalonActor, as shown in Figure 5.2(b). Figure 5.2(c) shows the components generated inside the PtalonActor. In this example, each of the components are nodes that are actually composite actors that contain other components.

The Ptalon compiler is implemented within Ptolemy II and is invoked as soon as the PtalonActor is set to reference a particular Ptalon file. The Ptalon compiler consists of multiple phases. In its initial phase, the Ptalon compiler parses the Ptalon file and creates an abstract syntax tree (AST). The first populator phase of the Ptalon compiler occurs next, in which the Ptalon compiler instantiates any entities that do not depend on unknown parameter values. The second populator phase of the Ptalon compiler begins only when the values of all parameters of the PtalonActor are known; the Ptalon compiler walks the AST and creates the remaining entities. The Ptalon compiler creates all entities as part of the PtalonActor submodel. Note that since the PtalonActor automatically populates itself with actors, a PtalonActor only needs to save its parameter values, and not its internal configuration. Figure 5.3 shows the XML code for the model shown in Figure 5.2(a).

I have implemented Ptalon-based versions of the SenseToLeds and SendAndReceiveCnt examples presented in Chapter 4, which allow a user to change a parameter to specify different numbers of TinyOS nodes, as shown in Figure 5.2(d). Ptalon can also be integrated with Viptos (see Chapter 4). This allows simulation of abstract and concrete node and environment models with various parameters. An application developer can start with regular components that use pre-existing Ptolemy II domains, then refine and replace these components with a real code implementation that uses TinyOS, leading to eventual validation against a real-world implementation.

    MultipleNodesMoML is {
        actor node = ptolemy.domains.wireless.demo.SmallWorld.RelayNode;
        parameter n;
        for i initially [[ 1 ]] [[ i <= n ]] {
            node(range := [[ 40 + 10 * i ]],
                 _location := [[ [100*i, 100*i] ]]);
        } next [[ i + 1 ]]
    }

Figure 5.1: MultipleNodesMoML.ptln

5.3.2 Reconfiguration in Ptalon

In his dissertation [82], Reekie discusses actor parameters:

Execution of an actor proceeds in two distinct phases: i) instantiation of the actor with its parameters, and ii) execution of the actor on its stream arguments... Lee stresses the difference between parameter arguments and stream arguments in Ptolemy: parameters are evaluated during an initialisation phase; streams are evaluated during the main execution phase. Thus, the separation between parameters and streams—and between compile-time and run-time values—is both clear and compulsory... code generation can take place with the parameters known, but with the stream data unknown.

What happens if a so-called “compile-time” parameter value changes at run-time? If the value of a PtalonActor parameter changes, it may cause the internal configuration of the PtalonActor to change, which necessitates a reconfiguration of the PtalonActor. The value of a PtalonActor parameter may be an actual token that has a type corresponding to one in the Ptolemy II token type lattice, or it may be a reference to a model parameter. For the latter option, a change in the value of the referenced model parameter results in a change to the actual value of the PtalonActor parameter.

The Ptalon compiler implementation in Ptolemy II uses two steps to handle any change to the value of a PtalonActor parameter. First, the compiler deletes the internal representation of all entities and relations in the PtalonActor, while preserving existing ports. Second, the Ptalon compiler restarts itself in its initial phase (as described in the previous section), using the newly assigned value of the parameter, as well as existing values for any other parameters. The Ptalon compiler then proceeds through the population phases, reusing existing ports whenever possible during the populator phase.

Figure 5.2: PtalonActor in Ptolemy II.

Neuendorffer [74] enumerated the ways in which reconfiguration of model parameters may occur in Ptolemy II, which I summarize and extend here:

• Interactive editing. A user may change parameters in Ptolemy II through interactive editing of the model, usually via a dialog box associated with the model, or actor of interest.

• Reconfiguration port. Also known as a PortParameter, a reconfiguration port is a special form of dataflow input port. Ptolemy II binds each reconfiguration port to a parameter of the port's actor, and tokens received through the port reconfigure the parameter.

• Reconfiguration actor. The SetVariable actor is a special actor that has a single input port. Ptolemy II associates this actor with a parameter of the containing model. The actor consumes a single token during each firing and reconfigures the associated parameter during the quiescent point after the firing.

• Modal model. A modal model is an extended version of a finite state machine, in which each state of the finite state machine contains a dataflow model, or refinement, that is active in that particular state. Essentially, the active dataflow model replaces the finite state machine until the state machine makes a state transition. Finite state machine transitions can reconfigure parameters of the target state's refinement when the transition is taken. The Ptolemy II user manual [16] contains more details on constructing modal models.

Another way reconfiguration of model parameters may occur in Ptolemy II is through the use of higher-order actors (I do not include PtalonActor as part of this discussion):

• ModelReference and VisualModelReference. The ModelReference and VisualModelReference actors are both atomic actors that can execute a model specified by a file or URL (Uniform Resource Locator). A developer can use these actors to define an actor whose firing behavior is given by a complete execution of another model. The developer can add input ports to an instance of this actor; if the actor has input ports, then on each firing, the actor reads an input token from the input port, if there is one, and uses it to set the value of a top-level parameter in the referenced model that has the same name as the port, before executing the referenced model.

• RunCompositeActor. This actor is almost the same as ModelReference and VisualModelReference, only it is a composite actor instead of an atomic actor. The actor executes the contained model completely, as if it were a top-level model, on each firing. The actor also uses tokens at an input port to set the value of a top-level parameter with the same name in the contained model, if there is one.

• ModelDisplay. This actor opens a window to display the specified model. The model developer can provide inputs that are MoML strings that the actor applies to the specified model. The developer can use this, for example, to create animations by changing parameter values.

5.4 Specifying WSN Applications Programmatically

In this section, I present methods for specifying wireless sensor network applications programmatically by combining in various ways higher-order actors in Ptolemy II with an improved version of VisualSense/Viptos, and I explain when a particular method might be most applicable.

5.4.1 Motivation

“On the Credibility of Manet Simulations” [4], an article by Andel and Yasinsac, summarizes various articles that question the credibility of published simulation results in the mobile ad hoc network (MANET) research community. Problems cited include lack of independent repeatability, improper/nonexistent validation, use of inappropriate radio models, lack of statistical validity, unrealistic application traffic, improper precision, and lack of sensitivity analysis. Andel and Yasinsac's proposed solution to the first problem, lack of independent repeatability, is to properly document all settings. Since publication venues have limited space, they suggest including only major settings and/or providing all settings as external references to research web pages, which should include freely available code/models and applicable data sets.

Ptolemy II is well-suited to address this problem, as well as many of the other problems cited. It is an open-source tool whose source code is freely distributable and modifiable. Ptolemy II models are simple XML files that are easy to publish on the web, and the Ptolemy II version number with which they are built is automatically stored in the XML file. The models described in the following sections show that with the techniques introduced in this dissertation, wireless sensor network simulations are easily repeatable. I also discuss how these techniques can address the other problems cited.

5.4.2 Small World

The SmallWorld example shown in Figure 5.4 illustrates a phenomenon where ad hoc networks achieve connectivity with fewer hops on average with a network that is less reliable but where ranges are longer, than with a network that is more reliable but ranges are shorter. Franceschetti and Meester showed that on average, fewer hops are needed when the range increases [31]. Figure 5.4 shows the SmallWorld model as originally implemented in VisualSense. When the user runs the model, an Initiator component, shown in Figure 5.4(b), broadcasts a message. Each node in the sensor network rebroadcasts the first message it receives. All nodes (not including the Initiator) have the same implementation as shown in Figure 5.2(d). A node turns red if it receives the message in one hop; it turns green if it receives it in more than one hop. It stays white if it never receives the message. The model plots a histogram of the number of nodes that receive the message after one hop, after two hops, etc. If the user increases the range above sureRange, then the probability of delivery drops according to the formula shown, which keeps the expected number of recipients roughly constant. The NodeRandomizer actor randomizes the locations of the nodes at the beginning of each run.

5.4.3 Parameter Sweep

I now introduce two different models, both of which perform the same set of experiments, where a slightly modified version of the SmallWorld model (shown in Figure 5.5) is run as a submodel with the same sets of changing parameter values. I will call this version of the SmallWorld model the ParameterSweep version.

There are only a few differences between this modified version and the original version: (1) the modified version stores the histogram data in a file whose name is specified by a new parameter; (2) the modified version has an additional parameter created to allow node location randomization to be controlled externally; and (3) the modified version uses non-zero random seeds so that each run is repeatable. The initial location of the nodes (not including the Initiator) is not significant. This model allows application developers to create simulation scenarios that are independently repeatable, and to validate their algorithms by quickly creating new simulation scenarios via a few simple parameter value changes, e.g., for the purposes of sensitivity analysis [83].

Modal model

Figure 5.6(a) shows a modal model in which the main state (named state and highlighted in green) contains a refinement (Figure 5.6(b)). This refinement is an SDF (synchronous dataflow) model containing a VisualModelReference actor with three different ports, one for each of the parameters to be changed (range, resetOnEachRun, and fileName) in the ParameterSweep version of SmallWorld (Figure 5.5). The modal model sweeps over the parameter values such that runs_i number of different random node layouts are simulated, and for each node layout, runs_j number of different ranges are simulated. That is, the modal model simulates the ParameterSweep version of the SmallWorld model with runs_i different node layouts and runs_j different ranges. The transitions in the modal model are used to change the counters i and j, and set the parameter values for each run. For each run, the model creates an output file with the stored histogram data.

Dataflow

Figure 5.7 shows an SDF model which accomplishes the same objectives as the modal model in Figure 5.6. The SDF model uses dataflow actors that send the simulation parameters directly to a VisualModelReference actor with the same ports as those in the modal model. Just as in the modal model, the VisualModelReference in the SDF model references the ParameterSweep version of SmallWorld (Figure 5.5). Both top-level models store the settings used as part of the model itself, and no additional configuration files are needed. Notice that for this particular application, the values of the parameters are more readily apparent in the SDF model than in the modal model. For simulations where the parameter values are known a priori, a dataflow language provides a more intuitive interface for specifying these settings.
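Under the hood, both sweeps amount to repeatedly executing the referenced model with new parameter values. The following standalone sketch, assuming Ptolemy II's MoMLParser, Manager, and Parameter classes, shows the shape of such a sweep; the driver itself is hypothetical and merely mirrors what the modal and SDF models express graphically.

    import java.io.File;
    import ptolemy.actor.CompositeActor;
    import ptolemy.actor.Manager;
    import ptolemy.data.expr.Parameter;
    import ptolemy.moml.MoMLParser;

    // Sketch: load the ParameterSweep version of SmallWorld once, then rerun
    // it for each combination of node layout and range.
    public class SweepDriver {
        public static void main(String[] args) throws Exception {
            MoMLParser parser = new MoMLParser();
            CompositeActor model = (CompositeActor) parser.parse(
                    null, new File("SmallWorld.xml").toURI().toURL());
            Manager manager = new Manager(model.workspace(), "sweep");
            model.setManager(manager);

            int runsI = 5, runsJ = 4;
            for (int i = 0; i < runsI; i++) {        // different random node layouts
                for (int j = 0; j < runsJ; j++) {    // different ranges per layout
                    ((Parameter) model.getAttribute("range"))
                            .setExpression(Double.toString(40.0 + 10.0 * j));
                    ((Parameter) model.getAttribute("resetOnEachRun"))
                            .setExpression(j == 0 ? "true" : "false");
                    ((Parameter) model.getAttribute("fileName"))
                            .setExpression("\"histogram_" + i + "_" + j + ".txt\"");
                    manager.execute();               // one complete simulation run
                }
            }
        }
    }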

Figure 5.4: Small World in Ptolemy II.

Figure 5.5: ParameterSweep version of Small World in Ptolemy II.

94 a b c Figure 5. .6: Modal model for changing parameter values of Small World model in Ptolemy II.

However, if the user wants to create simulation scenarios with dynamically derived parameter values, a modal model might be a more appropriate choice. For example, the user can feed output from the SmallWorld model back into the modal model, which can then automatically select new parameter settings on the basis of noise level or network connectivity. In other words, the application developer can choose the most appropriate domain-specific language to specify the metaprogram.

5.4.4 Higher-order actors

Since most of the nodes in the SmallWorld application have the same implementation, one might also consider using a higher-order actor to specify the nodes. This section considers two different methods, the first using a MultiInstanceComposite actor, and the second using a PtalonActor.

MultiInstanceComposite

In his dissertation [74], Neuendorffer introduces higher-order components (actors) (emphasis mine):

In many cases is it useful to build parameterized structures in actor-oriented models... Such programmatically generated structures are called higher-order components to emphasize their similarity to higher-order functions in functional languages... A parameter which is used to determine the structure of a higher-order component is a structural parameter.

The MultiInstanceComposite actor in Ptolemy II is one example of a simple higher-order component. Just before a model is executed, this actor replicates itself a number of times determined by a structural parameter. This actor is often used in situations where a model contains repetitive structures that are awkward to build by hand, or when the number of repetitions is specified by a parameter.

A similar feature also existed in Ptolemy Classic. As described by Lee and Parks [62], Ptolemy Classic took advantage of higher-order functions by allowing a user to specify the number of instances of an actor by modifying the parameters of a bus icon (a line connecting the boxes representing the actors). That is, a user could graphically specify the number of instances of an actor in Ptolemy Classic, either by implication (by graphically specifying the number of instances of upstream actors), or directly (by graphically instantiating the desired number). Ptolemy Classic also allowed the user to visually represent the replacement function in a way that is conceptually similar to using a box inside of the icon for a higher-order function. MultiInstanceComposite, in contrast, relies on its structural parameter, rather than the visual representation.

Figure 5.8 shows the SmallWorld application in Ptolemy II, where a MultiInstanceComposite creates all of the nodes, each of which has an implementation identical to that in Figure 5.2(d).
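The essence of such a structural parameter can be sketched as follows, assuming Ptolemy II's clone and setContainer facilities; this hypothetical helper is a simplification of what MultiInstanceComposite automates just before a model executes.

    import ptolemy.kernel.ComponentEntity;
    import ptolemy.kernel.CompositeEntity;

    // Sketch: replicate a prototype entity n times inside a container, where
    // n plays the role of a structural parameter.
    public class Replicator {
        public static void replicate(ComponentEntity prototype,
                CompositeEntity container, int n) throws Exception {
            for (int i = 0; i < n; i++) {
                ComponentEntity copy =
                        (ComponentEntity) prototype.clone(container.workspace());
                copy.setName(prototype.getName() + "_" + i);
                copy.setContainer(container);  // instantiate the copy in the model
            }
        }
    }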

Figure 5.7: SDF model for changing parameter values of Small World model in Ptolemy II.

Figure 5.8: ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.

MultiInstanceComposite generates the nodes in its container, which means that the location parameter of the generated nodes are easily accessible and remain in reference to the Initiator actor in the container. No other changes to the model are required. The model shown in Figure 5.8 has the same behavior as that in Figure 5.5.

Ptalon

Ptalon is a natural fit for specifying model parameters programmatically, since it can specify the structure of the model itself, not just values of actor parameters. One can use Ptalon to generate the SmallWorld application shown in Figure 5.5. Figure 5.9 shows the required Ptalon code. The first section of code declares all of the actor types needed in the model. The second section declares four parameters: channelName, reportChannelName, range, and n. The third section declares

an output port named output. The remainder of the file instantiates the components: the two wireless channels, the NodeRandomizer, the Initiator, the WirelessToWired converter, and the nodes. The parameter n specifies the number of nodes to create. The parameter range specifies the radio range of the nodes. In VisualSense and Viptos, wireless ports are parameterized by the name of the wireless channel on which they receive or transmit. So, the Ptalon file shown in Figure 5.9 uses the parameters channelName and reportChannelName to specify concrete names for the channels. The PtalonActor also contains an output port, through which the actor transmits the data to be recorded.

Figure 5.10(a) shows a Ptolemy II model containing a PtalonActor named SmallWorld that refers to the Ptalon code in Figure 5.9. Figure 5.10(b) shows the values of the PtalonActor parameters. Note that all of the parameters, except the number of nodes, refer to model parameters with the same name. Note that this model is similar to the model shown in Figure 5.5, except that Ptalon generates the nodes, as shown in Figure 5.10(c); Ptalon automatically generates names of actor instances. Also note that the resetOnEachRun parameter in Figure 5.9 is not explicitly declared as a Ptalon parameter. Because parameters in Ptolemy II use a form of lazy evaluation (changes to parameter values may not be propagated until they are used at run time), the user must create a Ptalon parameter as a mirror of any Ptolemy parameters that should be evaluated before run time. The Ptalon model will still run correctly, even if the range parameter is not declared as a Ptalon parameter. However, I explicitly declare these variables because they are useful for visualization, e.g., to verify visually that the ranges are correct, before running the model.

One advantage of using higher-order actors such as MultiInstanceComposite and PtalonActor is that they enable run-time reconfiguration (e.g., the number of nodes in the model can be controlled programmatically). Another advantage of higher-order actors is that they require fewer bytes to express the model. Figure 5.11 shows an excerpt of the MoML code for the model in Figure 5.10.

5.4.5 Discussion

A user can control both the MultiInstanceComposite (Figure 5.8) and PtalonActor (Figure 5.10) versions of SmallWorld with either the modal model or the SDF model discussed previously, with no modifications required. Table 5.1 shows a comparison of the number of bytes needed by the three different ways presented for implementing the SmallWorld application. For all files, I removed all extra white space (tabs, spaces, and extra linefeeds), in addition to annotations and comments that were not constant across all models.

    SmallWorld is {
        /* Actor types */
        actor node = ptolemy.actor.ptalon.demo.SmallWorld.RelayNode;
        actor channel = ptolemy.domains.wireless.lib.LimitedRangeChannel;
        actor initiator = ptolemy.actor.ptalon.demo.SmallWorld.Initiator;
        actor nodeRandomizer = ptolemy.domains.wireless.lib.NodeRandomizer;
        actor wirelessToWired = ptolemy.domains.wireless.lib.WirelessToWired;

        /* Ptalon parameters */
        parameter channelName;
        parameter reportChannelName;
        parameter range;
        parameter n;

        /* Port declaration */
        outport output;

        /* Instantiation of components */
        channel(defaultProperties := [[ {range=range} ]],
            lossProbability := [[ 1.0 - probability ]],
            seed := [[ 1L ]],
            name := [[ channelName ]]);
        channel(seed := [[ 1L ]],
            name := [[ reportChannelName ]]);
        nodeRandomizer(maxPrecision := [[ 3 ]],
            randomizeInInitialize := [[ true ]],
            resetOnEachRun := [[ resetOnEachRun ]],
            range := [[ {{100.0, 500.0}, {200.0, 400.0}} ]],
            seed := [[ 1L ]],
            _location := [[ [10.0, 10.0] ]]);
        initiator(_location := [[ [230.0, 345.0] ]]);
        wirelessToWired(inputChannelName := [[ reportChannelName ]],
            payload := output,
            _location := [[ [0.0, 0.0] ]]);
        for i initially [[ 1 ]] [[ i <= n ]] {
            node(nodePropagationDelay := [[ nodePropagationDelay ]],
                range := [[ range ]],
                randomize := [[ randomize ]],
                haloColor := [[ {0.0, 0.0, 1.0, probability*visualDensity} ]],
                _location := [[ [100.0 * i, 100.0 * i] ]]);
        } next [[ i + 1 ]]
    }

Figure 5.9: Ptalon code for SmallWorld (SmallWorld.ptln).

Figure 5.10: Ptalon version of Small World in Ptolemy II.

    ...
    <entity name="SmallWorld" class="ptolemy.actor.ptalon.PtalonActor">
        <property name="_location" class="ptolemy.kernel.util.Location"
                  value="[240.0, 210.0]">
        </property>
        <configure>
            <ptalon file="ptolemy.actor.ptalon.demo.SmallWorld.SmallWorld">
                <ptalonExpressionParameter name="n" value="49"/>
                <ptalonExpressionParameter name="channelName" value="channelName"/>
                <ptalonExpressionParameter name="reportChannelName" value="reportChannelName"/>
                <ptalonExpressionParameter name="range" value="range"/>
            </ptalon>
        </configure>
    </entity>
    ...

Figure 5.11: Excerpt of MoML code for Ptalon version of Small World.

The first column is the ParameterSweep version of SmallWorld as shown in Figure 5.5. The second column is the MultiInstanceComposite implementation, as shown in Figure 5.8. The third column is the Ptalon implementation as shown in Figure 5.10. Increasing the number of nodes in the implementations in the MultiInstanceComposite and PtalonActor versions requires no extra bytes (except if the number of digits in the number of nodes exceeds two, in which case there is an extra byte for each digit). The difference in the number of bytes between the ParameterSweep and MultiInstanceComposite versions would be even greater if there were more nodes, however, since the ParameterSweep version requires 705 bytes for each additional node to store the parameter values and the instance declaration.

Note that in all of the non-Ptalon versions of the SmallWorld application (Figures 5.4, 5.5, and 5.8), the code for the Initiator actor is stored in the model itself. For the Ptalon model, the MoML code for the Initiator actor must be stored externally so that the actor can be referenced in the Ptalon file. Additionally, the code for the RelayNode actor in the ParameterSweep version is stored externally, whereas the code for the RelayNode actor in the MultiInstanceComposite version must be stored in the MultiInstanceComposite itself. Even though the RelayNode code is stored external to the ParameterSweep version, the parameter values for each node must still be stored internally.

The main difference between using MultiInstanceComposite and PtalonActor for this particular application is that one cannot visualize the generated components using the MultiInstanceComposite actor. Also, the MultiInstanceComposite actor must be opaque, i.e., have a director, so that its Actor interface methods (preinitialize(), ..., wrapup()) are invoked during model initialization.

Table 5.1: Comparison of number of bytes between different implementations of SmallWorld.

    File              ParameterSweep   with MultiInstanceComposite   with Ptalon
    SmallWorld.xml    55320            48212                         16882
    RelayNode.xml     28228            --                            28314
    Initiator.xml     --               --                            5292
    SmallWorld.ptln   --               --                            1151
    Total             83548            48212                         51639

In general, flexibility in specifying simulation parameters is extremely important. In Leung's survey of the 70 full-length papers from prominent wireless sensor networking conferences, IPSN/SPOTS 2007¹ and SenSys 2006², over one hundred different simulation parameters were used, with very few repeated counts [63]. These results show that parameter choices are largely application-dependent, and that there are few standard benchmarks.

Ptalon has the advantage over the other methods in that it is easier to express model structure. For example, in order to test the behavior of a routing algorithm under different channel assumptions, the application developer can modify the Ptalon code to cycle through a number of different types of radio channel models. This is not possible with MultiInstanceComposite alone (one would need to use a Case actor or other similar actor to achieve the same results). Ptalon also allows the user to specify heterogeneous networks more easily. With MultiInstanceComposite, a user would need to create a new instance of the actor for each type of duplicated node in the network; Ptalon makes no such constraints on the PtalonActor component.

5.5 Summary

In this chapter, I demonstrated how higher-order components provide a powerful way to build wireless sensor network applications. Combined with generative programming and metaprogramming techniques, sensor network developers can easily specify experimental simulation setups programmatically using a variety of techniques, including modal models, dataflow, and higher-order actors. Developers can choose the method that best fits the particular application.

¹ International Conference on Information Processing in Sensor Networks and Track on Sensor Platforms, Tools and Design Methods.
² Conference on Embedded Networked Sensor Systems.

They can then refine these simulations to real-world implementations using a technology such as Viptos (presented in Chapter 4).



Chapter 6

Related Work
This chapter details work related to TinyGALS and galsC, as well as work related to Viptos and the metaprogramming techniques for wireless sensor networks discussed in earlier chapters.

6.1 TinyGALS and galsC
This section summarizes the features of several related operating systems and software architectures, and discusses how they relate to TinyGALS and galsC. Herlihy's method for building non-blocking operations, as well as the message passing interface (MPI), offer concurrency and communication alternatives to those used in TinyGALS. The SVAR (state variable) mechanism of PBOs (port-based objects) and FPBOs (featherweight port-based objects) influenced the design of TinyGUYS. The Click Modular Router project has interesting parallels to the TinyGALS model of computation, as do Ptolemy II, the CI (component interaction) domain, and the TM (Timed Multitasking) domain.

6.1.1 Non-blocking

Herlihy proposes a methodology in [45] for constructing non-blocking and wait-free implementations of concurrent objects. Programmers implement data objects as stylized sequential programs, with no explicit synchronization. Each sequential operation is automatically transformed into a non-blocking or wait-free operation via a collection of synchronization and memory management techniques. However, operations may not have any side-effects other than modifying the memory block occupied by the object. Unlike TinyGALS, this technique does not address the need

for inter-object communication when composing components. Additionally, this methodology requires additional copying of memory, which may become expensive for large objects.
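The copy-based flavor of Herlihy's construction, and the memory-copying cost just mentioned, can be illustrated with a minimal Java sketch (this is an analogy, not Herlihy's actual algorithm or TinyGALS code):

    import java.util.concurrent.atomic.AtomicReference;

    // Each operation copies the current version of the object, modifies the
    // copy, and installs it with compare-and-set, retrying if another thread
    // won the race. No thread ever blocks, but every update pays for a copy.
    final class NonBlockingCounter {
        private static final class State {
            final int count;
            State(int count) { this.count = count; }
        }

        private final AtomicReference<State> current =
                new AtomicReference<>(new State(0));

        int increment() {
            while (true) {
                State old = current.get();
                State updated = new State(old.count + 1);  // modified copy
                if (current.compareAndSet(old, updated)) {
                    return updated.count;                  // swap succeeded
                }
                // Another operation intervened; retry with the new version.
            }
        }
    }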

6.1.2 MPI

MPI (Message Passing Interface) is the de facto standard library interface for writing message passing programs on high-performance parallel computing platforms [104]. MPI provides virtual topology, synchronization and communication functionality between a set of processes that have been mapped to processing nodes. Interface functions include point-to-point, rendezvous-type send/receive operations (including synchronous, asynchronous, buffered, and ready forms); choosing between a Cartesian or graph-like logical process topology; exchanging data between process pairs (send/receive operations); combining partial results of computations (gather and reduce operations); synchronizing nodes (barrier operation); as well as obtaining network-related information such as the number of processes in the computing session, identity of the current processor to which a process is mapped, and neighboring processes accessible in a logical topology. MPI was originally targeted for distributed memory systems, though implementations for shared memory systems have appeared as these platforms have become more popular. In MPI, all parallelism is explicit; the programmer is responsible for correctly identifying parallelism and implementing parallel algorithms using MPI constructs. The number of tasks dedicated to run a parallel program is static. New tasks cannot be dynamically spawned during run time, though the new MPI-2 standard addresses this issue. The advantages of MPI over older message passing libraries are portability (because MPI has been implemented for almost every distributed memory architecture) and speed (because each implementation is in principle optimized for the hardware on which it runs). Hempel and Walker [43] summarize MPI and its alternatives: The main function of MPI is to communicate data from one process to another. Other mechanisms, such as TCP/IP and CORBA, do essentially the same thing. MPI provides a level of abstraction appropriate for communication of data in scientific computing, whereas TCP/IP is geared to low-level network transport, and CORBA to clientserver interactions... The idea of communicating sequential processes as a model for parallel execution was developed by C.A.R. Hoare in the 1970s, and is the basis of the message passing paradigm. This paradigm assumes a distributed process memory model, i.e., each process has its own local address space. Processes co-operate to perform a task by independently computing with their local data and communicating data with other processes

by explicitly exchanging messages. A process may send or receive a message, or broadcast some data to a whole group of processes, and there is no explicit synchronization with other processes. Technically, this message passing is normally realized by calls to library functions. Message passing provides the most explicit way of programming a parallel computer with physically distributed memory, and is well-suited to this type of machine since there is a good match between the distributed memory model and the distributed hardware. [Although regarded as competing standards], MPI and PVM [Parallel Virtual Machine] were designed for different uses. The design of MPI focused on message passing capabilities, and it is intended to attain high performance on tightly-coupled, homogeneous parallel architectures. PVM was originally intended for use on networks of workstations (NOWs) and addresses issues such as heterogeneity, interoperability, fault tolerance, and resource management—its message passing capabilities are not very sophisticated.

MPI has its detractors, however, such as Per Brinch Hansen, inventor of Concurrent Pascal, the first concurrent programming language. In his evaluation of MPI [41], he states:

The Message-Passing Interface follows in the footsteps of the Unix threads library: both extend a sequential programming language with subroutines for parallel execution and data communication. The MPI routines for synchronous message passing work as expected. However, asynchronous communication is dangerously insecure. It is possible to call a user procedure that inputs a message in a local variable and returns before the input has been completed. This time-dependent error may change a variable, which (conceptually) no longer exists, and therefore may be reused by unrelated procedure calls! Twenty years ago, Concurrent Pascal proved that nontrivial parallel programs can be written exclusively in a secure programming language. Personally, I regard the attempt to replace a parallel programming language and its compiler with insecure procedures as a step backwards in programming technology.

MPI has had considerable impact on the development of middleware and other tools for wireless sensor networks. OMNeT++ [84, 98], discussed later in Section 6.2, supports parallel distributed simulation using one of various communication mechanisms, including MPI. There are some constraints, however: (1) modules can only communicate by sending messages (no direct method call or member access) unless they are mapped to the same processor; (2) no global variables are allowed; (3) a module may not send directly to a submodule of another module; (4) lookahead must be present in the form of link delays; and (5) currently only static topologies are supported.
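For readers unfamiliar with the paradigm, the following minimal Java sketch (not MPI itself) illustrates explicit message passing between two processes that share no state:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Two threads co-operate only by sending and receiving messages over an
    // explicit channel; each works on its own local data.
    public class MessagePassingSketch {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<int[]> channel = new ArrayBlockingQueue<>(16);

            Thread producer = new Thread(() -> {
                int[] localData = {1, 2, 3};       // process-local address space
                try {
                    channel.put(localData);         // explicit send
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    int[] received = channel.take();  // blocking receive
                    int sum = 0;
                    for (int v : received) sum += v;  // local computation
                    System.out.println("sum = " + sum);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }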

Welsh and Mainland take inspiration from MPI in their approach to abstract regions [101], a family of spatial operators that capture local communication within regions of a wireless sensor network, which may be defined in terms of radio connectivity, geographic location, or other node properties. They state that MPI

provides a unified interface for message passing across a large family of parallel machines... MPI has been extremely successful in the parallel processing community as it is high-level enough to shield programmers from most of the details of the underlying machine, yet low-level enough to permit extensive application-specific optimizations. MPI hides the details of the communication hardware and provides efficient implementations of common collective operations, such as broadcast and reduction. We wish to provide communication interfaces that serve a similar role for sensor networks.

UW-API (University of Wisconsin-Madison's Application Programmer's Interface) [5, 81] for sensor network communication is motivated by MPI. Some of the UW-API primitives are to be invoked by a single sensor node; others are for collective communication, such as broadcast and reduction, to be invoked simultaneously by a group of nodes in a geographic region. All operations take place on regions, which users can create with specific primitives. In addition to communication primitives, barrier synchronization is also supported for the sensor nodes that lie within a region. Bakshi and Prasanna [6] have a similar goal in their library of structured communication primitives. In their system, “structured communication” refers to a routing problem where the communication pattern is known in advance, with example patterns including one-to-all (broadcast), all-to-one (data gather), all-to-all, many-to-many, and permutation.

The Open Source Cluster Application Resources (OSCAR) package [29, 66] is an integrated software bundle designed for high performance cluster computing. OSCAR provides the standard Message Passing Interface (MPI) for communication between the parallel computing processes. It has been used in sensor network applications to parallelize data fusion processes, where the sensor network sends its data to the computing cluster through a gateway node.

The actor model used by TinyGALS/galsC, Viptos, and Ptolemy II uses message passing. However, it is more comprehensive than MPI, in that the actor model specifies scheduling and execution semantics.

6.1.3 Port-Based Objects

The port-based object (PBO) [92] is a software abstraction for designing and implementing dynamically reconfigurable real-time software. The software framework was developed for the Chimera multiprocessor real-time operating system (RTOS). A PBO is an independent concurrent process.

PBOs may execute either periodically or aperiodically. A PBO communicates with other PBOs only through its input ports and output ports. PBOs may also have resource ports that connect to sensors and actuators via I/O device drivers. Configuration constants are used to reconfigure generic components for use with specific hardware or applications.

PBOs communicate with each other via state variables stored in global and local tables. Every input and output port and configuration constant is defined as a state variable (SVAR) in the global table, which is stored in shared memory. A PBO can only access its local table, which contains only the subset of data from the global table that is needed by the PBO. Consistency between the global and local tables is maintained by the SVAR mechanism. The system updates the state variables corresponding to input ports prior to the execution of each cycle of a periodic PBO, or before the processing of each event for an aperiodic PBO. During its cycle, a PBO may update the state variables corresponding to the PBO's output ports at any time; the system updates these values in the global table only after the PBO completes its processing for that cycle or event. The system updates configuration constants only during initialization of the PBO.

Although there is no explicit synchronization or communication among processes, updates to the tables only occur at predetermined times, and multiple accesses to the same SVAR in the global table are mutually exclusive. The Chimera PBO implementation uses data replication to maintain data integrity and avoid race conditions. The system performs all transfers between the local and global tables as critical sections, and uses spin-locks to lock the global table. A task busy-waits with the local processor locked until it obtains the lock and goes through its critical section. It is guaranteed that the task holding the global lock is on a different processor and will not be preempted; thus it will release the lock shortly. Since there is only one lock, there is no possibility of deadlock. If the total time that a CPU is locked to transfer a state variable is small compared to the resolution of the system clock, then there is negligible effect on the predictability of the system due to this mechanism locking the local CPU. Since every PBO has its own local table, no explicit synchronization is needed to read from or write to a state variable, and it is assumed that the amount of data communicated via the ports on each cycle of a PBO is relatively small.

Echidna [9] is a related real-time operating system designed for smaller, single-processor, embedded microcontrollers. The design is based on the featherweight port-based object (FPBO) [91]. The application programmer interface (API) for the FPBO is identical to that of the PBO. Whereas PBOs are separate processes, FPBOs all share the same context. The Echidna FPBO implementation takes advantage of context sharing to eliminate the need for local tables, which is especially important since memory in embedded processors is a limited resource.

Access to global data must still be performed as a critical section to maintain data integrity. However, instead of using semaphores, Echidna constrains when preemption can occur, which creates potential implicit blocking.

To summarize, in both the PBO and FPBO models, software components only communicate with other components via SVARs, which are similar to global variables. Updates to an SVAR are made atomically, and the components always read the latest value of the SVAR. The SVAR concept is the motivation behind the TinyGALS strategy of always reading the latest value of a TinyGUYS parameter. However, in TinyGALS, updates to TinyGUYS are buffered until a module has completed execution, since components within a module may be tightly coupled in terms of data dependency. This is more closely related to the local tables in the Chimera PBO implementation than the global tables in the Echidna FPBO implementation. Additionally, there is no possibility of blocking when using the TinyGUYS mechanism.

6.1.4 Click

Click [54, 55] is a flexible, modular software architecture for creating routers. This section provides a detailed description of the constructs and processing in Click and compares it to TinyGALS.

Elements in Click

A Click element is a software module which usually performs a simple computation as a step in packet processing. An element is implemented as a C++ object that may maintain private state. Each element belongs to a single element class, which specifies the code that should be executed when the element processes a packet, as well as the element's initialization procedure and data layout. An element can have any number of input and output ports. There are three types of ports: push, pull, and agnostic. In Click diagrams, push ports are drawn in black, pull ports in white, and agnostic ports with a double outline. An element may also have an optional configuration string which contains additional arguments to pass to the element at router initialization time. Each element supports one or more method interfaces, through which elements communicate at runtime. Every element supports the simple packet-transfer interface, but elements can create and export arbitrary additional interfaces.

A Click router configuration consists of a directed graph, where the vertices are called elements and the edges are called connections. The Click configuration language allows users to define compound elements, which are router configuration fragments that behave like element classes. At initialization time, each use of a compound element is compiled into the corresponding collection of simple elements.

[Figure 6.1: An example Click element, showing the element class (Tee), one input port, two output ports, and the configuration string "2". Source: Eddie Kohler.]

Figure 6.1 shows an example Click element that belongs to the Tee element class, which sends a copy of each incoming packet to each output port. The element has one input port. The element is initialized with the configuration string "2", which in this case configures the element to have two output ports.

Connections in Click. A Click connection represents a possible path for packet handoff and attaches the output port of an element to the input port of another element. A connection is implemented as a single virtual function call. There are no implicit queues on input and output ports, which means that they do not carry the associated performance and complexity costs. Every push output and every pull input must be connected exactly once; push inputs and pull outputs, however, can be connected more than once. A connection between two push ports is a push connection, where packet handoff along the connection is initiated by the source element (or source end, in the case of a chain of push connections). A connection between two pull ports is a pull connection, where packet handoff along the connection is initiated by the destination element (or destination end, in the case of a chain of pull connections). A connection between a push port and a pull port is illegal. An agnostic port behaves as a push port when connected to push ports and as a pull port when connected to pull ports. When a Click router is initialized, the system propagates constraints until every agnostic port has been assigned to either push or pull. However, each agnostic port must be used exclusively as either push or pull. In addition, if packets arriving on an agnostic input might be emitted immediately on an agnostic output, then both input and output must be used in the same way (either push or pull).

Queues in Click must be defined explicitly and appear as Queue elements. A Queue has a push input port (which responds to pushed packets by enqueuing them) and a pull output port (which responds to pull requests by dequeuing packets and returning them). Another type of element is the Click packet scheduler. This is an element with multiple pull inputs and one pull output. The element reacts to requests for packets by choosing one of its inputs, pulling a packet from it, and returning the packet. If the chosen input has no packets ready, the scheduler usually tries other inputs.
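The Queue's dual interface is easy to see in code. The sketch below uses the same simplified style as the earlier fragment (illustrative types, not Click's real Packet and element classes): the same object answers pushes by enqueuing and pulls by dequeuing, returning null when empty.

    #include <cstddef>
    #include <deque>

    struct Packet { int id; };

    // Sketch of a Click-style Queue: a push input port and a pull output port.
    class Queue {
    public:
        explicit Queue(std::size_t capacity) : capacity_(capacity) {}

        // Push input: enqueue the packet, or drop it if the queue is full.
        void push(Packet* p) {
            if (buf_.size() < capacity_) buf_.push_back(p);
            // else: the packet is dropped (a "Queue drop")
        }

        // Pull output: dequeue and return a packet, or null if none is ready.
        Packet* pull() {
            if (buf_.empty()) return nullptr;
            Packet* p = buf_.front();
            buf_.pop_front();
            return p;
        }

    private:
        std::size_t capacity_;
        std::deque<Packet*> buf_;
    };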

[Figure 6.2: A simple Click configuration with sequence diagram (FromDevice, Null, Queue, Null, ToDevice; push(p) calls and returns on the push side, pull() calls returning p on the pull side). Source: Eddie Kohler.]

Click runtime system. Click runs as a kernel thread inside the Linux 2.2 kernel. The kernel thread runs the Click router driver, which loops over the task queue and runs each task using stride scheduling [99]. A task is an element that needs special access to CPU time. Most elements are never placed on the task queue; they are implicitly scheduled when their push() or pull() methods are called. An element should place itself on the task queue if the element frequently initiates push or pull requests without receiving a corresponding request. For example, at initialization time, device-handling elements such as FromDevice and ToDevice place themselves on Click's task queue. When activated, FromDevice polls the device's receive DMA (direct memory access) queue for newly arrived packets and pushes them through the configuration graph; ToDevice examines the device's transmit DMA queue for empty slots and pulls packets from its input. Click is a pure polling system; the device never interrupts the processor. Since Click runs in a single thread, a call to push() or pull() must return to its caller before another task can begin. The router continues to process each pushed packet, following it from element to element along a path in the router graph (a chain of push() calls, or a chain of pull() calls), until the packet is explicitly stored or dropped (and similarly for pull requests). The placement of Queues in the configuration graph determines how CPU scheduling may be performed. Both Queue elements and scheduling elements have a single pull output, so to an element downstream, the elements are indistinguishable. This leads to an ability to create virtual queues, which are compound elements that act like queues but implement behavior more complex than FIFO (first in, first out) queuing. Timers are another way of activating an element besides tasks. An element can have any number of active timers, where each timer calls an arbitrary method when it fires. Click timers are implemented using Linux timer queues.
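The run-to-completion structure of the driver loop described above can be summarized in a few lines. The sketch below substitutes a plain round-robin loop for Click's actual stride scheduler [99] and uses hypothetical names (Task, router_driver); it is meant only to show that each task's push() or pull() chain finishes before the next task runs.

    #include <functional>
    #include <vector>

    // A task is just an element's entry point (e.g., FromDevice's
    // poll-and-push, or ToDevice's check-slot-and-pull).
    using Task = std::function<void()>;

    // Simplified single-threaded router driver. Round-robin stands in for
    // stride scheduling purely to show the run-to-completion structure.
    void router_driver(const std::vector<Task>& task_queue) {
        for (;;) {
            for (const Task& task : task_queue) {
                task();  // the whole push() or pull() chain completes here
            }
        }
    }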

[Figure 6.3: Flowchart for the Click configuration shown in Figure 6.2: poll packet from receive DMA ring; push packet to Queue; if the Queue is full, drop the packet (a Queue drop), otherwise enqueue it; pull packet from Queue; enqueue packet on transmit DMA ring. Source: Eddie Kohler.]

Figure 6.2 shows a simple Click router configuration with a push chain (FromDevice and Null) and a pull chain (Null and ToDevice). The two chains are separated by a Queue element. The Null element simply passes a packet from its input port to its output port; it performs no processing on the packet. Figure 6.3 illustrates the basic execution sequence of Figure 6.2. When the task corresponding to FromDevice is activated, the element polls the receive DMA ring for a packet. FromDevice calls push() on its output port, which calls the push() method of Null. The push() method of Null calls push() on its output port, which calls the push() method of the Queue. The Queue element enqueues the packet if its queue is not full; otherwise it drops the packet. The calls to push() then return in the reverse order. Later, the task corresponding to ToDevice is activated. If there is an empty slot in its transmit DMA ring, ToDevice calls pull() on its input port, which calls the pull() method of Null. The pull() method of Null calls pull() on its input port, which calls the pull() method of the Queue. The Queue element dequeues the packet and returns it through the return of the pull() calls. Note that in the sequence diagram in Figure 6.2, time moves downwards. Data flow (in this case, the packet p) always moves forwards. Control flow moves forward during a push sequence, and moves backward during a pull sequence.

Overhead in Click. Modularity in Click results in two main sources of overhead. The first source of overhead comes from passing packets between elements. This leads to one or two virtual function calls, each of which involves loading the relevant function pointer from a virtual function table, as well as an indirect jump through that function pointer. This overhead is avoidable—the Click distribution contains a tool to eliminate all virtual function calls from a Click configuration. The second source of overhead comes from unnecessarily general element code. Kohler, et al. found that element generality had a relatively small effect on Click's performance, since not many elements in a particular configuration offered much opportunity for specialization [55].

Comparison of Click to TinyGALS. An element in Click is comparable to a component in TinyGALS in the sense that both are objects with private state. Both types of objects (Click elements and TinyGALS components) communicate with other objects via method calls. Rules in Click on connecting elements together are similar to those for connecting components in TinyGALS: push outputs must be connected exactly once, but push inputs may be connected more than once (see Sections 3.2.2 and 3.2.2.2). In Click, there is no fundamental difference between push processing and pull processing at the method-call level; both push and pull processing are sets of method calls that differ only in name.
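The cost being described, and the specialization that removes it, can be pictured as follows. This is not Click's code or its optimizer's output, just a minimal C++ contrast between dispatch through an abstract element interface and a direct, inlinable call.

    struct Packet { int id; };

    // General form: a connection invokes the downstream element through an
    // abstract interface, costing a vtable load plus an indirect jump.
    struct Element {
        virtual void push(Packet* p) = 0;
        virtual ~Element() = default;
    };

    void handoff(Element* next, Packet* p) {
        next->push(p);   // virtual dispatch on every packet
    }

    // Specialized form: once the configuration is fixed, the callee's
    // concrete type is known, so the call is direct and can be inlined.
    struct Counter {
        long count = 0;
        void push(Packet*) { ++count; }
    };

    void handoff_direct(Counter& next, Packet* p) {
        next.push(p);    // no virtual function table involved
    }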

Push processing can be thought of as event-driven computation (if one ignores the polling aspect of Click), where control and data flow downstream in response to an upstream event. Pull processing can be thought of as demand-driven computation, where control flows upstream in order to compute data needed downstream. In Click, the directions of control flow with respect to data flow in the two types of processing are opposite of each other.

[Figure 6.4: Click vs. TinyGALS. The figure contrasts control flow and data flow for Click push processing, Click pull processing, and TinyGALS, with elements C1 through C6 grouped into actors A, B, and C; the Click router thread performs task invocations, while the TinyGALS thread's event scheduler invokes actors from the event queue.]

Figure 6.4 provides a more detailed analysis of the difference in control and data flow between Click and TinyGALS. It shows a push processing chain of four elements connected to a queue, which is connected to a pull processing chain of two elements. In Click, control begins at element C1, flows to the right, and returns after it reaches the Queue. Data (a packet) flows to the right until it reaches the Queue. Note that a compound element in Click does not form the boundary of control flow; if an element inside of a compound element calls a method on its output, control flows to the connected element (recall that a compound element is compiled to a chain of simple elements). Visualizing this configuration as a TinyGALS model, with elements C1 and C2 grouped into an actor A and elements C3 and C4 grouped into an actor B, shows that a TinyGALS actor forms a boundary for control flow. However, in TinyGALS, data flow within an actor is not represented explicitly. Data flow between components in an actor can have a direction different from the link arrow direction.

Data flow between actors, however, always has the same direction as the connection arrow direction (although TinyGUYS provides a possible hidden avenue for data flow between actors; see Section 3.5 for more information). Push processing in Click is equivalent to synchronous communication between components in a TinyGALS actor. Pull processing in Click, however, does not have a natural equivalent in TinyGALS. In Figure 6.4, elements C5 and C6 are grouped into an actor C. If one reverses the arrow directions inside of actor C, so that data flow has the same direction as the connection, control flow in this new TinyGALS model is the same as in Click. However, elements C5 and C6 may have to be rewritten to reflect the fact that C6 is now a source object, rather than a sink object.

In Click, execution is synchronous within each push (or pull) chain, but execution is asynchronous between chains, which are separated by a Queue element. From this global point of view, the execution model of Click is quite similar to the globally asynchronous, locally synchronous execution model of TinyGALS. However, since Click is a pure polling system, it does not respond to events immediately, unlike TinyGALS, which is interrupt-driven and allows preemption to occur in order to process events. Much of this is because Click's design is motivated by high throughput routers, whereas TinyGALS is motivated by power- and resource-constrained hardware platforms. Unlike Click, a TinyGALS system goes to sleep when there are no external events to which to respond.

Also note that the Click Queue element is not equivalent to the queue on a TinyGALS actor input port. In Click, unlike in TinyGALS, arrival of data in a queue does not cause downstream objects to be scheduled. This highlights the fact that Click configurations cannot have two push chains (where the end elements are activated as tasks) separated by a Queue. Unlike TinyGALS, elements in Click have no way of sharing global data; the only way of passing data between Click elements is to add annotations to a packet (information attached to the packet header, but which is not part of the packet data). Additionally, unlike Click, TinyGALS does not contain timers associated with elements, although this can be emulated by linking a CLOCK component with an arbitrary component. The TinyGALS model also does not contain a task queue. However, for backwards compatibility with TinyOS, the TinyGALS runtime system implementation supports TinyOS tasks, which are long running computations placed in the task queue by a TinyOS component method. The scheduler runs tasks in the task queue only after processing all events in the event queue, and tasks can be preempted by hardware interrupts.

Pull processing in sensor networks. Although TinyGALS does not currently use pull processing, the following example by Jie Liu, given in Yang Zhao's paper [109], illustrates a situation in which pull processing is desirable for eliminating unnecessary computation. Figure 6.5 shows a sensor network application in which four nodes cooperate to detect intruders. Each node is only capable of detecting intruders within a limited range and has a limited battery life. Node A has more power and functionality than other nodes in the system. Communication with other nodes consumes more power than performing local computations, so nodes should send data only when necessary. It is known that an intruder is most likely to come from the west, somewhat likely to come from the south, but very unlikely to come from the east or north. Under these assumptions, node A may want to pull data from other nodes only when needed.

[Figure 6.5: A sensor network application (nodes A, B, C, and D, with north at the top).]

Figure 6.6 shows one possible configuration of this kind of pull processing for the application in Figure 6.5. The center component is similar to the Click scheduler element. This example also demonstrates a way to perform distributed multitasking: node D (and others) may be free to perform other computations while node A performs most of the intrusion detection. This could be an extension to the current single-node architecture of TinyGALS.

[Figure 6.6: Pull processing across multiple nodes (nodes A, B, C, and D).]

6.1.5 Click and Ptolemy II

The MESCAL project has created a tool called Teepee [71], which is based on Ptolemy II and implements the Click model of computation. The CI (component interaction) domain [16] in Ptolemy II models systems that contain both event-driven and demand-driven styles of computation. CI is motivated by the push/pull interaction between data producers and consumers in middleware services such as the CORBA event service. CI actors can be active (i.e., have their own thread of execution) or passive (triggered by an active actor). There is a natural correlation between the CI domain and Click. Therefore, CI and Click could be leveraged to create an implementation of TinyGALS in Ptolemy II, possibly using the ClassWrapper actor to model TinyGALS components.

6.1.6 Timed Multitasking

Timed multitasking (TM) [69] is an event-triggered programming model that takes a time-centric approach to real-time programming but controls timing properties through deadlines and events rather than time triggers. Software components in TM are called actors. Actors in a TM model declare their computing functionality and also specify their execution requirements in terms of trigger conditions, execution time, and deadlines. An actor represents a sequence of reactions, where a reaction is a finite piece of computation. Actors have state, which carries from one reaction to another. Actors can only communicate with other actors and the physical world through ports. Unlike method calls in object-oriented models, interaction with the ports of an actor may not directly transfer the flow of control to another actor. The system activates an actor when its trigger condition is satisfied. A trigger condition can be built using real-time physical events, communication packets, and/or messages from other actors. Triggers must be responsible, which means that once triggered, an actor should not need any additional data to complete its finite computation. If there are enough resources at run time, then the system grants the actor at least the declared execution time before it reaches its deadline. The system makes the results of the execution available to other actors and the physical world only at the deadline time. In cases where an actor cannot finish by its deadline, the TM model includes an overrun handler to preserve the timing determinism of all other actors and allow an actor that violates the deadline to come to a quiescent state.

The communication among the actors has event semantics, unlike state semantics. In a TM model, every piece of data is produced and consumed exactly once; the sender of a communication is never blocked on writing, and, due to the implementation of TM in Ptolemy II, actors are never blocked on reading. The event semantics can be implemented by FIFO queues. Events on a connection between two actors are represented by a global data structure, which contains the communicating data, a mutual-exclusion lock to guard the access to the variable if necessary, and a flag indicating whether the event has been consumed.
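This event representation is easy to picture in code. The struct below is a sketch based only on the description above; the field and method names are illustrative, not taken from the TM implementation in [69].

    #include <mutex>

    // Sketch of an event on a connection between two TM actors: the
    // communicated data, a lock guarding it, and a consumed flag that
    // yields exactly-once (event) semantics.
    template <typename T>
    struct TMEvent {
        T data{};               // the communicating data
        std::mutex lock;        // mutual-exclusion lock, where necessary
        bool consumed = true;   // no unread event initially

        void produce(const T& value) {
            std::lock_guard<std::mutex> guard(lock);
            data = value;
            consumed = false;   // the sender is never blocked on writing
        }

        bool consume(T& out) {
            std::lock_guard<std::mutex> guard(lock);
            if (consumed) return false;  // nothing new; the reader is not blocked
            out = data;
            consumed = true;             // each event is consumed exactly once
            return true;
        }
    };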

The TM runtime system uses an event dispatcher to trigger a task when a new event is received at its port. There are two types of actors: interrupt service routines (ISRs), which respond to external events, and tasks, which are triggered entirely by events produced by peer actors. These two types do not intersect. ISRs do not have triggering rules. Conceptually, an ISR usually appears as a source actor or a port that transfers events into the model. An ISR is synthesized as an independent thread. Tasks have a much richer set of interfaces than ISRs and have a set of methods that define the split-phase reaction of a task. Liu and Lee [69] describe a method for generating the interfaces and interactions among TM actors into an imperative language like C.

Section 3.2 suggested that a partial method of reducing non-determinacy in TinyGALS programs due to one or more interrupts during an actor iteration is to delay producing outputs from an actor until the end of its iteration. This is similar to the TM method of only producing outputs at the end of an actor's deadline.

6.2 Design, Simulation, and Deployment Environments

A number of frameworks for designing, simulating, and deploying wireless systems exist, though none include all of the capabilities of Viptos. Some information presented in this section is excerpted from papers on VisualSense [8] and Viptos [21, 22].

6.2.1 Design and simulation environments

ns-2 [77] is a well-established, open-source network simulator. It is a discrete-event simulator with extensive support for simulating TCP/IP, routing, and multicast protocols over wired and wireless (local and satellite) networks. Wireless and mobility support in ns-2 comes from the Monarch project, which provides channel models and wireless network layer components in the physical, link, and routing layers [14].

SensorSim [79] builds on ns-2 and claims power models and sensor channel models. A power model consists of an energy provider (the battery) and a set of energy consumers (CPU, radio, and sensors). An energy consumer can have several modes, each corresponding to a different trade-off between performance and power. The sensor channels model the dynamic interaction between the physical environment and the sensor nodes. SensorSim also claims hybrid simulation in which real sensor nodes can participate. Unfortunately, SensorSim is no longer under development and will not be publicly released.

OPNET Modeler [78] is a commercial tool that offers sophisticated modeling and simulation of communication networks. An OPNET model is hierarchical, where the top level contains the communication nodes and the topology of the network. Each node can be constructed from software components, called processes, in a block-diagram fashion, and each process can be constructed using finite state machine (FSM) models. In conventional OPNET models, nodes are connected by static links. The OPNET Wireless Module provides support for wireless and mobile communications. It uses a 13-stage "transceiver pipeline" to dynamically determine the connectivity and propagation effects among nodes. Users can specify transceiver frequency, bandwidth, power, and other characteristics. The transceiver pipeline stages use these characteristics to calculate the average power level of the received signals to determine whether the receiver can receive this signal. OPNET also supports antenna gain patterns and terrain models.

OMNeT++ [98] is an open source tool for discrete-event modeling. It shares many concepts, solutions, and features with OPNET. But instead of using FSM models for processes, OMNeT++ defines a component interface for the basic module, with an object-oriented approach similar to the abstract semantics of Ptolemy II [28]. With the Mobility Framework extension, the NesCT tool of the EYES WSN project allows users to run TinyOS applications directly in OMNeT++ simulations.

J-Sim [97] is an open-source, component-based, compositional network simulation environment developed entirely in Java. A new wireless sensor framework [90] builds upon the autonomous component architecture (ACA) and the extensible internetworking framework (INET) of J-Sim, and provides an object-oriented definition of (1) target, sensor, and sink nodes, (2) sensor and wireless communication channels, and (3) physical media such as seismic channels, mobility models, and power models (both energy-producing and energy-consuming components). Application-specific models can be defined by sub-classing classes in the simulation framework and customizing their behaviors. The framework uses a discrete-event simulator to execute the entire model. It also includes a set of classes and mechanisms to realize network emulation. This new framework extends the notion of network emulation to Berkeley Mica mote-based wireless sensor networks.

Em* [34] is a toolsuite for developing sensor network applications on Linux-based hardware platforms called microservers. It supports deployment, simulation, emulation, and visualization of live systems, both real and simulated. Em* modules are implemented as user-space processes that communicate through message passing via device files. EmTOS [35] is an extension to Em* that enables an entire nesC/TinyOS application to run as a single module in an Em* system. The EmTOS wrapper library is similar to the TOSSIM simulated device library. EmTOS modules are restricted to using the Linux scheduler as the main programming model. This means that the minimum granularity of a timer is 10 milliseconds, corresponding to the Linux jiffy clock that is part of the scheduler in the Linux 2.4 kernel.

Prowler [86] is a probabilistic wireless network simulator running under MATLAB and can simulate wireless distributed systems, from the application to the physical communication layer. Prowler is an event-driven simulator that can be set to operate in either deterministic mode (to produce replicable results while testing the application) or in probabilistic mode (to simulate the nondeterministic nature of the communication channel and the low-level communication protocol of the motes). It can incorporate an arbitrary number of motes, on arbitrary (possibly dynamic) topology, and it was designed to be easily embedded into optimization algorithms. Although Prowler provides a generic simulation environment, its current target platform is the Berkeley Mica mote running TinyOS.

GloMoSim (Global Mobile system Simulator), from UCLA, is a scalable environment for parallel simulation of wireless systems [106]. It relies on Parsec, a C-based simulation language for sequential and parallel execution of discrete-event simulation models. GloMoSim is designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API, similar to the OSI (Open Systems Interconnection) seven-layer network architecture. Bagrodia founded Scalable Network Technologies, Inc., which expanded and further developed GloMoSim into a commercial tool called QualNet, which supports both wired and wireless networks.

TinyViz [65] is a Java-based graphical user interface for TOSSIM. TinyViz supports software plugins that watch for events coming from the simulation—such as debug messages and radio messages—and react by drawing information on the display, setting simulation parameters, or actuating the simulation itself, for example, by setting the sensor values that simulated motes read. Physical environment data from the network is extracted with SerialForwarder, a utility distributed with TinyOS that collects TinyOS packets sent to a mote base station attached to a PC and forwards them through the serial port.

TinyViz includes a radio model plugin with two built-in models: "Empirical" (based on an outdoor trace of packet connectivity with the RFM1000 radios) and "Fixed radius" (all motes within a given fixed distance of each other have perfect connectivity, and no connectivity to other motes).

DyMND-EE [110] is a wireless sensor network simulator based on Ptolemy II and uses Em* to run nesC code in a Linux environment, using the FUSD kernel module to provide connections between simulated nodes and the DyMND-EE simulation manager. DyMND-EE is similar to Viptos, except that it requires modification to the nesC source code in order to use simulated sensor and other devices. One interesting part of this project is the DyMND Execution Sequencer (DES) user interface, which allows users to graphically specify the deployment topology of a sensor network, including target positions and trajectories, and to generate an XML configuration file, along with any properties files required by external runtime environments like Em*. DES generates a model encompassing the full sensor network from this XML configuration and existing Ptolemy II descriptions of the required actors. This generative technique is similar to the metaprogramming techniques presented in Chapter 5.

Other simulators used in the TinyOS community for cycle-accurate simulation/emulation of the Atmel AVR (the processor used in the Mica mote series) instruction set include ATEMU [80] and Avrora [96]. ATEMU simulates the radio and its transmissions at the bit level with precise timing. Avrora simulates a byte-oriented interface to the radio; it works at the byte level with precise timing, and its simulation speed scales much better than ATEMU for a large number of nodes.

All of these systems provide extension points where model builders can define functionality by adding code. Some are also open-source software. All except Em* provide some form of discrete-event simulation, but none provide the ability that Viptos inherits from Ptolemy II to integrate diverse models of computation, such as continuous-time, dataflow, synchronous/reactive, and time-triggered. Such models would have to be built with low-level code. This capability can be used, for example, to model the physical environment, energy consumption and production, signal processing, or real-time software behavior, as well as the physical dynamics of mobility of sensor nodes. Viptos and Ptolemy II support hierarchical nesting of heterogeneous models of computation [28], and both support simulation of heterogeneous networks. They also appear to be the only frameworks to provide a modern type system at the actor level (vs. the code level) [105]. They also appear to be unique among these modeling environments in that FSM models can be arbitrarily nested with other models, i.e., they are not restricted to be leaf nodes [33]. None of the other environments provide the ability to transition from high-level modeling to real code simulation and deployment.

6.2.2 TinyOS development and editing environments

GRATIS II (Graphical Development Environment for TinyOS) is built on top of GME 3 (Generic Modeling Environment). The TinyOS component library is available as graphical blocks within GRATIS II. Given a valid model, the GRATIS II code generator can transform all the interface and wiring information into a set of nesC target files, which nc2moml can then import into Viptos for simulation. However, GRATIS II was developed mainly for static analysis of TinyOS component graphs and does not support simulation.

TinyDT is a TinyOS 1.x plugin for the Eclipse platform that implements an IDE (integrated development environment) for TinyOS/nesC development. This open source project features syntax highlighting of nesC code, code navigation, code completion for interface members, team development support (through Eclipse-CVS integration), automatic build support, support for multiple target platforms and sensor boards, and support for multiple TinyOS source trees. TinyDT uses a Java-based nesC parser implemented using ANTLR to build an in-memory representation of the actual nesC application, which includes component hierarchy, wirings, interfaces, and the JavaDoc-style nesC documentation. TinyOS IDE is another Eclipse plugin that supports TinyOS project development and provides nesC syntax highlighting. Both TinyDT and TinyOS IDE complement Viptos in that they can be used to create and edit the source code for new TinyOS library components.

6.2.3 Programming and deployment environments

Sun Microsystems Laboratories has created a Java-based wireless sensor network platform called Sun SPOT (Small Programmable Object Technology) [88]. The Sun SPOT is based on a 32-bit 180 MHz ARM920T core with 512 KB of RAM and 4 MB of flash memory, and a CC2420 802.15.4 radio with an effective range of about 80 meters. It runs the Squawk VM [85], a small J2ME-compliant Java virtual machine. SPOTWorld is "an integrated management, deployment, debugging and programming tool" for Sun SPOTs [87, 89]. This graphical tool can run stand-alone or be integrated with NetBeans. SPOTWorld depicts each automatically discovered Sun SPOT, and users can manage each device, e.g., to get device status information, set a persistent name property, reset the device, or deploy code. The Sun SPOT supports multiple concurrently running applications, and users can graphically address individual running applications in order to pause, resume, or exit each one, or start any of the available applications. SPOTWorld also has an experimental feature that allows users to drag an application from one SPOT to the next, even as the application runs. SPOTWorld enables the user to compile a collection of applications and deploy the resulting file over the air to selected Sun SPOTs.

The developers of the Sun SPOT and SPOTWorld have expressed interest in integrating features of Viptos with SPOTWorld. Future versions of these tools can benefit from cross-fertilization of the techniques presented in this dissertation.

6.3 Summary

This chapter presented a number of frameworks related to TinyGALS/galsC, Viptos, and the metaprogramming techniques presented in earlier chapters.

Chapter 7

Conclusion

Developing software for wireless sensor networks today is an error-prone and tedious process that involves patching together many different tools and techniques, usually using very low-level code. This dissertation discussed raising the conceptual level of designing, simulating, and deploying wireless sensor network applications by using actor-oriented programming tools and techniques. Actor-oriented programming provides a way to unify the layers and stages of application development—between the operating system, middleware, and macroprogramming layers, and between the deployment, simulation, and design stages.

TinyGALS provides a globally asynchronous, node-centric, locally synchronous programming model that combines an actor-oriented (message-oriented) model with an object-oriented (procedure-oriented) model, which allows application developers to use high-level actors as a first-order programming concept, but still allows them to use a low-level programming model when needed. This combination balances fast response with an easy-to-understand programming model that puts application tasks first. galsC is a language that implements the TinyGALS programming model, and this dissertation described its syntax and the high-level type checking, concurrency error detection, and scheduling and communication code generation facilities provided by the galsC compiler.

Viptos provides an integrated, actor-oriented design, simulation, and deployment environment for wireless sensor network applications. Application developers can use Viptos to create abstract models of their intended systems and refine them down to low-level code that can be transferred to target hardware.

Various metaprogramming and generative programming techniques described in this dissertation using higher-order actors in Ptalon, Ptolemy II, VisualSense, and/or Viptos enable wireless sensor network application developers to create high-level descriptions or models and automatically generate sensor network simulation scenarios.

All of the tools I developed and described in this dissertation are open-source and freely available on the web. The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks.

Bibliography

[1] Gul Agha. ACTORS: A Model of Concurrent Computation in Distributed Systems. The MIT Press Series in Artificial Intelligence. MIT Press, Cambridge, 1986.

[2] Gul A. Agha, Ian A. Mason, Scott F. Smith, and Carolyn L. Talcott. A foundation for actor computation. Journal of Functional Programming, 7(1):1–72, 1997.

[3] Gul A. Agha, Svend Frølund, WooYoung Kim, Rajendra Panwar, Anna Patterson, and Daniel Sturman. Abstraction and modularity mechanisms for concurrent computing. IEEE Parallel and Distributed Technology: Systems and Applications, 1(2):3–14, 1993.

[4] Todd R. Andel and Alec Yasinsac. On the credibility of MANET simulations. Computer, 39(7):48–54, July 2006.

[5] Amol Bakshi and Viktor K. Prasanna. Algorithm design and synthesis for wireless sensor networks. In ICPP '04: Proceedings of the 2004 International Conference on Parallel Processing, pages 423–430, Washington, DC, USA, 2004. IEEE Computer Society.

[6] Amol Bakshi and Viktor K. Prasanna. Structured communication in single hop sensor networks. In Proceedings of the First European Workshop on Wireless Sensor Networks (EWSN 2004), pages 138–153, January 2004.

[7] Felice Balarin, Massimiliano Chiodo, Paolo Giusto, Harry Hsieh, Attila Jurecska, Luciano Lavagno, Alberto Sangiovanni-Vincentelli, Ellen M. Sentovich, and Kei Suzuki. Synthesis of software programs for embedded control applications. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 18(6):834–849, June 1999.

[8] Philip Baldwin, Sanjeev Kohli, Edward A. Lee, Xiaojun Liu, and Yang Zhao. Modeling of sensor nets in Ptolemy II. In IPSN '04: Proceedings of the Third International Symposium on Information Processing in Sensor Networks, pages 359–368, New York, NY, USA, 2004. ACM Press.

[9] Kathleen Baynes, Chris Collins, Eric Fiterman, Brinda Ganesh, Paul Kohout, Christine Smit, Tiebing Zhang, and Bruce Jacob. The performance and energy consumption of embedded real-time operating systems. IEEE Transactions on Computers, 52(11):1454–1469, 2003.

[10] Albert Benveniste, Benoît Caillaud, and Paul Le Guernic. From synchrony to asynchrony. In CONCUR '99: Proceedings of the 10th International Conference on Concurrency Theory, pages 162–177, London, UK, 1999. Springer-Verlag.

[11] Jan Beutel. Fast-prototyping using the BTnode platform. In DATE '06: Proceedings of the Conference on Design, Automation and Test in Europe, pages 977–982, 3001 Leuven, Belgium, 2006. European Design and Automation Association.

[12] J. Bhasker. A SystemC Primer. Star Galaxy Publishing, second edition, 2004.

[13] Shah Bhatti, James Carlson, Hui Dai, Jing Deng, Jeff Rose, Anmol Sheth, Brian Shucker, Charles Gruenwald, Adam Torgerson, and Richard Han. MANTIS OS: an embedded multithreaded operating system for wireless micro sensor platforms. Mobile Networks and Applications, 10(4):563–579, 2005.

[14] Josh Broch, David A. Maltz, David B. Johnson, Yih-Chun Hu, and Jorjeta Jetcheva. A performance comparison of multi-hop wireless ad hoc network routing protocols. In MobiCom '98: Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, pages 85–97, New York, NY, USA, 1998. ACM Press.

[15] Christopher Brooks, Edward A. Lee, Xiaojun Liu, Stephen Neuendorffer, Yang Zhao, and Haiyang Zheng (eds.). Heterogeneous concurrent modeling and design in Java (Volume 1: Introduction to Ptolemy II). Technical Report UCB/EECS-2007-7, EECS Department, University of California, Berkeley, 11 January 2007.

[16] Christopher Brooks, Edward A. Lee, Xiaojun Liu, Stephen Neuendorffer, Yang Zhao, and Haiyang Zheng (eds.). Heterogeneous concurrent modeling and design in Java (Volume 3: Ptolemy II domains). Technical Report UCB/ERL M05/23, EECS Department, University of California, Berkeley, July 2005.

[17] Frederick P. Brooks, Jr. The Mythical Man-Month: Essays on Software Engineering. Addison Wesley Longman, Inc., 20th Anniversary Edition, 1995.

[18] Adam Cataldo, Elaine Cheong, Thomas Huining Feng, Edward A. Lee, and Andrew Christopher Mihal. A formalism for higher-order composition languages that satisfies the Church-Rosser property. Technical Report UCB/EECS-2006-48, EECS Department, University of California, Berkeley, 9 May 2006.

[19] James Adam Cataldo. The Power of Higher-Order Composition Languages in System Design. PhD thesis, EECS Department, University of California, Berkeley, 18 December 2006.

[20] Elaine Cheong. Design and implementation of TinyGALS: A programming model for event-driven embedded systems. Master's thesis, University of California, Berkeley, May 2003. Published as Technical Memorandum UCB/ERL M03/14.

[21] Elaine Cheong, Edward A. Lee, and Yang Zhao. Viptos: A graphical development and simulation environment for TinyOS-based wireless sensor networks. Technical Report UCB/EECS-2006-15, EECS Department, University of California, Berkeley, February 2006.

[22] Elaine Cheong, Edward A. Lee, and Yang Zhao. Joint modeling and design of wireless networks and sensor node software. Technical Report UCB/EECS-2006-150, EECS Department, University of California, Berkeley, November 2006.

[23] Elaine Cheong, Judy Liebman, Jie Liu, and Feng Zhao. TinyGALS: A programming model for event-driven embedded systems. In Proceedings of the Eighteenth Annual ACM Symposium on Applied Computing, pages 698–704, March 2003.

[24] Elaine Cheong and Jie Liu. galsC: A language for event-driven embedded systems. In Proceedings of Design, Automation and Test in Europe (DATE05), 7–11 March 2005.

[25] Elaine Cheong and Jie Liu. galsC: A language for event-driven embedded systems. Memorandum UCB/ERL M04/7, EECS Department, University of California, Berkeley, April 2004.

[26] Krzysztof Czarnecki. Overview of generative software development. In Unconventional Programming Paradigms (UPP) 2004, volume 3566/2005 of Lecture Notes in Computer Science, pages 326–341. Springer Berlin / Heidelberg, 2005.

[27] Adam Dunkels, Björn Grönvall, and Thiemo Voigt. Contiki - a lightweight and flexible operating system for tiny networked sensors. In Proceedings of the First IEEE Workshop on Embedded Networked Sensors (EmNetS-I), Tampa, Florida, USA, November 2004.

[28] Johan Eker, Jörn W. Janneck, Edward A. Lee, Jie Liu, Xiaojun Liu, Jozsef Ludvig, Stephen Neuendorffer, Sonia Sachs, and Yuhong Xiong. Taming heterogeneity—the Ptolemy approach. Proceedings of the IEEE, 91(1):127–144, January 2003.

[29] A. Ferreira, J. Pinto, C. Montez, M. R. Dantas, and Martius Rodriguez. A middleware for OSCAR and wireless sensor network environments. In HPCS '07: Proceedings of the 21st International Symposium on High Performance Computing Systems and Applications, 2007. IEEE.

[30] Chien-Liang Fok, Gruia-Catalin Roman, and Chenyang Lu. Mobile agent middleware for sensor networks: An application case study. In Proceedings of the 4th International Conference on Information Processing in Sensor Networks (IPSN'05), pages 382–387, Piscataway, NJ, USA, April 2005. IEEE Press.

[31] Massimo Franceschetti and Ronald Meester. Navigation in small world networks: a scale-free continuum model. Journal of Applied Probability, 43(4):1173–1180, 2006.

[32] David Gay, Phil Levis, Rob von Behren, Matt Welsh, Eric Brewer, and David Culler. The nesC language: A holistic approach to networked embedded systems. In Proceedings of Programming Language Design and Implementation (PLDI) 2003, June 2003.

[33] Alain Girault, Bilung Lee, and Edward A. Lee. Hierarchical finite state machines with multiple concurrency models. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 18(6):742–760, June 1999.

[34] Lewis Girod, Jeremy Elson, Alberto Cerpa, Thanos Stathopoulos, Nithya Ramanathan, and Deborah Estrin. EmStar: A software environment for developing and deploying wireless sensor networks. In ATEC '04: Proceedings of the USENIX Annual Technical Conference 2004, Berkeley, CA, USA, 2004. USENIX Association.

[35] Lewis Girod, Thanos Stathopoulos, Nithya Ramanathan, Jeremy Elson, Deborah Estrin, Eric Osterweil, and Tom Schoellhammer. A system for simulation, emulation, and deployment of heterogeneous sensor networks. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 201–213, New York, NY, USA, 2004. ACM Press.

[36] Omprakash Gnawali, Ki-Young Jang, Jeongyeup Paek, Marcos Vieira, Ramesh Govindan, Ben Greenstein, August Joki, Deborah Estrin, and Eddie Kohler. The Tenet architecture for tiered sensor networks. In SenSys '06: Proceedings of the 4th International Conference on Embedded Networked Sensor Systems, pages 153–166, New York, NY, USA, 2006. ACM Press.

[37] Ben Greenstein, Eddie Kohler, and Deborah Estrin. A sensor network application construction kit (SNACK). In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 69–80, New York, NY, USA, 2004. ACM Press.

[38] Ramakrishna Gummadi, Omprakash Gnawali, and Ramesh Govindan. Macro-programming wireless sensor networks using Kairos. In Proceedings of the International Conference on Distributed Computing in Sensor Systems (DCOSS), volume 3560/2005 of Lecture Notes in Computer Science, pages 126–140. Springer Berlin / Heidelberg, 2005.

[39] Nicolas Halbwachs. Synchronous Programming of Reactive Systems. Kluwer Academic Publishers, 1993.

[40] Chih-Chieh Han, Ram Kumar, Roy Shea, Eddie Kohler, and Mani Srivastava. A dynamic operating system for sensor nodes. In MobiSys '05: Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, pages 163–176, New York, NY, USA, 2005. ACM Press.

[41] Per Brinch Hansen. An evaluation of the message-passing interface. ACM SIGPLAN Notices, 33(3):65–72, 1998.

[42] David Harel, Hagi Lachover, Amnon Naamad, Amir Pnueli, Michal Politi, Rivi Sherman, Aharon Shtull-Trauring, and Mark Trakhtenbrot. STATEMATE: A working environment for the development of complex reactive systems. IEEE Transactions on Software Engineering, 16(4):403–414, April 1990.

[43] Rolf Hempel and David W. Walker. The emergence of the MPI message passing standard for parallel computing. Computer Standards & Interfaces, 21(1):51–62, 1999.

[44] Thomas A. Henzinger, Benjamin Horowitz, and Christoph Meyer Kirsch. Embedded control systems development with Giotto. In Proceedings of the ACM SIGPLAN Workshop on Languages, Compilers and Tools for Embedded Systems (LCTES'01), pages 64–72, 2001.

[45] Maurice Herlihy. A methodology for implementing highly concurrent data objects. ACM Transactions on Programming Languages and Systems, 15(5):745–770, November 1993.

[46] Carl Hewitt. Viewing control structures as patterns of passing messages. Journal of Artificial Intelligence, 8(3):323–364, 1977.

[47] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and Kristofer Pister. System architecture directions for networked sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 93–104, 2000.

[48] Jason Hill. A software architecture supporting networked sensors. Master's thesis, University of California, Berkeley, 2000.

[49] Christopher Hylands, Edward Lee, Jie Liu, Xiaojun Liu, Stephen Neuendorffer, Yuhong Xiong, Yang Zhao, and Haiyang Zheng. Overview of the Ptolemy project. Technical Report UCB/ERL M03/25, University of California, Berkeley, July 2003.

[50] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed diffusion: a scalable and robust communication paradigm for sensor networks. In MobiCom '00: Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, pages 56–67, New York, NY, USA, 2000. ACM Press.

[51] Anoop Iyer and Diana Marculescu. Power and performance evaluation of globally asynchronous locally synchronous processors. In Proceedings of the 29th Annual International Symposium on Computer Architecture, pages 158–168, 2002.

[52] Gilles Kahn. The semantics of a simple language for parallel programming. In Proceedings of the IFIP Congress 74, pages 471–475, Paris, France, 1974. International Federation for Information Processing, North-Holland Publishing Company.

[53] Oliver Kasten and Kay Römer. Beyond event handlers: programming wireless sensors with attributed state machines. In IPSN '05: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, pages 45–52, Piscataway, NJ, USA, 2005. IEEE Press.

[54] Eddie Kohler. The Click Modular Router. PhD thesis, Massachusetts Institute of Technology, November 2000.

[55] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The Click modular router. ACM Transactions on Computer Systems (TOCS), 18(3):263–297, 2000.

[56] YoungMin Kwon, Kirill Mechitov, Sameer Sundresh, and Gul Agha. ActorNet: an actor platform for wireless sensor networks. In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1297–1300, New York, NY, USA, 2006. ACM Press.

[57] William W. LaRue, Sherry Solden, and Bishnupriya Bhattacharya. Functional and performance modeling of concurrency in VCC. In Concurrency and Hardware Design, Advances in Petri Nets, pages 191–227, London, UK, 2002. Springer-Verlag.

[58] Hugh C. Lauer and Roger M. Needham. On the duality of operating system structures. In Proc. Second International Symposium on Operating Systems, IRIA, October 1978. Reprinted in Operating Systems Review, 13(2), April 1979, pp. 3–19.

[59] Edward A. Lee. Embedded software. Advances in Computers, 56, 2002.

[60] Edward A. Lee. Modeling concurrent real-time processes using discrete events. Annals of Software Engineering, 7(1–4):25–45, 1999.

[61] Edward A. Lee and Steve Neuendorffer. MoML – a modeling markup language in XML – version 0.4. Technical Report UCB/ERL M00/12, EECS Department, University of California, Berkeley, 2000.

[62] Edward A. Lee and Thomas M. Parks. Dataflow process networks. Proceedings of the IEEE, 83(5):773–801, May 1995.

[63] Man-Kit Leung. Reviving the value of WSN simulation results through Viptos extensions. Spring 2007 EE290Q (Wireless Sensor Networks) Class Project Report, 9 May 2007.

[64] Philip Levis and David Culler. Maté: a tiny virtual machine for sensor networks. In ASPLOS-X: Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 85–95, New York, NY, USA, 2002. ACM Press.

[65] Philip Levis, Nelson Lee, Matt Welsh, and David Culler. TOSSIM: accurate and scalable simulation of entire TinyOS applications. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (SenSys 2003), pages 126–137, 2003.

[66] Hong Lin, John Rushing, Sara J. Graves, Steve Tanner, and Evans Criswell. Real time target tracking with binary sensor networks and parallel computing. In Proceedings of 2006 IEEE International Conference on Granular Computing, pages 112–117, 10–12 May 2006.

[67] Jie Liu, Maurice Chu, Juan Liu, James Reich, and Feng Zhao. State-centric programming for sensor-actuator network systems. IEEE Pervasive Computing, 2(4):50–62, 2003.

[68] Jie Liu, Elaine Cheong, and Feng Zhao. Semantics-based optimization across uncoordinated tasks in networked embedded systems. In EMSOFT '05: Proceedings of the 5th ACM International Conference on Embedded Software, pages 273–281, New York, NY, USA, 2005. ACM Press.

[69] Jie Liu and Edward A. Lee. Timed multitasking for real-time embedded software. IEEE Control Systems Magazine, pages 65–75, February 2003.

[70] Samuel Madden, Michael J. Franklin, Joseph M. Hellerstein, and Wei Hong. The design of an acquisitional query processor for sensor networks. In SIGMOD '03: Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, pages 491–502, New York, NY, USA, 2003. ACM Press.

[71] Andrew Mihal and Kurt Keutzer. Mapping concurrent applications onto architectural platforms. In Axel Jantsch and Hannu Tenhunen, editors, Networks on Chip, chapter 3, pages 39–59. Kluwer Academic Publishers, 2003.

[72] Thomas J. Mowbray, William A. Ruh, and Richard M. Soley. Inside CORBA: Distributed Object Standards and Applications. Addison-Wesley, 1997.

[73] Walid A. Najjar, Edward A. Lee, and Guang R. Gao. Advances in the dataflow computational model. Parallel Computing, 25(13–14):1907–1929, 1999.

[74] Stephen A. Neuendorffer. Actor-Oriented Metaprogramming. PhD thesis, EECS Department, University of California, Berkeley, 2005.

[75] Ryan Newton, Arvind, and Matt Welsh. Building up to macroprogramming: an intermediate language for sensor networks. In IPSN '05: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, pages 37–44, Piscataway, NJ, USA, 2005. IEEE Press.

[76] Ryan Newton, Greg Morrisett, and Matt Welsh. The Regiment macroprogramming system. In IPSN '07: Proceedings of the 6th International Conference on Information Processing in Sensor Networks, pages 489–498, New York, NY, USA, 2007. ACM Press.

[77] The network simulator - ns-2. http://www.isi.edu/nsnam/ns.

[78] OPNET Technologies, Inc. OPNET Modeler. http://www.opnet.com.

[79] Sung Park, Andreas Savvides, and Mani B. Srivastava. SensorSim: a simulation framework for sensor networks. In MSWIM '00: Proceedings of the 3rd ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pages 104–111, New York, NY, USA, 2000. ACM Press.

[80] Jonathan Polley, Dionysys Blazakis, Jonathan McGee, Dan Rusk, John S. Baras, and Manish Karir. ATEMU: A fine-grained sensor network simulator. In Proceedings of the First IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (SECON '04), 2004.

[81] Parmesh Ramanathan, Kuang-Ching Wang, Kewal Saluja, and Thomas Clouqueur. UW-API: A network routing application programmer's interface (draft version 1.2). Technical report, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, 29 October 2001.

[82] Hideki John Reekie. Realtime Signal Processing: Dataflow, Visual, and Functional Programming. PhD thesis, University of Technology at Sydney, 1995.

[83] Thomas J. Santner, Brian J. Williams, and William I. Notz. The Design and Analysis of Computer Experiments. Springer Series in Statistics. Springer-Verlag New York, Inc., 2003.

[84] Y. Ahmet Sekercioglu, András Varga, and Gregory K. Egan. Parallel simulation made easy with OMNeT++. In Proceedings of the 15th European Simulation Symposium (ESS'03), October 2003.

[85] Doug Simon, Cristina Cifuentes, Dave Cleal, John Daniels, and Derek White. Java on the bare metal of wireless sensor devices: The Squawk Java virtual machine. In VEE '06: Proceedings of the 2nd International Conference on Virtual Execution Environments, pages 78–88, New York, NY, USA, 2006. ACM Press.

[86] Gyula Simon, Péter Völgyesi, Miklós Maróti, and Ákos Lédeczi. Simulation-based optimization of communication protocols for large-scale wireless sensor networks. In Proceedings of the 2003 IEEE Aerospace Conference, volume 3, pages 1339–1346, 8–15 March 2003.

[87] Randall B. Smith, Bernard Horan, John Daniels, and Dave Cleal. Programming the world with Sun SPOTs. In OOPSLA '06: Companion to the 21st ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications, pages 706–707, New York, NY, USA, 2006. ACM Press.

[88] Randall B. Smith. SPOTWorld and the Sun SPOT. In IPSN '07: Proceedings of the 6th International Conference on Information Processing in Sensor Networks, pages 565–566, New York, NY, USA, 2007. ACM Press.

[89] Randall B. Smith, Cristina Cifuentes, and Doug Simon. Enabling Java for small wireless devices with Squawk and SpotWorld. In 2nd Workshop on Building Software for Pervasive Computing, 16 October 2005.

[90] Ahmed Sobeih, Wei-Peng Chen, Jennifer C. Hou, Lu-Chuan Kung, Ning Li, Hyuk Lim, Hung-Ying Tyan, and Honghai Zhang. J-Sim: A simulation environment for wireless sensor networks. In ANSS '05: Proceedings of the 38th Annual Symposium on Simulation, pages 175–187, Washington, DC, USA, 2005. IEEE Computer Society.

[91] David B. Stewart. Grand challenges in mission-critical systems: Dynamically reconfigurable real-time software for flight control systems. In Workshop on Real-Time Mission-Critical Systems in conjunction with the 1999 Real-Time Systems Symposium, November 1999.

[92] David B. Stewart, Richard A. Volpe, and Pradeep K. Khosla. Design of dynamically reconfigurable real-time software using port-based objects. IEEE Transactions on Software Engineering, 23(12):759–776, December 1997.

[93] Janos Sztipanovits and Gabor Karsai. Generative programming for embedded systems. In GPCE '02: Proceedings of the 1st ACM SIGPLAN/SIGSOFT Conference on Generative Programming and Component Engineering, pages 32–49, London, UK, 2002. Springer-Verlag.

[94] Arsalan Tavakoli, David Chu, Joseph Hellerstein, Philip Levis, and Scott Shenker. Declarative sensornet architecture. In International Workshop on Wireless Sensor Network Architecture (WWSNA 2007), 25–27 April 2007.

[95] TinyOS community forum. http://www.tinyos.net.

[96] Ben Titzer, Daniel Lee, and Jens Palsberg. Avrora: Scalable sensor network simulation with precise timing. In Proceedings of IPSN '05, Fourth International Conference on Information Processing in Sensor Networks, pages 477–482, 2005.

[97] Hung-Ying Tyan. Design, realization and evaluation of a component-based compositional software architecture for network simulation. PhD thesis, The Ohio State University, 2002.

[98] András Varga. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM'2001), 6–9 June 2001.

[99] Carl A. Waldspurger and William E. Weihl. Stride scheduling: Deterministic proportional-share resource management. Technical Report MIT/LCS/TM-528, Massachusetts Institute of Technology, Cambridge, MA, USA, June 1995.

[100] Mitchell Wand. Type inference for record concatenation and multiple inheritance. In Proceedings of the Fourth Annual Symposium on Logic in Computer Science, pages 92–97, Piscataway, NJ, USA, 1989. IEEE Press.

[101] Matt Welsh and Geoff Mainland. Programming sensor networks using abstract regions. In NSDI '04: Proceedings of the 1st Conference on Symposium on Networked Systems Design and Implementation, pages 29–42, Berkeley, CA, USA, 2004. USENIX Association.

[102] Kamin Whitehouse, Cory Sharp, Eric Brewer, and David Culler. Hood: a neighborhood abstraction for sensor networks. In MobiSys '04: Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, pages 99–110, New York, NY, USA, 2004. ACM Press.

[103] Kamin Whitehouse, Feng Zhao, and Jie Liu. Semantic Streams: A framework for composable semantic interpretation of sensor data. In K. Römer, H. Karl, and F. Mattern, editors, The Third European Workshop on Wireless Sensor Networks (EWSN), volume 3868 of Lecture Notes in Computer Science, pages 5–20. Springer-Verlag Berlin Heidelberg, 2006.

[104] Wikipedia. http://www.wikipedia.org/.

[105] Yuhong Xiong. An Extensible Type System for Component-Based Design. PhD thesis, University of California, Berkeley, 2002.

[106] Xiang Zeng, Rajive Bagrodia, and Mario Gerla. GloMoSim: A library for parallel simulation of large-scale wireless networks. In Proceedings of the 12th Workshop on Parallel and Distributed Simulation – PADS '98, 26–29 May 1998.

[107] Feng Zhao and Leonidas Guibas. Wireless Sensor Networks: An Information Processing Approach. Elsevier/Morgan-Kaufmann, 2004.

[108] Feng Zhao, Jie Liu, Juan Liu, Leonidas Guibas, and James Reich. Collaborative signal and information processing: An information directed approach. Proceedings of the IEEE, 91(8):1199–1209, August 2003.

[109] Yang Zhao. A study of Click, TinyGALS and CI. http://ptolemy.eecs.berkeley.edu/~ellen_zh/click_tinygals_ci.pdf, April 2003.

[110] Andrew L. Zimdars, James Yang, and Prasanta Bose. End-to-end prototyping and validation for health management sensor networks. In Proceedings of the 2005 IEEE Aerospace Conference, pages 3820–3830, 5–12 March 2005.
