Actor-Oriented Programming for Wireless Sensor Networks

Elaine Cheong

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2007-112
http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-112.html

August 30, 2007

Copyright © 2007, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Actor-Oriented Programming for Wireless Sensor Networks by Elaine Cheong

B.S. (University of Maryland, College Park) 2000
M.S. (University of California, Berkeley) 2003

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences in the GRADUATE DIVISION of the UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge:
Professor Edward A. Lee, Chair
Professor Eric A. Brewer
Professor Paul K. Wright

Fall 2007

Actor-Oriented Programming for Wireless Sensor Networks
Copyright 2007 by Elaine Cheong

Abstract

Actor-Oriented Programming for Wireless Sensor Networks

by Elaine Cheong

Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences

University of California, Berkeley

Professor Edward A. Lee, Chair

Wireless sensor networks is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, seismic and structural monitoring, transportation, health care, and home automation. Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process.

In this dissertation, I advocate using an actor-oriented approach to designing, generating, programming, and simulating wireless sensor network applications. Actor-oriented programming provides a common high-level language that unifies the programming interface between the operating system, node-centric, middleware, and macroprogramming layers of a sensor network application.

This dissertation presents the TinyGALS (Globally Asynchronous, Locally Synchronous) programming model, which provides constructs to systematically build concurrent tasks called actors. TinyGALS is implemented in the galsC programming language, which is built on the TinyOS programming model. The galsC compiler toolsuite provides high-level type checking and code generation facilities and allows developers to deploy actor-oriented programs on actual sensor node hardware.

This dissertation then describes Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, an actor-oriented graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for TinyOS networks.

This dissertation also presents methods for using higher-order actors with various metaprogramming and generative programming techniques that enable wireless sensor network application developers to create high-level, parameterizable descriptions and automatically generate sensor network simulation scenarios from these models.

All of the tools I developed and describe in this dissertation are open-source and freely available on the web. The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks.

Professor Edward A. Lee
Dissertation Committee Chair

To the teachers who have challenged, encouraged, and guided me through life.

Café Ptolemy
Executive Chefs: Elaine Cheong and Andrew Mihal

Appetizers
Avocado Smash and Double-Decker Baked Quesadillas
Corn Pancakes with Green Onions, Crème Fraîche, Avocado, and Cherry Tomatoes

Salads
Fuyu Persimmon, Avocado & Watercress Salad with Discrete-Event Miso Dressing
Corn and Cherry Tomato Salad with Arugula

Soups
Mexican Chicken Soup
Butternut Squash Bisque
Why-the-Chicken-Crossed-the-Model Santa Fe-Tastic Tortilla Soup

Entrees
Chicken Pot Pie
Fettuccine with Concurrent Meyer Lemon Butter Sauce
Chicken Tikka Masala
Farfalle with Ptalon Pesto, Actors, and Cherry Tomatoes
Butternut Squash Lasagna
Cumin Crusted Chicken with Cotija and Mango-Garlic Sauce
Mashed Motes with Green Onion Pesto
Butternut Squash, Metacabbage & Pancetta Risotto with Basil Oil

Desserts
Marillenknödel: Austrian apricot dumplings
Zwetschgendatschi: Bavarian plum delicacy

Drinks
Plum Granita with Limoncello
Vinho do IOPorto
Lychee-flavoured Ramune (ラムネ)

Recipes courtesy of Food Network, Epicurious, Cook's Illustrated, Cooking Fresh from the Bay Area, and Greens Restaurant, as well as Backen köstlich wie noch nie.

Contents

List of Figures
List of Tables

1 Introduction
  1.1 Wireless Sensor Networks
  1.2 Actor-Oriented Programming
  1.3 Actor-Oriented Programming for Wireless Sensor Networks
  1.4 Previously Published Work

2 Background
  2.1 TinyOS
    2.1.1 NesC syntax
    2.1.2 TinyOS execution model
  2.2 Ptolemy II
    2.2.1 VisualSense
  2.3 Summary

3 TinyGALS and galsC
  3.1 The TinyGALS Programming Model and galsC Language
    3.1.1 Programming constructs and language syntax
    3.1.2 Execution model and language semantics
    3.1.3 Link model within actors
    3.1.4 Type inference and type checking
    3.1.5 Summary
  3.2 Concurrency and Determinacy Issues
    3.2.1 Concurrency
    3.2.2 Determinacy
    3.2.3 Summary
  3.3 Code Generation
    3.3.1 Links and connections
    3.3.2 Communication
    3.3.3 TinyGUYS
    3.3.4 System initialization and start of execution
    3.3.5 Scheduling
    3.3.6 Memory usage
  3.4 Example
  3.5 Summary

4 Viptos
  4.1 Design
    4.1.1 Representation of nesC components
    4.1.2 Transformation of nesC components
    4.1.3 Generation of code for target deployment
    4.1.4 Generation of code for simulation
    4.1.5 Simulation of TinyOS in Viptos
  4.2 Performance Evaluation
    4.2.1 Comparison to TOSSIM
    4.2.2 Radio
  4.3 Summary

5 Metaprogramming for Wireless Sensor Networks
  5.1 Generative Programming and Metaprogramming
  5.2 Higher-order Functions, Actors, and Components
  5.3 Ptalon
    5.3.1 A simple example
    5.3.2 Reconfiguration in Ptalon
  5.4 Specifying WSN Applications Programmatically
    5.4.1 Motivation
    5.4.2 Small World
    5.4.3 Parameter Sweep
    5.4.4 Higher-order actors
    5.4.5 Discussion
  5.5 Summary

6 Related Work
  6.1 Design, Simulation, and Deployment Environments
    6.1.1 Design and simulation environments
    6.1.2 TinyOS development and editing environments
    6.1.3 Programming and deployment environments
  6.2 TinyGALS and galsC
    6.2.1 Non-blocking
    6.2.2 MPI
    6.2.3 Port-Based Objects
    6.2.4 Click
    6.2.5 Click and Ptolemy II
    6.2.6 Timed Multitasking
  6.3 Summary

7 Conclusion

Bibliography

List of Figures

1.1 Object-oriented design vs. actor-oriented design. Source: Edward A. Lee.
1.2 WSN landscape.
2.1 Sample nesC source code.
2.2 Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom).
2.3 XML representation of the Sinewave source.
3.1 Graphical representation of the SenseTag application.
3.2 Source code for the TimerC and TimerM components.
3.3 Source code for TimerActor and SenseActor.
3.4 Source code for the SenseTag application.
3.5 A single-output, multiple-input connection.
3.6 Directed acyclic graphs (DAGs) within actors.
3.7 Type checking example.
3.8 A single interrupt.
3.9 Active system state after one interrupt.
3.10 Active system state determined by adding the active system state after one non-interleaved interrupt.
3.11 One or more interrupts where actors have delayed output.
3.12 Two events are produced at the same time.
3.13 A self-loop actor triggered by an interrupt.
3.14 Code generation for the SenseTag application.
3.15 TinyGALS scheduling algorithm.
3.16 Sensor array for object detection and reporting.
3.17 Top-level, per-node view of the object detection application.
4.1 SenseToLeds application in Viptos.
4.2 Generated MoML by nc2moml for TimerC.nc
4.3 Generated MoML by ncapp2moml for SenseToLeds.nc
4.4 Sample nesC source code.
4.5 TOSSIM scheduling algorithm.
4.6 Viptos version of TOSSIM scheduling algorithm.
4.7 Send and receive application in Viptos.
4.8 Execution time of the SenseToLeds application as a function of the number of nodes. Each simulation ran for 120.0 virtual seconds.
4.9 Multi-hop routing in Viptos.
4.10 Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 300.0 virtual seconds.
5.1 PtalonActor in Ptolemy II.
5.2 MultipleNodesMoML.ptln
5.3 MultipleNodesMoML.xml
5.4 Small World in Ptolemy II.
5.5 ParameterSweep version of Small World in Ptolemy II.
5.6 ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.
5.7 Modal model for changing parameter values of Small World model in Ptolemy II.
5.8 SDF model for changing parameter values of Small World model in Ptolemy II.
5.9 Ptalon code for SmallWorld (SmallWorld.ptln).
5.10 Ptalon version of Small World in Ptolemy II.
5.11 Excerpt of MoML code for Ptalon version of Small World.
6.1 An example Click element. Source: Eddie Kohler.
6.2 A simple Click configuration with sequence diagram. Source: Eddie Kohler.
6.3 Flowchart for Click configuration shown in Figure 6.2.
6.4 A sensor network application.
6.5 Pull processing across multiple nodes, a configuration for the application in Figure 6.4.
6.6 Click vs. TinyGALS.

List of Tables

3.1 Summary of valid types of links in TinyGALS/galsC.
3.2 Generated code for ports in galsC.
3.3 Generated code for parameters (TinyGUYS) in galsC.
4.1 Representation scheme for nesC components in Viptos.
5.1 Comparison of number of bytes between different implementations of SmallWorld.

Acknowledgments

I would like to thank Jie Liu for his invaluable guidance throughout my graduate career, without which this dissertation would not be possible. I would also like to thank Feng Zhao for giving me the opportunity to work with him and Jie at both PARC and Microsoft Research.

I would especially like to thank my advisor, Edward A. Lee, for his advice and support throughout all these years. I would like to thank my dissertation committee, Edward, Eric Brewer, and Paul Wright, for their feedback.

For their feedback, advice, and/or help with hardware and software, I would like to thank: Christopher Brooks, Adam Cataldo, Phoebus Chen, Prabal Dutta, David Gay, Jörn Janneck, Eddie Kohler, Jackie Leung, Judy Liebman, Xiaojun Liu, Andrew Mihal, Steve Neuendorffer, Roberto Passerone, John Reekie, Mary Stewart, Rob Szewczyk, Heather Taylor, Kamin Whitehouse, Yang Zhao, and the rest of the Ptolemy Group and NEST Group.

I would like to thank my undergraduate advisor, David B. Stewart, for introducing me to embedded systems and starting me on this path.


Chapter 1

Introduction

In his classic software engineering text, The Mythical Man-Month: Essays on Software Engineering [17], Frederick P. Brooks, Jr. discusses high-level languages as an essential tool for increasing programmer productivity:

  Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.

  What does a high-level language accomplish? It frees a program from much of its accidental complexity. An abstract program consists of conceptual constructs: operations, datatypes, sequences, and communication. The concrete machine program is concerned with bits, registers, conditions, branches, channels, disks, and such. To the extent that the high-level language embodies the constructs wanted in the abstract program and avoids all lower ones, it eliminates a whole level of complexity that was never inherent in the program at all.

The above passage was originally published in 1975. In the twentieth-anniversary edition (1995) of the text, Brooks elaborates further:

  Most past progress in software productivity has come from eliminating noninherent difficulties such as awkward machine languages and slow batch turnaround. There are not a lot more of these easy pickings. Radical progress is going to have to come from attacking the essential difficulties of fashioning complex conceptual constructs. The most obvious way to do this recognizes that programs are made up of conceptual chunks much larger than the individual high-level language statement—subroutines, or modules, or classes. If we can limit design and building so that we only do the putting together and parameterization of such chunks from prebuilt collections, we have radically raised the conceptual level, and eliminated the vast amounts of work and the copious opportunities for error that dwell at the individual statement level.

Although twelve years have passed since Brooks wrote the above passage, it still applies today to new high-level programming concepts. An application area where these concepts are particularly needed is that of wireless sensor networks. Building sensor networks today requires piecing together a variety of hardware and software components, each with different design methodologies and tools, making it a challenging and error-prone process. Typical networked embedded system software development may require the design and implementation of device drivers, network stack protocols, scheduler services, application-level tasks, and partitioning of tasks across multiple nodes. Little or no integration exists among the tools necessary to create these software components. In addition, these tools typically have little infrastructure for building models and interactions that are not part of their original scope or software design paradigms. This dissertation presents methods for raising the conceptual level of building wireless sensor network applications, using actor-oriented programming concepts.

1.1 Wireless Sensor Networks

Wireless sensor networks is an emerging area of embedded systems that has the potential to revolutionize our lives at home and at work, with wide-ranging applications, including environmental monitoring and conservation, manufacturing and industrial control, business asset management, seismic and structural monitoring, transportation, health care, and home automation [107]. Wireless sensor networks provide a way to create flexible, tetherless, automated data collection and monitoring systems.

A sensor node in a typical sensor network has a battery, a microprocessor, and a small amount of memory for signal processing and task scheduling. Each node is equipped with one or more sensing devices, such as sensors for visible or infrared light, humidity, temperature, acceleration or vibration, pH, electrical resistance, changing magnetic field, acoustic microphone arrays, and/or video or still cameras. Each sensor node communicates wirelessly with a few other neighboring nodes within its radio communication range [107]. A wireless sensor network may also be augmented with a higher tier of more powerful, wired nodes with greater network capacity and computation power, as in the Tenet architecture [36]. Nodes in this higher tier are sometimes called masters [36] or microservers [67].

Unlike traditional networked systems, a sensor network is constrained by finite on-board battery power and limited network communication bandwidth. In addition, sensor networks are spatially aware and are more closely linked to geographic location and the physical environment than centralized systems.

1.2 Actor-Oriented Programming

Actor-oriented programming is a high-level programming concept that can increase software productivity. The actor model was originally proposed by Carl Hewitt, though the meaning of the term has evolved over time. A brief history of actor research up to 1993 is summarized by Agha [3] and excerpted and extended here.

Hewitt proposes actors as an approach to modeling intelligence as a society of communicating knowledge-based problem-solving experts. One can view each of the experts as a society that can be further decomposed until reaching the primitive actors of the system. Hewitt [46] showed that control structures can be represented as patterns of message passing between simple actors with a conditional construct but no local state.

Gul Agha extended the notion of actor to include history-sensitive behavior necessary for shared, mutable data objects [1]. He intended actors to be used as a paradigm for exploiting parallelism on massively parallel architectures and as a suitable language for concurrency [2]. Hewitt and Agha view actors as a universal concept: in his model, everything in the system is an actor that responds to messages. These actors are objects that interact in a purely local way by sending messages to one another, and Agha assumes that each actor encapsulates a thread of control.

Edward A. Lee generalized the notion of actors and applied it to software design for concurrent systems [49]. This dissertation uses Lee's concept of actors. Unlike Agha's actors, Lee's actors are not required to encapsulate a thread of control [60]. Like Neuendorffer [74], I view actor-oriented programming as an approach to system-level design. Instead of object-oriented design, which emphasizes inheritance and procedural interfaces, he suggests the term actor-oriented design as a refactored software architecture, where instead of objects, software components are parameterized actors with ports. Ports and parameters define the interface of an actor. A port represents an interaction with other actors, but unlike a method, it does not have call-return semantics. The precise semantics depends on the model of computation, but conceptually it represents signaling between components. Lee distinguishes data tokens, which encapsulate data and do not interact with one another, from actors, which exchange and process data [74]. Actors are concurrent dataflow-oriented components that specify behavior abstractly without relying on low-level implementation constructs such as function calls, threads, or distributed computing infrastructure [74].

Actor-oriented programming and object-oriented programming are duals of each other, similar to Lauer and Needham's concept of the duality of message-oriented systems and procedure-oriented systems [58]. In traditional object-oriented programming, what flows through an object is sequential control. In other words, things happen to objects. In actor-oriented programming, what flows through an object is evolving data. In other words, actors make things happen (see Figure 1.1).

Lauer and Needham explain that though "no real system precisely agrees with either model in all respects," "most modern operating systems can be usefully classified using them." Some systems are implemented in a style which is very close in spirit to one model or the other. Other systems are able to be partitioned into subsystems, each of which corresponds to one of the models, and which are coupled by explicit interface mechanisms. They conclude that "the considerations for choosing which model to adopt in a given system are not found in the applications which that system is meant to support." Instead, "they lie in the substrate," i.e., the machine architecture and/or programming environment, on which the process and synchronization facilities are implemented. "The factors and design decisions of the system upon which the process and synchronization facilities are built are the things which make one or the other style more attractive or more tedious." Other constraints are those "imposed by the machine architecture and hardware," such as the "organization of real and virtual memory, the size of the stateword which must be saved on every context switch, the arrangement of peripheral devices and interrupts, the ease with which scheduling and dispatching can be done, and the architecture of the instruction set and the programmable registers." They suggest that a message-oriented (actor-oriented) style is best when it is easy to allocate message blocks and queue messages but difficult to build a protected procedure call mechanism.

Actor-oriented programming and other message-oriented systems are well-suited to embedded systems and other highly concurrent systems, where a variety of peripheral devices and interrupts must be accessed frequently, with a fast response rate, and memory space is at a premium (and memory protection is often not provided in the underlying infrastructure). Actor-oriented programming can be combined with object-oriented programming and other procedure-oriented systems in a structured way to achieve the best of both worlds.
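To ground this duality in code, here is a small, self-contained C sketch of my own (it is not from Lauer and Needham, nor from this dissertation's toolchain; all names are hypothetical) contrasting the two styles for the same operation. A procedure-oriented client blocks in a call; a message-oriented client deposits a request in a queue, and the serving component later reacts with a reply message.

    /* Illustrative sketch only: the two Lauer-Needham styles side by side. */
    #include <stdio.h>

    /* Procedure-oriented style: control transfers into the component and back. */
    static int read_sensor(void) { return 42; }      /* stands in for a blocking read */

    /* Message-oriented (actor-oriented) style: data flows through a queue. */
    typedef struct { int is_reply; int payload; } msg_t;

    #define QLEN 8
    static msg_t q[QLEN];
    static int head = 0, tail = 0;

    static void enqueue(msg_t m) { q[tail++ % QLEN] = m; }  /* non-blocking put */
    static int  dequeue(msg_t *m) {
        if (head == tail) return 0;                          /* queue empty */
        *m = q[head++ % QLEN];
        return 1;
    }

    int main(void) {
        /* Procedure-oriented: the caller blocks awaiting the return value. */
        printf("direct call returned %d\n", read_sensor());

        /* Message-oriented: deposit a request message and continue; the serving
           component reacts to it later and answers with another message. */
        msg_t m, req = { 0, 0 };
        enqueue(req);
        while (dequeue(&m)) {
            if (!m.is_reply) {
                msg_t reply = { 1, read_sensor() };
                enqueue(reply);                              /* server's reaction */
            } else {
                printf("reply message carried %d\n", m.payload);
            }
        }
        return 0;
    }

The first style asks the substrate for cheap, protected procedure calls; the second asks for cheap message blocks and queues, which is exactly the trade-off Lauer and Needham describe.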

Figure 1.1: Object-oriented design vs. actor-oriented design. Source: Edward A. Lee.

1.3 Actor-Oriented Programming for Wireless Sensor Networks

Wireless sensor networks are highly concurrent systems, with concurrency at many different levels. In this dissertation, I advocate using an actor-oriented approach to designing, generating, programming, and simulating wireless sensor network applications.

Existing approaches to building wireless sensor networks can be divided into four layers, as shown in the vertical axis of Figure 1.2. The operating system approach forms the bottom-most layer, whose focus is to provide basic programming abstractions to allow a program to run on the sensor node hardware. Examples include TinyOS [48], Contiki [27], MantisOS [13], NutOS [11], SOS [40], and Linux. Instruction-level emulation lies below the operating system approach and is not shown in the figure.1

The node-centric approach forms the next layer above the operating system layer. Software in the node-centric layer runs on a single node on top of the operating system, and more abstract programming models are used, which makes programming easier for the user. Examples include Maté [64], Token Machines [75], SNACK [37], and the Object State Model [53].

1 Many of the examples shown in Figure 1.2 rely on either simulation with a combination of TOSSIM and gdb, or emulation for the Atmel AVR microcontroller instruction set.

.2: WSN landscape.6 Figure 1.

The middleware approach forms the third layer, which begins to include programming abstractions that allow the user to address multiple nodes. Examples include directed diffusion [50], abstract regions [101], Hood [102], DHT (Distributed Hash Table), IDSQ (information-driven sensor querying) [108], and embedded web services [67].

The macroprogramming approach forms the top layer, which allows the user to create an application by programming the wireless sensor network as a whole, rather than programming individual nodes separately. Macroprogramming is also known as programming the ensemble. Examples include TinyDB [70], Kairos [38], Agilla [30], Regiment [76], and actorNet [56].

The process of building a wireless sensor network can be divided into three stages of development: design, simulation, and deployment. Most existing work focuses on only one stage of development, rather than an integrated approach. Simulation tools that fall somewhere between the middleware and node-centric layers include ns-2, J-Sim, OMNeT++, OPNET, Prowler, SensorSim, and Em*. These tools, with the exception of Em*, are usually stand-alone and not designed for hardware deployment. PIECES (Programming and Interaction Environment for Collaborative Embedded Systems) [68] is a higher-level simulation tool implemented in a mixed Java-Matlab environment, though it does not translate easily to actual deployment. Other tools are programming models or languages that focus solely on design, and not simulation, including Semantic Streams [103], DSN (Declarative Sensornet) [94], and UML (Unified Modeling Language). Many of the tools shown in Figure 1.2 rely on the TOSSIM TinyOS simulator for operating system-level simulation and testing, and TinyViz for visualization.2

Unfortunately, existing development tools are disjoint and difficult to integrate. However, wireless sensor networks are often deployed in resource-constrained environments. These constraints dictate that sensor network problems are best approached in a holistic manner, by jointly considering the physical, networking, and application layers and making major design trade-offs across the layers [107]. In this dissertation, actor-oriented programming provides a common high-level language that unifies the programming interface across the four application layers and between the different stages of development, from design to simulation and testing, and to deployment. The developer can choose the model of computation, or communication model between actors, that best fits the target application.

The goal of this work is to create integrated tools and programming models for networked embedded application developers to model and simulate their algorithms and quickly transition to testing their software on real hardware in the field, while allowing them to use the model of computation most appropriate for each part of the system. I use TinyOS as an interface for the bottom-most layer, the operating system approach. Chapter 2 introduces the reader to TinyOS, a runtime environment for wireless sensor nodes. Chapter 2 also introduces Ptolemy II, a Java-based software framework with a graphical user interface, which allows construction of actor-oriented models of computation. Chapter 3 describes an actor-oriented, node-centric model called TinyGALS for programming individual sensor nodes. Chapter 4 introduces an actor-oriented modeling and design environment for wireless sensor networks, called Viptos. This tool encompasses multiple layers and lies above the operating system approach. Chapter 5 describes various techniques for using higher-order actors to generate multiple simulation scenarios for design and test of wireless sensor network applications. Chapter 6 discusses related work, and Chapter 7 concludes this dissertation.

2 Chapter 6 contains a more detailed discussion of these simulation tools.

1.4 Previously Published Work

Some of the material in this dissertation was previously published in technical reports or conference proceedings. A summary of how these papers have been incorporated into this dissertation follows.

TinyGALS: A Programming Model for Event-Driven Embedded Systems [23] was the first paper published on this topic, and it was extended into a master's report, Design and Implementation of TinyGALS: A Programming Model for Event-Driven Embedded Systems [20]. The language implemented for the programming model described in these two papers was redesigned and reimplemented as part of the nesC compiler and described in galsC: A Language for Event-Driven Embedded Systems [24], which was later condensed, revised, updated, and published under the same title [25]. These four publications are combined and updated to form the basis of Chapter 3 and part of Chapter 6.

Viptos: A Graphical Development and Simulation Environment for TinyOS-based Wireless Sensor Networks [22] was the first paper published on this topic, and it was revised, updated, and extended as Joint Modeling and Design of Wireless Networks and Sensor Node Software [21]. These two publications are combined and updated to form the basis of Chapter 4 and part of Chapter 6.


Chapter 2

Background
In this chapter, I present TinyOS, one of the most popular software toolsuites in the wireless sensor network research and development community. I also present Ptolemy II, the current version of one of the most influential actor-oriented design frameworks. Together, these tools form the background knowledge required for understanding the implementation of the tools and techniques presented later in this dissertation.

2.1 TinyOS
TinyOS [47, 48] is an open-source runtime environment designed for sensor network nodes known as motes. TinyOS has a large user base—over 500 research groups and companies use TinyOS on the Berkeley/Crossbow motes. It has been ported to over a dozen platforms and numerous sensor boards, and new releases see over 10,000 downloads. TinyOS differs from traditional operating system models in that events drive the behavior of the system. Using this type of execution, battery-operated nodes can preserve energy by entering a sleep mode when no interesting events are happening. According to the TinyOS website [95], "TinyOS's event-driven execution model enables fine-grained power management yet allows the scheduling flexibility made necessary by the unpredictable nature of wireless communication and physical world interfaces." In this section, I present the details of the nesC syntax and the TinyOS execution model. Note that in this dissertation, I focus on TinyOS 1.x. TinyOS 2.x is a rewritten implementation of TinyOS 1.x that provides users with a cleaner interface. All material presented in this dissertation can easily be transferred to TinyOS 2.x.
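To illustrate the idea (this is my own sketch in plain C, not TinyOS source code; all names here are hypothetical), an event-driven runtime can be pictured as a dispatcher that services pending events and otherwise puts the processor into a low-power sleep state:

    /* Illustrative event-driven dispatch loop; not actual TinyOS code. */
    #include <stdbool.h>
    #include <stdio.h>

    static int pending = 2;                    /* stands in for interrupt flags */
    static bool events_pending(void)  { return pending > 0; }
    static void dispatch_event(void)  { printf("handle event %d\n", pending--); }
    static void sleep_until_interrupt(void) {
        /* on a real mote: enter an MCU sleep mode; an interrupt wakes the loop */
    }

    int main(void) {
        for (int i = 0; i < 3; i++) {          /* a real node loops forever     */
            while (events_pending())
                dispatch_event();              /* handlers run to completion    */
            sleep_until_interrupt();           /* no busy waiting: save battery */
        }
        return 0;
    }

Because nothing polls while the node is idle, the processor spends most of its life asleep, which is the source of the power savings described above.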


2.1.1 NesC syntax

TinyOS provides a library of reusable software components written in nesC, an extension to the C programming language. A TinyOS application connects these components using a wiring specification that is independent of the component implementation. Some TinyOS components are thin wrappers around hardware, though most are software modules which process data. The distinction is invisible to the developer. Decomposing different OS services into separate components allows unused services to be excluded from the application. Figure 2.1(a) shows a TinyOS program called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. The TinyOS component library includes those “wired” together in SenseToLeds: Main, SenseToInt (shown in Figure 2.1(b)), IntToLeds, TimerC, and DemoSensorC. A nesC component may expose a set of interfaces. Each interface is a set of methods. A method may be either an event or a command, where an event is usually called “upwards” from a hardware interrupt handler, and a command is usually called “downwards” from the application code. A nesC component provides methods that it implements, and uses methods that are implemented by other components. A nesC component is either a configuration that contains a wiring of other components, or a module that contains an implementation of its interface methods. NesC interfaces may also be parameterized to provide multiple instances of the same interface. In Figure 2.1(a), SenseToLeds is a configuration that exposes no interface methods. The TimerC.Timer interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC. If another component connects to the TimerC.Timer interface, it connects to a different instance. Each timer can be initialized with different periods.
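To make the interface concept concrete, the sketch below shows what a nesC interface looks like. It is modeled on the TinyOS 1.x Timer interface, though the exact signatures should be treated as approximate rather than authoritative. A component that provides this interface implements the commands; a component that uses it implements the event:

    interface Timer {
      // Commands are called "downwards" from application code.
      command result_t start(char type, uint32_t interval);
      command result_t stop();
      // Events are signaled "upwards," e.g., from a hardware interrupt handler.
      event result_t fired();
    }

In Figure 2.1(a), the wiring SenseToInt.Timer -> TimerC.Timer[unique("Timer")] binds SenseToInt's used Timer interface to a fresh instance of TimerC's parameterized Timer interface, so each client gets its own independently configurable timer.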

2.1.2 TinyOS execution model

TinyOS contains a single thread of control managed by the scheduler, which may be interrupted by hardware events. Component methods encapsulate hardware interrupt handlers. These methods may transfer the flow of control to another component by calling a uses method. Computation performed in a sequence of method calls must be short, or it may delay the processing of other events. There are two sources of concurrency in TinyOS: tasks and events. Tasks are a deferred computation mechanism. A long-running computation can be encapsulated in a task, which a component method posts to the scheduler task queue. The post operation returns immediately, deferring the computation until the scheduler executes the task later. The TinyOS scheduler processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks run to completion and do not preempt each other.


configuration SenseToLeds {
}
implementation {
  components Main, SenseToInt, IntToLeds,
    TimerC, DemoSensorC as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

(a)

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
}
implementation { ... }

(b)

Figure 2.1: Sample nesC source code.

Events signify either an event from the environment or the completion of a split-phase operation. Split-phase operations are long-latency operations where operation request and completion are separate functions. Commands are typically requests to execute an operation. If the operation is split-phase, the command returns immediately and completion is signaled later with an event; non-split-phase operations do not have completion events. Events also run to completion, but they may preempt the execution of a task or another event. Resource contention is typically handled through explicit rejection of concurrent requests. Because tasks execute non-preemptively, TinyOS has no blocking operations. TinyOS execution is ultimately driven by events representing hardware interrupts.
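A rough C sketch of this two-level execution model follows. It is my own illustration, not the actual TinyOS scheduler source: interrupt-level code does minimal work and posts a task into a bounded FIFO queue, and the main loop drains the queue, running each task to completion.

    /* Illustrative FIFO task scheduler in plain C; simplified and hypothetical. */
    #include <stdio.h>

    typedef void (*task_t)(void);

    #define MAX_TASKS 8
    static task_t task_queue[MAX_TASKS];
    static int head = 0, count = 0;

    /* post() is what a command/event handler calls; it returns immediately. */
    static int post(task_t t) {
        if (count == MAX_TASKS) return 0;          /* queue full: post rejected */
        task_queue[(head + count++) % MAX_TASKS] = t;
        return 1;
    }

    static void long_computation(void) { printf("deferred work runs now\n"); }

    /* Stand-in for an interrupt handler: do minimal work, defer the rest. */
    static void on_adc_ready(void) { post(long_computation); }

    int main(void) {
        on_adc_ready();                            /* an "interrupt" arrives     */
        while (count > 0) {                        /* scheduler loop: FIFO order */
            task_t t = task_queue[head];
            head = (head + 1) % MAX_TASKS;
            count--;
            t();                                   /* runs to completion         */
        }
        return 0;                                  /* a real scheduler sleeps    */
    }

The rejection-on-full behavior of post() in this sketch mirrors the "explicit rejection of concurrent requests" style of resource contention handling described above.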

2.2 Ptolemy II
Ptolemy II, a modeling and design framework for concurrent systems, and VisualSense, an extension to Ptolemy II that supports modeling and simulation of wireless sensor networks, form the basis of the tools described in this dissertation. In this section, I excerpt and summarize information from Hylands, et al. [49] and Baldwin, et al. [8].

The Ptolemy Project conducts foundational and applied research in software-based design techniques for embedded systems.

It studies heterogeneous modeling, simulation, and design of concurrent systems. The focus is on embedded systems, particularly those that mix technologies including, for example, analog and digital electronics, hardware and software, and electronics and mechanical devices. The focus is also on systems that are complex in the sense that they mix widely different operations, such as networking, signal processing, feedback control, mode changes, sequential decision making, and user interfaces. It serves as a laboratory for experimenting with design techniques. Ptolemy II is the current software infrastructure of the Ptolemy Project and is published freely in open-source form.

Most, but not all, models of computation in Ptolemy II support actor-oriented design. This contrasts with, and complements, object-oriented design by emphasizing concurrency and communication between components. Components called actors execute and communicate with other actors in a model, as illustrated in Figure 2.2.1 Actors, like objects, have a well-defined component interface. This interface abstracts the internal state and behavior of an actor, and restricts how an actor interacts with its environment. The interface includes ports that represent points of communication for an actor, and parameters which are used to configure the operation of an actor. Often, parameter values are part of the a priori configuration of an actor and do not change when a model is executed. The "port/parameters" shown in Figure 2.2 function as both ports and parameters.

Central to actor-oriented design are the communication channels that pass data from one port to another according to some messaging scheme.2 Whereas with object-oriented design, components interact primarily by transferring control through method calls, in actor-oriented design, they interact by sending messages through channels.

A set of rules that govern the interaction of components is called the semantics of the model of computation. A model of computation may have more than one semantics, in that there might be distinct sets of rules that impose identical constraints on behavior. If a model describes a mechanical system, then the model of computation may literally be the laws of physics. More commonly, however, the model of computation is a set of rules that are more abstract, and provide a framework within which a designer builds models. Executable models are constructed under a model of computation. A director, which is a component specific to the model of computation used, controls the execution of a model.

1 These components are not the same as TinyOS/nesC components, though Chapter 4 explores the relationship between Ptolemy II components and TinyOS/nesC components.
2 These channels may be wired or wireless. The next section discusses wireless channels in more detail.

Figure 2.2: Illustration of an actor-oriented model (top) in Ptolemy II and its hierarchical abstraction (bottom).

The use of channels to mediate communication implies that actors interact only with the channels to which they are connected and not directly with other actors.

Models, like actors, may also define an external interface. The interface of a model is called its hierarchical abstraction. This interface consists of external ports and external parameters, which are distinct from the ports and parameters of the individual actors in the model. The external ports of a model can be connected by channels to other external ports of the model or to the ports of actors that compose the model. External parameters of a model can be used to determine the values of the parameters of actors inside the model.

Taken together, the concepts of models, actors, parameters, ports, and channels describe the abstract syntax of actor-oriented design. A relation is an object used to represent the (wired) interconnection. This syntax can be represented concretely in several ways: graphically, such as in a bubble-and-arc or block-and-arrow diagram, in XML (Extensible Markup Language), such as in Figure 2.3, or in a program designed to a specific API (Application Programming Interface), such as in SystemC. Ptolemy II offers all three alternatives.

It is important to realize that the syntactic structure of an actor-oriented design says little about its semantics. The semantics is largely orthogonal to the syntax and is determined by a model of computation. The model of computation might give operational rules for executing a model. These rules determine when actors perform internal computation, update their internal state, and perform external communication. The model of computation also defines the nature of communication between components.

2.2.1 VisualSense

VisualSense [8] is a modeling and simulation framework for wireless sensor networks that builds on Ptolemy II. This framework supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. The software architecture consists of a set of base classes for defining wireless channels and sensor nodes, a library of subclasses that provide specific wireless channel models and node models, and an extensible visualization framework. Custom wireless channels can be defined by subclassing the WirelessChannel base class and by attaching functionality defined in Ptolemy II models. Custom nodes can be defined by subclassing the base classes and defining the behavior in Java or by creating composite models using any of several Ptolemy II modeling environments.

To support this style of modeling, VisualSense uses a specialization of the discrete-event (DE) domain of Ptolemy II. The DE domain of Ptolemy II [15] provides execution semantics where interaction between components occurs via events with time stamps. A sophisticated calendar-queue scheduler is used to efficiently process events in chronological order.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE class PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<class name="Sinewave" extends="ptolemy.actor.TypedCompositeActor">
  <property name="samplingFrequency" class="ptolemy.data.expr.Parameter" value="8000.0"/>
  <property name="SDF Director" class="ptolemy.domains.sdf.kernel.SDFDirector"/>
  <property name="frequency" class="ptolemy.actor.parameters.PortParameter" value="440.0"/>
  <property name="phase" class="ptolemy.actor.parameters.PortParameter" value="0.0"/>
  <port name="frequency" class="ptolemy.actor.parameters.ParameterPort">
    <property name="input"/>
  </port>
  <port name="phase" class="ptolemy.actor.parameters.ParameterPort">
    <property name="input"/>
  </port>
  <port name="output" class="ptolemy.actor.TypedIOPort">
    <property name="output"/>
  </port>
  <entity name="Ramp" class="ptolemy.actor.lib.Ramp">
    <property name="firingCountLimit" class="ptolemy.data.expr.Parameter" value="0"/>
    <property name="init" class="ptolemy.data.expr.Parameter" value="0"/>
    <property name="step" class="ptolemy.actor.parameters.PortParameter"
        value="(frequency*2*PI/samplingFrequency)"/>
  </entity>
  <entity name="TrigFunction" class="ptolemy.actor.lib.TrigFunction"/>
  <entity name="Const" class="ptolemy.actor.lib.Const">
    <property name="value" class="ptolemy.data.expr.Parameter" value="phase"/>
  </entity>
  <entity name="AddSubtract" class="ptolemy.actor.lib.AddSubtract"/>
  <relation name="relation" class="ptolemy.actor.TypedIORelation"/>
  <relation name="relation2" class="ptolemy.actor.TypedIORelation"/>
  <relation name="relation3" class="ptolemy.actor.TypedIORelation"/>
  <relation name="relation4" class="ptolemy.actor.TypedIORelation"/>
  <link port="output" relation="relation3"/>
  <link port="Ramp.output" relation="relation"/>
  <link port="Const.output" relation="relation2"/>
  <link port="AddSubtract.plus" relation="relation"/>
  <link port="AddSubtract.plus" relation="relation2"/>
  <link port="AddSubtract.output" relation="relation4"/>
  <link port="TrigFunction.input" relation="relation4"/>
  <link port="TrigFunction.output" relation="relation3"/>
</class>

Figure 2.3: XML representation of the Sinewave source.

VisualSense is a subclass of the DE modeling framework in Ptolemy II that is specifically intended to model sensor networks. The most straightforward uses of the DE domain in Ptolemy II are similar to other discrete-event modeling frameworks such as ns [77], OPNET [78], and VHDL. Components (which are called actors) have ports, and the ports are interconnected to model the communication topology. The DE domain has a formal semantics that ensures determinate execution of deterministic models [59], although stochastic models for Monte Carlo simulation are also well supported. The results are predictable and consistent. The precision in the semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling frameworks.

In VisualSense, it removes the need for explicit connections between ports, and instead associates ports with wireless channels by name (e.g., "RadioChannel"). Connectivity can then be determined on the basis of the physical locations of the components. The algorithm for determining connectivity is itself encapsulated in a component as a wireless channel model, and hence can be developed by the model builder. The DE domain in Ptolemy II supports models with dynamically changing interconnection topologies. Changes in connectivity are treated as mutations of the model structure, for example by adding, deleting, or moving actors, or changing the connectivity between actors. The software is carefully architected to support multithreaded access to this mutation capability. In particular, one thread can be executing a simulation of the model while another changes the structure of the model. In VisualSense, sensor nodes themselves can be modeled in Java, or more interestingly, using more conventional DE models (as block diagrams) or other Ptolemy II models (such as dataflow models, finite-state machines, or continuous-time models).

Visual depictions of systems can help to offset the increased complexity that is introduced by heterogeneous modeling, and to lend insight into the behavior of models. Ptolemy II provides a visual editor for constructing DE models as block diagrams. Ptolemy II and VisualSense permit customized icons for components in a model. For example, a sensor node can have as an icon a translucent circle that represents (roughly or exactly) its transmission range.

Another feature of Ptolemy II and VisualSense is a sophisticated type system [105]. In this type system, actors, parameters, and ports can all impose constraints on types. The type system in Ptolemy II includes a type constraint for each connection in a block diagram, and a type resolution algorithm identifies the most specific types that satisfy all the constraints. However, these connections do not represent all the type constraints. In particular, in wireless models, every actor that sends data to a wireless channel requires that every recipient from that channel be able to accept that data type. VisualSense imposes this constraint in the WirelessChannel base class, so unless a particular model builder needs more sophisticated constraints, the model builder does not need to specify particular data types in the model. They are inferred from the ultimate sources of the data and propagated throughout the model.

2.3 Summary

This chapter summarized background information on TinyOS and Ptolemy II, so that the reader can understand the underlying implementation of the tools and techniques presented in the following chapters.


in terms of CPU speed and memory size. most of these high-level languages are designed for writing sequential programs to run on an operating system and fail to handle concurrency intrinsically. and conserving power. to host a full-scale modern operating system. maintaining consistent state across multiple tasks. avoiding concurrency errors. do not scale with the growing complexity of today’s applications. components communicate synchronously via method calls. Despite the fact that “high-level” languages such as C and C++ have recently replaced assembly language as the dominant embedded software programming languages. these actors communicate with each other asynchronously via message passing. . The TinyGALS (Globally Asynchronous and Locally Synchronous) programming model [23] aims to fill this gap by providing language constructs to systematically build concurrent tasks called actors.19 Chapter 3 TinyGALS and galsC Networked embedded software designers face issues such as managing computation as well as communication. since a node can go into a sleep mode to preserve energy when no interesting events are happening. where conceptually concurrent components are activated by incoming signals (or events). inherited from writing device drivers and optimizing assembly code to achieve a fast response and small memory footprint. At the application level. Within each actor. These tasks become even more challenging when the resources of the hardware platforms are too limited. Traditional technologies for developing embedded software. For many networked embedded systems. handling irregular interrupts. there is a fundamental gap between this event-driven execution model and sequential programming languages. Event-driven execution is particularly suitable for untethered devices such as sensor network nodes. as in most imperative languages. Event-driven embedded software is similar to hardware.

very little time compared to the intervals between successive arrivals of input signals). the TinyGALS programming model is globally asynchronous and locally synchronous in terms of transfer of the flow of control. yet components can quickly read their values. This language has a type system that spans synchronous and asynchronous communication boundaries. In the system modeling community. In this programming model. concurrent tasks in TinyOS are not exposed as part of the galsC component interface. thus causing confusion.” and “globally asynchronous. is consistent with the usage of these terms in distributed programming paradigms [72]. and the galsC compiler generates executable code. However. where synchronous refers to circuits that are driven by a common clock [51]. locally synchronous (GALS)” mean different things to different communities. galsC [24. Lack of explicit management of concurrency forces TinyOS component developers to manage concurrency by themselves (locking and unlocking semaphores). at a high level. In order to incorporate shared variable semantics where only the latest value matters. The TinyGALS notion of synchronous and asynchronous. application developers have precise control over the concurrency in the system. synchronous means that the software flow of control transfers immediately to another component and the calling code blocks awaiting return. execution of the other component is decoupled. including an application-specific operating system scheduler. TinyOS/nesC components provide an interface abstraction that is consistent with synchronous communication via method calls. which makes TinyOS applications difficult to develop. and they can develop software components without the burden of thinking about multiple threads. Thus. in practice. however.” “asynchronous. In this chapter. such as race conditions. synchronous often refers to computational steps and communication (propagation of computed signal values) that take no time (or. galsC takes advantage of the nesC specification for TinyOS 1. The galsC language provides basic concurrency constructs.20 The terms “synchronous. from high-level specifications.x. The circuit and processor design communities use these terms for synchronous and asynchronous circuits. a set of guarded yet synchronous variables (called TinyGUYS) is provided at the system level for actors to exchange global information “lazily. This generative approach allows further analysis of concurrency problems.” Access to these variables is thread-safe. control eventually returns to the calling code. 25] is a language that implements the TinyGALS programming model. Automatically generated code also reduces implementation and . GALS then refers to a modeling paradigm that uses events and handshaking to integrate subsystems that share a common tick (an abstract notion of an instant in time) [10]. Steps do not take infinite time. Asynchronous means that the software flow of control does not transfer immediately to another component.

In a reactive, event-driven system, most of the processor time is spent waiting for an external trigger or event. A reasonably small amount of additional code to enhance software modularity will not greatly affect the performance of the system. At the same time, the TinyGALS/galsC framework can greatly improve software productivity and encourage component reuse, since the developer does not need to reimplement standard constructs (e.g., queues, functions, communication ports, and guards on variables). Components in the TinyGALS model are entirely sequential, and they are easy to develop and backwards compatible with most legacy software. TinyGALS programs do not rely on the existence of an operating system. Instead, the galsC compiler generates the scheduling framework as part of the application. The galsC compiler and toolsuite is built on the nesC 1.1 compiler and toolsuite for the wireless sensor network nodes known as the Berkeley motes.

The design of TinyGALS is influenced by the trend of introducing formal concurrency models in embedded software. In particular, the port-based object (PBO) model [92] has a global shared variable space mediating component interaction. Various dataflow models [73] use FIFO queues to separate flow of control. To some extent, synchronous languages try to compile away concurrent executions based on the synchronous (zero-time execution) assumption [39]. The POLIS codesign approach [7] uses an event-driven model for both hardware and software execution. As an actor-oriented, high-level programming language, TinyGALS is closer to system-level hardware/software codesign languages, such as SystemC [12] and VCC [57], than embedded software languages such as nesC. The TinyGALS approach differs from coordination models like those discussed above in that it allows designers to directly control the concurrent execution and sizes of buffers between asynchronous actors. When it is not possible to compile away concurrency, it uses a thread-safe global data space to store messages that do not trigger reactions.

The remainder of this chapter is organized as follows. Section 3.1 describes the TinyGALS programming model and galsC language. Section 3.2 discusses concurrency and determinacy issues in TinyGALS programs. Section 3.3 explains a code generation technique based on the two-level execution hierarchy and a system-level scheduler. Section 3.4 describes a sample application implemented in galsC. Section 3.5 summarizes this chapter.

3.1 The TinyGALS Programming Model and galsC Language

This section uses a simple sensing application, shown in Figure 3.1, to illustrate the TinyGALS programming model and galsC syntax and semantics. In this example, a hardware clock triggers the system to update a time tick counter. A downsampled clock signal triggers the system to read the light intensity level from a photoresistor at a lower rate. Reading the sensor may take time; the system tags the resulting sensor value with the latest value of the counter and sends it downstream for further processing.

[Figure 3.1: Graphical representation of the SenseTag application.]

This section presents the abstract TinyGALS notation, as well as the concrete galsC syntax, for each construct. Section 3.1.1 introduces the basic constructs in the TinyGALS programming model and the syntax of the galsC programming language. Section 3.1.2 explains the semantics of TinyGALS and galsC. Section 3.1.3 describes valid links in TinyGALS/galsC. Section 3.1.4 discusses type inference and checking in galsC.

3.1.1 Programming constructs and language syntax

There are three basic constructs in TinyGALS and galsC: components, actors, and applications.

TinyGALS Components

Components are the most basic elements of a TinyGALS program. A TinyGALS component C is a 5-tuple:

    C = (PROVIDES_C, USES_C, LINKS_C, COMPONENTS_C, V_C),    (3.1)

where PROVIDES_C and USES_C are the sets of methods that constitute the interface of C; LINKS_C is the set of relations among the interface methods of the components (including that of C); COMPONENTS_C is the set of components that form C; and V_C is the set of internal variables that carry the state of C from one invocation of an interface method of C to another. A component that provides an interface (in PROVIDES_C) contains an implementation of the interface method(s), whereas a component that uses, or requires, an interface (in USES_C) expects another component to implement the interface.

Components in galsC are written in the nesC programming language. Syntactically, a component is like an object in most object-oriented programming languages, but with explicit definition of the external methods it uses. Thus, a component is defined in two parts: an interface definition and an implementation. A component is either a module or a configuration. The implementation of a module contains executable code, whereas the implementation of a configuration only contains a list of components and the links between their interface methods.

Figure 3.2 shows the source code for the TimerC configuration used in Figure 3.1. TimerC contains a module named TimerM that implements the provided interface methods.

    configuration TimerC {
      provides interface Timer[uint8_t id];
      provides interface StdControl;
    }
    implementation {
      components TimerM, ClockC;
      Timer = TimerM.Timer;
      StdControl = TimerM.StdControl;
      TimerM.Clock -> ClockC.Clock;
    }

    module TimerM {
      provides interface Timer[uint8_t id];
      provides interface StdControl;
      uses interface Clock;
    }
    implementation {
      uint32_t mState;   // Each bit represents a timer state.
      uint8_t mScale, mInterval;

      command result_t StdControl.init() {
        mState = 0;
        mScale = 3;
        mInterval = 230;
        return call Clock.setRate(mInterval, mScale);
      }
      ...
    }

Figure 3.2: Source code for the TimerC and TimerM components.

Using the tuple notation given in Equation 3.1, the TimerC component can be defined as¹

    C = (PROVIDES_C = {Timer, StdControl},
         USES_C = ∅,
         LINKS_C = {(Timer, TimerM.Timer), (StdControl, TimerM.StdControl), (TimerM.Clock, ClockC.Clock)},
         COMPONENTS_C = {TimerM, ClockC},
         V_C = ∅).

¹The interface keyword in nesC refers to a set of methods. For brevity, the TinyGALS notation used in this chapter only lists the name of a given interface, rather than the individual methods in the interface. So, the Timer interface refers to the set containing Timer.start(char, uint32_t), Timer.stop(), and Timer.fired(), and the StdControl interface refers to the set containing StdControl.init(), StdControl.start(), and StdControl.stop(). NesC allows the shorthand notation of linking two interfaces of the same type, which means that each of the individual methods is linked.

TinyGALS Actors

Actors are the major building blocks of a TinyGALS program, encompassing one or more TinyGALS components. Actors are different from components. The interface of an actor consists of a set of input and/or output ports and a set of parameters. A TinyGALS actor R is a 6-tuple:

    R = (INPORTS_R, OUTPORTS_R, PARAMETERS_R, LINKS_R, COMPONENTS_R, INIT_R),    (3.2)

where INPORTS_R and OUTPORTS_R are the sets that specify the input ports and output ports of R; PARAMETERS_R is the set of external variables: global variables that can be both read and written²; LINKS_R is the set that specifies the relations among the interface methods of the components (PROVIDES_C and USES_C in Equation 3.1), the input and output ports of R (INPORTS_R and OUTPORTS_R), and the parameters of R (PARAMETERS_R); COMPONENTS_R is the set of components that form the actor; and INIT_R is the list of initialization methods that belong to the components in COMPONENTS_R.

Note that INPORTS_R and OUTPORTS_R of an actor R are not the same as PROVIDES_C and USES_C of a component C: PROVIDES_C and USES_C are executable, whereas INPORTS_R and OUTPORTS_R are not. However, PROVIDES_C and USES_C refer to component methods and may be linked to actor ports in INPORTS_R and OUTPORTS_R, respectively.

The galsC syntax for an actor is similar to that of a galsC (or nesC) configuration component: an actor implementation contains a list of components and links. The only difference is that the relations in LINKS_R may also include actor ports and parameters; LINKS_R of an actor R is otherwise similar to LINKS_C of a component C. A link can join a component interface method to one of four types of endpoints: (1) another component interface method, (2) a port, (3) a parameter, or (4) some combination of these.³ An actor may also contain an actorControl section, which exports the StdControl interface of any of its components to the application level for system initialization (e.g., for initializing hardware components).

Figure 3.3 shows the source code for TimerActor, which contains the TimerC component, whose source code was shown in Figure 3.2. TimerActor has an output port named trigger, which is linked to a component interface method, Trigger.trigger. A different component interface method, Counter.IntOutput.output, writes to the count parameter. TimerActor exports the StdControl interfaces of Counter and Trigger for system initialization. Figure 3.3 also shows the source code for

²Refer to the information on TinyGUYS in Section 3.1.2.
³Sections 3.1.2 and 3.1.3 describe links in more detail, including which configurations of components within an actor are valid.

SenseActor. Its output port output is connected to the concatenation of the component interface method SenseToInt.IntOutput.output and the value read from the count parameter. The semantics of the execution of components within an actor are discussed in more detail in Section 3.1.2.

Using the tuple notation given in Equation 3.2, TimerActor can be defined as⁴

    R = (INPORTS_R = ∅,
         OUTPORTS_R = {trigger},
         PARAMETERS_R = {count},
         LINKS_R = {(Counter.IntOutput.output, count), (Trigger.trigger, trigger),
                    (Counter.Timer, TimerC.Timer[0]), (Trigger.Timer, TimerC.Timer[1]),
                    (Trigger.TimerControl, TimerC.StdControl)},
         COMPONENTS_R = {Counter, Trigger, TimerC},
         INIT_R = [Counter.StdControl, Trigger.StdControl]).

TinyGALS Application

At the top level of a TinyGALS program, actors are connected to form a complete application. A TinyGALS application A is a 5-tuple:

    A = (GLOBALS_A, VARMAPS_A, ACTORS_A, CONNECTIONS_A, START_A),    (3.3)

where GLOBALS_A is the set of global variables; VARMAPS_A is a set of mappings, each of which maps a global variable in GLOBALS_A to a parameter (PARAMETERS_R in Equation 3.2) of an actor in ACTORS_A⁵; ACTORS_A is the list of actors that form A; CONNECTIONS_A is the set of the relations between actor input and output ports; and START_A is the list of input ports of actors in the application that receive initial tokens.

⁴The unique() function in nesC is a constant function that evaluates to a constant at compile time. If the program contains n calls to unique() with the same identifier string (in this example, "Timer"), each call returns a different unsigned integer in the range {0, ..., n − 1}.
⁵Refer to the information on TinyGUYS in Section 3.1.2.

    actor TimerActor {
      port {
        out trigger;
      }
      parameter {
        uint16_t count;
      }
    }
    implementation {
      components Counter, Trigger, TimerC;
      Counter.IntOutput.output -> count;
      Trigger.trigger -> trigger;
      Counter.Timer -> TimerC.Timer[unique("Timer")];
      Trigger.Timer -> TimerC.Timer[unique("Timer")];
      Trigger.TimerControl -> TimerC.StdControl;
      actorControl {
        Counter.StdControl;
        Trigger.StdControl;
      }
    }

    actor SenseActor {
      port {
        in trigger;
        out output;
      }
      parameter {
        uint16_t count;
      }
    }
    implementation {
      components SenseToInt, Photo;
      trigger -> SenseToInt.trigger;
      (SenseToInt.IntOutput.output, count) -> output;
      SenseToInt.ADC -> Photo;
      SenseToInt.ADCControl -> Photo;
      actorControl {
        SenseToInt.StdControl;
        Photo.StdControl;
      }
    }

Figure 3.3: Source code for TimerActor and SenseActor.

CONNECTIONS_A of an application A differs from LINKS_R of an actor R in that connections between actors contain an implicit queue, whereas links inside an actor (between components) do not. Section 3.2.2 describes which configurations of actors within an application are valid; it also includes a discussion of the conditions for well-formedness of an application.

A galsC program is created by writing a galsC application file that contains zero or more parameters (global variables) and an implementation containing a list of actors, mappings, and connections, as well as an application start section. A mapping associates application parameters (global names) with actor parameters (local names). A connection connects actor output ports with actor input ports, with an optional declaration of the port queue size (defaults to size one).

Figure 3.4 shows the source code for the SenseTag application, which contains TimerActor, SenseActor, and some downstream actors. The application contains a parameter (global variable) named count, which is initialized to zero and connected to the corresponding parameters of TimerActor and SenseActor. The output port trigger of TimerActor is connected to the corresponding input port of SenseActor, with a queue size of 64. The appstart section declares that an initial token is to be placed in the input port of SenseActor. Note that arguments (initial data) may also be passed to the port.

Using the tuple notation given in Equation 3.3, the example application can be defined as

    A = (GLOBALS_A = {count},
         VARMAPS_A = {(count, TimerActor.count), (count, SenseActor.count)},
         ACTORS_A = [TimerActor, SenseActor, ...],
         CONNECTIONS_A = {(TimerActor.trigger, SenseActor.trigger), (SenseActor.output, ...)},
         START_A = [SenseActor.trigger()]).

3.1.2 Execution model and language semantics

This section discusses the semantics of execution within a component, between components within an actor, and between actors within an application.

    application SenseTag {
      parameter {
        uint16_t count = 0;
      }
    }
    implementation {
      actor TimerActor, SenseActor, ...;
      count = TimerActor.count;
      count = SenseActor.count;
      TimerActor.trigger =[64]=> SenseActor.trigger;
      SenseActor.output => ...;
      appstart {
        SenseActor.trigger();
      }
    }

Figure 3.4: Source code for the SenseTag application.

Assumptions

The TinyGALS architecture is intended for a platform with a single processor. A TinyGALS program runs in a single thread of execution (single stack), which may be interrupted by the hardware. There are no sources of preemption other than hardware interrupts. This section assumes that interrupt handlers are not reentrant, but that interrupts are masked while servicing them (interleaved invocations of the same interrupt handler are disabled). However, other (different) interrupts may occur while servicing an interrupt. A piece of code is reentrant if multiple simultaneous, interleaved, or nested invocations do not interfere with each other. All memory is statically allocated; there is no dynamic memory allocation. This discussion also assumes the existence of a clock, which is used to order events.

This section discusses constraints on what constitutes a valid configuration of components within an actor when using components that contain interrupt handlers in which interrupts are enabled. These constraints are necessary for avoiding unexpected reentrancy, which may lead to race conditions and other nondeterminacy issues. To simplify the discussion, this section assumes that all methods may potentially access component state; methods that do not access component state will not suffer from race conditions, but may suffer from reentrancy problems.

TinyGALS Components

There are three cases in which a component C may begin execution: (1) an interrupt arrives from the hardware that C encapsulates; (2) an event arrives on the actor input port linked to one of the interface methods of C; or (3) another component calls one of the interface methods of C. In the first case, the component is a source component; when activated by a hardware interrupt, the corresponding interrupt service routine runs. In the second case, the component is a triggered component, and the event triggers the execution of a provided method. The third case results in a called component. Once activated, a component executes to completion; that is, it runs until the interrupt service routine or method finishes.

In Figure 3.2, mState, mScale, and mInterval are internal variables of component TimerM. When the StdControl.init() method of TimerM is called, the component calls the Clock.setRate() method with the values of mInterval and mScale as its arguments. The call keyword indicates that the Clock.setRate() method is called synchronously (explained further in the next section). The component only needs to know the type signature of Clock.setRate(); it does not matter to which component the method is linked.

While a method runs, an interrupt may arrive, leading to possible race conditions if the interrupt modifies internal variables (internal state) of the same component. Reentrancy problems may arise if a component is both a source component and a triggered component. Therefore, to improve the analyzability of the system and eliminate the need to make components reentrant, source components must not also be triggered components, and vice versa. The same argument also applies to source components and called components. It is also necessary that source components only have outputs (required methods) and no inputs (provided methods); that is, source components do not connect to any actor input ports. Additional rules for linking components together are detailed in the next section.
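To make the hazard concrete, the following C sketch illustrates a component that violates this rule by acting as both a source component and a triggered component. The function and variable names are invented for illustration and do not come from the galsC toolsuite.

    #include <stdint.h>

    /* A component's internal state, shared (incorrectly) between an
     * interrupt handler and a scheduled method. */
    static uint16_t mCount;

    /* Source side: runs in interrupt context. */
    void hardware_interrupt_handler(void) {
        mCount++;                 /* may preempt process_trigger() below */
    }

    /* Triggered side: runs in scheduled context when an event arrives. */
    void process_trigger(uint16_t token) {
        uint16_t old = mCount;    /* read the internal state...          */
        /* An interrupt arriving here increments mCount, and that
         * increment is lost by the write below.                         */
        mCount = (uint16_t)(old + token);
    }

Because TinyGALS forbids a component from being both a source and a triggered component, this read-modify-write race cannot occur in a well-formed actor.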

TinyGALS Actors

The flow of control between components within a TinyGALS actor occurs on links. A link is a relation within an actor between its port(s), parameter(s), and component method(s). Links represent synchronous communication via method calls. An event on a linked actor input port may trigger the execution of a component method. Section 3.1.3 discusses the exact specifics of what types of links are valid. When a component calls a required method with the call keyword, the flow of control in the actor is immediately transferred to the callee component or port. The external method can return a value through the call just as in a normal method call.⁶

The graph of the components and the links between them is an abstraction of the call graph of the methods within an actor, where the methods associated with a single component are grouped together. TinyGALS places some restrictions on what configurations of components within an actor are allowed. Cycles within actors (between components) are not allowed.⁷ Therefore, any valid configuration of components within an actor can be modeled as a directed acyclic graph (DAG). One can relax the restriction on cycles between components and only disallow cycles in method call chains between components by first separating the methods within a component into separate source and triggered components.

A source DAG is formed by starting with a source component and following all forward links between it and other components in the actor. A triggered DAG is similar to a source DAG but starts with a triggered component instead. As discussed in the previous section, preemption of the normal thread of execution by an interrupt may lead to reentrancy problems. Race conditions and reentrancy problems may occur if source DAGs and triggered components are connected within an actor; otherwise, reentrant components are required. In Figure 3.5(c), the source DAG (C1, C3) is connected to the triggered DAG (C2, C3); race conditions and reentrancy problems may occur if C3 is running in a scheduled context and an interrupt causes C1 to preempt C3. If all interrupts are masked during interrupt handling (interrupts are disabled), then additional restrictions on source DAGs are unneeded. However, if interrupts are not masked (interrupts are enabled), then a source DAG must not be connected to any other source DAG within the same actor. Triggered DAGs can be connected to other triggered DAGs, since with a single thread of execution, it is not possible for a triggered component to preempt a component in any other triggered DAG.

The execution of actors is controlled by the scheduler in the TinyGALS runtime system. There are two cases in which an actor R may begin execution: (1) a triggered component is activated, or (2) a source component is activated. In the first case, the scheduler activates the component method linked to an input port of R in response to an event sent to R by another actor, as in Figure 3.5(b). In the second case, R contains a source component which has received a hardware interrupt, as in Figure 3.5(a). Notice that in this case, R may interrupt the execution of another actor. An actor is considered to have finished executing when the components inside of it have finished executing and control has returned to the scheduler.

⁶In TinyOS, the return value indicates whether the command completed successfully or not.
⁷Recursion within components is allowed. However, the recursion must be bounded for the system to be live.

Recall that once triggered, the components in a triggered DAG execute to completion.

[Figure 3.5: Directed acyclic graphs (DAGs) within actors. (a) A source DAG is activated by a hardware interrupt. (b) A triggered DAG is activated by the arrival of an event at the actor input port. (c) When a source DAG is connected to a triggered DAG, race conditions and reentrancy problems may occur.]

TinyGALS places restrictions on what connections are allowed between component methods and actor ports. Let us first assume that both actor input ports and actor output ports are totally ordered (using the order of the ports declared in the port section of the actor definition file), but that components are not ordered. Then actor input ports may be associated either with one provided method of a single component C or with one or more actor output ports. Likewise, required component methods may be associated either with one provided method of a single component C or with one or more actor output ports.⁸ Provided component methods may be associated with any number or combination of required component methods and actor input ports. Likewise, actor output ports may be associated with any number or combination of required component methods and actor input ports, but they may not be associated with other actor output ports. However, if we assume that neither actor input ports nor actor output ports are ordered, then actor input ports and required component methods may only be associated with either a single method or with a single output port, since some configurations may lead to nondeterministic component firing order. As discussed earlier, the configuration of components inside an actor must not contain cycles and must follow the rules above regarding source and triggered DAGs.

In Figure 3.3, the implementation section of TimerActor declares that whenever component Trigger calls trigger(), an event is produced at the trigger output port. Likewise, the implementation section of the SenseActor definition declares that whenever the trigger input port is triggered (explained in the next section), the trigger() method of component SenseToInt is called.

⁸In the existing TinyOS constructs, one caller (a required component method) can have multiple callees. The interpretation is that when the caller calls, all the callees are called in a possibly non-deterministic order. A combination of the callees' return values is returned to the caller. Although multiple callees are not part of the TinyGALS semantics, this feature is supported by the galsC software tools for TinyOS compatibility.

TinyGALS Application

Each input port of an actor has a FIFO (first-in, first-out) queue. The queue separates the flow of control between actors; communication between actors occurs asynchronously through these queues. When a component within an actor calls a method that is linked to an output port, the arguments of the call are converted into events called tokens. A copy of the token is placed in the event queue of each input port connected to the output port. The call to the output port returns immediately, and the component within the actor can proceed. Tokens are placed in input port queues atomically, so other source components cannot interrupt this operation. Tokens are dropped if the input port queue is full; however, the programmer is currently responsible for selecting the correct queue size. Later, the TinyGALS scheduler removes the token from the event queue of the input port and calls the method that is linked to the input port with the contents of the token as its arguments. Note that since each input port of an actor R is linked to a component method, each token that arrives on any input port of R corresponds to a future invocation of the component(s) in R. The components in the triggered DAG may in turn generate one or more events at the output port(s) of the actor. Communication between actors is also possible without the transfer of data; in this case, an empty message (token) transferred between ports acts as a trigger for activation of the receiving actor.

The execution of a TinyGALS system begins with the initialization of all methods specified in INIT_Ri for all actors Ri. The order in which actors are initialized is the same as the order in which they are listed in the application configuration file; the order in which methods are initialized for a single actor is the same as the order in which they are listed in the actor configuration file. After actor initialization, the TinyGALS runtime system places an initial token at each system start port, which are the input port(s) declared in the appstart section of the application configuration file. If initial arguments to the port were declared in the application configuration file, these are stored in the token. For example, in Figure 3.4, the application starts when the runtime system places an initial token at the input port trigger of SenseActor. The TinyGALS scheduler passes the token to the linked component method, and the components in the triggered DAG of the starting actor execute to completion.

The scheduler processes tokens in the order in which they are generated. During execution of a TinyGALS application, interrupts may occur and preempt the normal thread of execution; control eventually returns to the normal thread of execution. When the system is not responding to interrupts or events on input ports, the system does nothing (i.e., sleeps).
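The port mechanics described above can be sketched in C as follows. This is a minimal illustration, not the generated galsC runtime; the queue layout and function names are assumptions, but the sketch reflects the stated semantics: insertion is atomic, the call returns immediately, and tokens are dropped when the queue is full.

    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_CAPACITY 64

    typedef struct {
        uint16_t tokens[QUEUE_CAPACITY];  /* token contents (arguments) */
        uint8_t  head;                    /* index of the oldest token  */
        uint8_t  count;                   /* number of queued tokens    */
    } PortQueue;

    /* Hypothetical primitives that mask/unmask hardware interrupts. */
    extern void interrupts_disable(void);
    extern void interrupts_enable(void);

    /* Called when a component writes to an output port: copy the token
     * into one connected input port queue.  Returns immediately. */
    bool port_put(PortQueue *q, uint16_t token) {
        bool accepted = false;
        interrupts_disable();             /* insertion must be atomic   */
        if (q->count < QUEUE_CAPACITY) {
            q->tokens[(q->head + q->count) % QUEUE_CAPACITY] = token;
            q->count++;                   /* the scheduler would also be
                                             notified via the global
                                             event queue here           */
            accepted = true;
        }                                 /* else: the token is dropped */
        interrupts_enable();
        return accepted;
    }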

The runtime system maintains a global event queue which keeps track of the tokens in all actor input port queues in the system. Currently, the runtime system activates the actors corresponding to the tokens in the global event queue using FIFO scheduling. More sophisticated scheduling algorithms, such as ones that take care of timing and energy concerns, can be substituted. The current galsC implementation processes the tokens in the order that they are generated, as defined by the hardware clock. Tokens that are produced at the same "time" are processed with respect to the global input port ordering; that is, tokens generated at the same logical time are ordered according to the global ordering of actor input ports. Input ports are first ordered by actor order, as the actors appear in the application configuration file, then in the order in which they are declared in the actor configuration file. The TinyGALS semantics do not define exactly when an input port is triggered; Section 3.2.2 discusses the ramifications of token generation order on the determinacy of the system. See Section 3.2.2 for a discussion of interrupts and their effect on the order of events in the global event queue.

The previous section discussed limitations on the configuration of links between components within an actor. Connections between actors are much less restrictive. Actor output ports may be connected to one or more actor input ports, and actor input ports may be connected to one or more actor output ports. Cycles are allowed between actors; this does not lead to reentrancy problems because the queue on an actor input port acts as a delay in the loop.

[Figure 3.6: A single-output, multiple-input connection.]

A single-output, multiple-input connection acts as a fork. For example, in Figure 3.6, every token produced by A_out is duplicated and triggers both B_in and C_in. A multiple-output, single-input connection has merge semantics, such that tokens from multiple sources are merged into a single stream in the order that the tokens are produced. This type of merge does not introduce any additional sources of nondeterminacy.
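Continuing the PortQueue sketch above, the FIFO scheduling policy can be pictured as the following C event loop (again illustrative; the entry layout and helper names are assumptions). Each global event queue entry records which input port holds a token and which component method is linked to that port:

    /* Hypothetical global event queue entry. */
    typedef struct {
        PortQueue *port;                     /* queue holding the token */
        void (*linked_method)(uint16_t);     /* method linked to port   */
    } Event;

    extern bool     event_queue_empty(void);
    extern Event    event_queue_dequeue(void);  /* atomic dequeue       */
    extern uint16_t port_pop(PortQueue *q);     /* atomic token removal */
    extern void     sleep_until_interrupt(void);

    void scheduler_run(void) {
        for (;;) {
            if (event_queue_empty()) {
                sleep_until_interrupt();     /* idle node sleeps        */
                continue;
            }
            Event e = event_queue_dequeue(); /* FIFO: oldest event first */
            uint16_t token = port_pop(e.port);
            e.linked_method(token);          /* triggered DAG runs to
                                                completion in the
                                                scheduled context        */
        }
    }

Note that interrupts remain enabled while the linked method runs; this is exactly the preemption analyzed in Section 3.2.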

TinyGUYS

The TinyGALS programming model has the advantages that actors are decoupled through message passing and are easy to develop independently. However, each message passed triggers the scheduler and activates a receiving actor, which may quickly become inefficient if there is global state that must be updated frequently. The TinyGUYS (Guarded Yet Synchronous) mechanism provides a way for actors to share global data safely. It is implemented as the parameter feature in the galsC programming language.

One must be very careful when implementing global data spaces in concurrent programs. Several actors may access the same global variables at the same time. It is possible that while an actor is reading the variables, an interrupt occurs and preempts the read. The interrupt service routine may modify the global variables; when the actor resumes reading the remaining variables after handling the interrupt, it may see an inconsistent state.

In the TinyGUYS mechanism, global variables (parameters) are guarded. Actors may read a parameter synchronously (i.e., without delay). However, writes to the parameter are asynchronous in the sense that all writes are delayed. A write to a TinyGUYS global variable is actually a write to a copy of the global variable; one can think of this as a write buffer of size one. Parameters are updated atomically by the scheduler only when it is safe (i.e., after an actor finishes and before the scheduler triggers the next actor). Because there is only one buffer per global variable, the last actor to write to the variable "wins," that is, the last value written will be the new value of the global variable. One can think of this as a way of formalizing race conditions. Section 3.2.2 discusses how to eliminate race conditions.

TinyGUYS have global names that are mapped to the local parameter names of each actor. This design does not require parameter names to appear inside the component name space; one can develop components in their own scope, independent of the connected parameters. A component interface method or an actor port can read a parameter when the method or port is invoked, by passing the parameter value as one of the arguments. A component interface method or an actor port can write to a parameter by calling a connected function with a single argument. In TimerActor in Figure 3.3, the Counter.IntOutput.output method has a single argument, which is written to the count parameter whenever the method is called. In SenseActor in Figure 3.3, the count parameter is passed as the last argument to the output port.
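The guarded semantics can be sketched in C as a double-buffered global. The names below are illustrative assumptions rather than the code the galsC compiler actually generates, but the sketch captures the stated behavior: reads are direct, writes go to a buffer of size one, and the scheduler commits the buffer only between actor iterations.

    #include <stdint.h>
    #include <stdbool.h>

    /* Guarded global "count": a stable copy plus a write buffer. */
    static uint16_t count_value;    /* what readers see; stable for an
                                       entire actor iteration          */
    static uint16_t count_buffer;   /* pending write; last writer wins */
    static bool     count_dirty;

    uint16_t param_get_count(void) {        /* synchronous read        */
        return count_value;
    }

    void param_put_count(uint16_t v) {      /* delayed write           */
        count_buffer = v;
        count_dirty  = true;
    }

    /* Invoked by the scheduler after an actor finishes and before the
     * next actor is triggered, so the update cannot tear a read. */
    void params_commit(void) {
        if (count_dirty) {
            count_value = count_buffer;
            count_dirty = false;
        }
    }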

3.1.3 Link model within actors

A link x → y inside an actor consists of a source x and a target y. The expressions below use regular-expression notation to describe the possible entities of x and y, where l is the local name of a parameter, p is an actor port name, and f is a component interface function (method):

    source = (l)* (p | f) (l)*    (3.4)
    target = l | p | f            (3.5)

A source port must be an input port and a target port must be an output port, and a source function must be a required method and a target function must be a provided method.⁹

A trigger is a port or function that appears as the source of a link. A port is triggered when the scheduler invokes it with the first token in its queue. A function is triggered when it is called by another function. A link x → y is valid if the number of arguments and the types of the arguments of the source match those of the target when the arguments on each side of the arrow are concatenated separately, similar to the notion of record types [100]. The return type of a trigger must also match that of the target. For example, suppose f1 is a required method with exactly two arguments. The link (f1, l1) → p1 is valid if p1 is an output port that has exactly three arguments whose types match those of the left hand side (i.e., the types of the first two arguments of p1 must match those of f1, and the type of the last argument of p1 must match that of l1) and if the return type of f1 matches that of p1.

Using the regular expression model, the following enumerates the valid types of links, where l in (t, l) is an abbreviation for any number of parameters appearing before or after the trigger t:

• Without parameters
  – p1 → p2 [When the input port p1 is triggered, transfer the token directly from p1 to the output port p2.]
  – p1 → f1 [When the input port p1 is triggered, trigger a function f1.]
  – f1 → p1 [When the function f1 is triggered, create a token from the arguments of the function f1 and send it to the output port p1.]
  – f1 → f2 [When the function f1 is triggered, trigger another function f2.]

⁹This model also applies to connections at the application level. However, the port directions must be reversed: a source port must be an output port and a target port must be an input port. Also, global parameter names should be used instead of local parameter names. Note that functions do not appear at the application level.

• With parameters

  – Parameter GET
    ∗ (p1, l) → p2 [When the input port p1 is triggered, concatenate the arguments of p1 with the current value of the parameter(s) l, and send the resulting token directly to the output port p2.]
    ∗ (p1, l) → f1 [When the input port p1 is triggered, concatenate the arguments of p1 with the current value of the parameter(s) l, and trigger a function f1 with the corresponding arguments.]
    ∗ (f1, l) → p1 [When the function f1 is triggered, concatenate the arguments of f1 with the current value of the parameter(s) l, and send the resulting token to the output port p1.]
    ∗ (f1, l) → f2 [When the function f1 is triggered, concatenate the arguments of f1 with the current value of the parameter(s) l, and trigger another function f2 with the corresponding arguments.]

  – Parameter PUT
    ∗ p → l [When the input port p is triggered, write its argument to a parameter l.]
    ∗ f → l [When the function f is triggered, write its argument to a parameter l.]

  – Parameter GET/PUT
    ∗ (p, l1) → l2 [When the input port p is triggered, read the current value of the source parameter l1 and write it to the target parameter l2.]
    ∗ (f, l1) → l2 [When the function f is triggered, read the current value of the source parameter l1 and write it to the target parameter l2.]

Table 3.1 summarizes the valid types of links in TinyGALS/galsC.

    Table 3.1: Summary of valid types of links in TinyGALS/galsC.

      No parameters:      p1 → p2,  p1 → f1,  f1 → p1,  f1 → f2
      Parameter GET:      (p1, l) → p2,  (p1, l) → f1,  (f1, l) → p1,  (f1, l) → f2
      Parameter PUT:      p → l,  f → l
      Parameter GET/PUT:  (p, l1) → l2,  (f, l1) → l2

For links with no parameters, the trigger either (a) triggers the connected function or (b) passes a token to the connected output port. In a parameter GET (read) link, the parameter value(s) are appended to the trigger's argument list and passed to the connected function or port.
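As an illustration of a parameter GET link such as (f1, count) → output, the compiler could conceptually generate a wrapper like the following C sketch. The token layout and names are hypothetical; the point is that the current parameter value is appended to the trigger's arguments at the moment the trigger fires:

    #include <stdint.h>

    typedef struct {
        uint16_t arg0;     /* f1's own argument        */
        uint16_t count;    /* appended parameter value */
    } OutputToken;

    extern uint16_t param_get_count(void);          /* guarded read  */
    extern void output_port_put(OutputToken token); /* enqueue token */

    /* Sketch of a generated body for the trigger f1 in the link
     * (f1, count) -> output. */
    void f1(uint16_t arg0) {
        OutputToken token;
        token.arg0  = arg0;
        token.count = param_get_count();  /* GET happens at trigger time */
        output_port_put(token);           /* returns immediately         */
    }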

In a parameter PUT (write) link, the trigger writes its argument to the parameter. In a parameter GET/PUT (read/write) link, the trigger causes the source parameter to be read and its value stored in the target parameter. Note that for the number of arguments to match, the trigger in a parameter PUT link must have only one argument, and the trigger in a parameter GET/PUT link must have no arguments.

What are the semantics of multiple links (i.e., fanout from a function)? For example, what is the order of computation if one has f1 → l1 and f1 → f2? Or if one has f1 → l1 and f1 → p? In TinyGALS, the write to the parameter occurs first, before any additional computation or transfer of control. The buffered parameter value may then get overwritten in the later computation. This policy provides a consistent view of ordering in the system.

3.1.4 Type inference and type checking

The galsC compiler performs high-level type inferencing on the connection graph of an application. There are two parts to the type inference system: connections with ports, and connections with parameters but no ports.¹⁰

Ports

In galsC, ports are untyped. The actual types of ports are inferred from the connection graph of a galsC program. In Figure 3.7, actor A contains a component which has a call to function f with type signature τ1. The input port of actor B is the target of the concatenation of the output port of A with a parameter of type τ3. The output port of B is the target of the concatenation of the input port of B and a parameter of type τ5. The output port of B is directly connected to the input port of actor C. The input port of C is a trigger for a function with type signature τ8. The known types (τ1, τ3, τ5, τ8) are shown in bold.

[Figure 3.7: Type checking example.]

¹⁰Connections containing only functions are checked with the nesC type checker.

The hidden source aspect of these types of components may lead to TinyOS configurations with race conditions or other synchronization problems. This call returns immediately. A valid system has a unique solution to the set of equations. Although the TinyOS architecture allows components to reject concurrent requests. which means that they are actually both source and triggered components. The previous sections showed how the TinyGALS component model enables users to analyze potential sources of concurrency problems more easily by identifying source. Later. the device driver component interrupts with the ready data. it is up to the software developer to write thread-safe code. . since there are only two types of connections: (1) mappings between a global name and a local name. A higher level component can call the device driver component to ask for data. Since the types of all of these sources and targets are known. and (2) links between a function and a local name. 3. and parameters are valid. The galsC compiler derives types for all ports in the system by matching the return type and the argument types of all connected upstream and downstream functions. ports.5 Summary In TinyOS. and called components and defined what kinds of links and connections between components. the type checker merely verifies that all the types in a connection match each other. many components that are wrappers for device drivers are “split phase”. especially after components are wired together and may have interleaved events. triggered.1. Parameters The type system for parameter connections without ports is straightforward. This job is quite difficult.39 One can write a type equation for each connection in the system: τ 1 = τ2 τ2 × τ 3 = τ4 τ4 × τ 5 = τ6 τ6 = τ7 τ7 = τ 8 One can then solve the set of equations to determine the types of the ports. The galsC compiler detects a type error when the set of equations conflicts with itself or is unsolvable.

3.2 Concurrency and Determinacy Issues

Concurrency management is a significant concern in event-driven systems. Poorly implemented systems may suffer from deadlock (i.e., where no tasks can proceed due to blocking on a shared resource), livelock (i.e., where the system falls into an endless loop and responds to no further interrupts), and race conditions (i.e., where shared variables are accessed by multiple threads at the same time). In event-driven systems, it is possible for a scheduler to retain control and disable interrupts indefinitely, since there are critical system operations, such as enqueuing and dequeuing events, which are atomic. This section only considers concurrency issues on single-processor platforms.

3.2.1 Concurrency

There are two mechanisms for actors to communicate in TinyGALS: event queues (ports) and guarded global variables (parameters). A TinyGALS program runs in a single thread of execution (single stack), which may be interrupted by the hardware. In TinyGALS, there is no dynamic memory allocation; all memory is statically allocated.

An actor A may begin execution when: (1) the scheduler activates A in response to an event at its input port, or (2) an interrupt service component within A is triggered by an external interrupt. The execution activated by the scheduler is called the scheduled context, and the execution triggered by interrupts is called the interrupt context. Since all scheduled executions of actors are in the scheduled context and controlled sequentially by the scheduler, the only possibility for cross-actor concurrent execution is when one actor is in the scheduled context, and one or more other actors are in an interrupt context.

Theorem 1. Deadlock is not possible across actors.

Blocking on shared resources (e.g., a blocking read) is not part of the semantics across actors.

Interestingly, without a careful implementation of the scheduler, there is a risk of livelock. In Figure 3.8, the Loop actor is first triggered by an internal interrupt. There is a direct link between the input port and the output port inside the actor, which produces an event (token) at the output port. The event loops back to the input port, where it is inserted into the event queue. Can this self-loop prevent further interrupts from entering the system? Once the event is enqueued, the scheduler first dequeues the event with interrupts disabled, then calls the function connected to the inside of the input port (in this case the put() function of the output port). Within the put() function, the code that inserts the event back into the event queue

is also atomic. However, in the galsC scheduler, interrupts are enabled between dequeuing the event and enqueuing the event, so future interrupts will not be blocked.

Theorem 2. Livelock is not possible across actors.

[Figure 3.8: A self-loop actor triggered by an interrupt.]

Since there are shared data between actors, race conditions are another major concurrency concern. Two actors may try to write to a shared variable at the same time; an actor may also be in the midst of writing the data when another actor tries to read the data. There are two forms of shared data across actors: tokens and parameters. Tokens are stored in event queues, and access to them is atomic and controlled by the scheduler. Parameters are always guarded, and their value updates are again controlled by the scheduler (where the last value written wins). These issues were discussed in Section 3.1.2.

Theorem 3. Race conditions are not possible across actors.

As a result of these claims, concurrency errors will not happen at the application level across actors. Thus, programmers can focus on concurrency issues within each actor, which is a problem with a much smaller scope.

3.2.2 Determinacy

Notice that the lack of concurrency errors does not mean TinyGALS programs are deterministic. The system state of a TinyGALS program consists of (1) the internal state of all components, (2) the contents of the global event queue¹¹, and (3) the values of all global parameters. The question of determinacy is: given a unique initial state of a TinyGALS program and a set of known interrupts (in terms of both interrupt time and value), will the program have a unique state trajectory independent of the execution/CPU speed? Note that single-thread sequential programs, where all inputs are read into the system, are determinate.

¹¹The global event queue is defined as the ordered sequence of tokens in the event queues of all actor ports.

Concurrent models, such as Kahn process networks, can also be determinate [52], although this sacrifices real-time properties. However, for event-driven systems, determinacy may be sacrificed for reactiveness. This section analyzes the determinacy property of TinyGALS programs, beginning with definitions for a TinyGALS system, system state (including quiescent system state and active system state), actor iteration (in response to an interrupt and in response to an event), and system execution. This section also reviews the conditions for well-formedness of a TinyGALS system.

Definition 1 (System). A system consists of an application and a global event queue.

Recall from Equation 3.3 that an application is defined as A = (GLOBALS_A, VARMAPS_A, ACTORS_A, CONNECTIONS_A, START_A). Recall also that the input port associated with a connection between actors has a FIFO queue for ordering and storing events destined for the input port. Whenever a token is stored in an input port queue, a representation of this event is also inserted into the global event queue. The global event queue provides an ordering for tokens in all input port queues: events that are produced earlier in time with respect to the system clock appear in the global event queue before events that are produced later in time. Events that are produced at the same time (e.g., as in Figure 3.9) are ordered first by order of appearance in the application actors list (ACTORS_A), then by order of appearance in the actors input ports list (INPORTS_R, which is an ordered list created from the actor's input ports set INPORTS_R).

[Figure 3.9: Two events are produced at the same time.]

Definition 2 (System state). The system state consists of four main items:

1. The values of all internal variables of all components (V_Ci).
2. The contents of all of the queues associated with actor input ports in the application.
3. The contents of the global event queue.

4. The values of all TinyGUYS global variables (GLOBALS_A).

Recall that the global event queue contains the events in the system, but the actor input ports contain the data associated with each event, encapsulated as a token. The system state is either quiescent or active:

Definition 2.1 (Quiescent system state). A system state is quiescent if there are no events in the global event queue, and hence, no events in any of the actor input port queues in the system.

Definition 2.2 (Active system state). A system state is active if there is at least one event in the global event queue, and hence, at least one event in the queue of at least one actor input port.

Note that a TinyGALS system starts in an active system state, since execution begins by triggering an actor input port.

Definition 3 (Component execution). Component execution is the execution of the code in the body of the interrupt service routine or method through which the component has been activated.

A triggered or called component C is activated when one of its provided methods is called. A source component is activated when the hardware it encapsulates receives an interrupt. Note that the code executed upon component activation may call other methods in the same component or in a linked component. Component execution therefore also includes execution of all external code until control returns and execution of the code body has completed.

Definition 4 (Actor iteration). An iteration of an actor R is the execution of a subset of the components inside of R in response to either an interrupt or an event at an input port.

Execution of the system can be partitioned into actor iterations based on component execution. The following defines these two types of actor iterations in more detail, including what is meant by "subset of components."

Definition 4.1 (Actor iteration in response to an interrupt). Suppose actor R is iterated in response to interrupt I. Let C be the component that contains the interrupt handler of I. Recall from Section 3.1.2 that C therefore must be a source component. Create a source DAG D by starting with C and following all forward links between C and other components in R. Iteration of the actor consists of the execution of the components in D, beginning with C.

Note that iteration of the actor may cause it to produce one or more events on its output port(s).

Definition 4.2 (Actor iteration in response to an event). Suppose actor R is iterated in response to an event E stored at the head of one of its input port queues, Q. Let C be the component linked to the input port of Q. Recall from Section 3.1.2 that C therefore must be a triggered component. Create a triggered DAG D by starting with C and following all forward links between C and other components in R. Iteration of the actor consists of the execution of the components in D, beginning with C.

As with the interrupt case, iteration of the actor may cause it to produce one or more events on its output port(s).

Definition 5 (System execution). Given a system state and zero or more interrupts, system execution is the iteration of actors until the system reaches a quiescent state.

The order in which actors are executed is the same as the order of events in the global event queue. The following discusses how the actor iteration order is chosen.

Conditions for well-formedness

Below is a summary of the conditions that the components within a single TinyGALS actor must satisfy to be well-formed and avoid concurrency problems, as discussed in Sections 3.1.2 and 3.2.1. These conditions assume that an interrupt whose handler is running is masked, but that other interrupts are not masked.

• Cycles among components within an actor are not allowed, but loops around actors are allowed.
• Source components may neither also be triggered components nor called components.
• Component source DAGs must not be connected to other source DAGs.
• Component source DAGs and triggered DAGs must be disconnected, but triggered DAGs may be connected to other triggered DAGs.
• Input ports may be associated with a single method of a single component, or with one or more output ports.
• Outgoing component methods may be associated with a single method of another component, or with one or more output ports.

Determinacy

Given the definitions in the previous section, one can analyze the determinacy of a TinyGALS system. In the intuitive notion of determinacy, a system is determinate if, given an initial quiescent system state and a set of interrupts that occur at known times, the system always produces the same outputs and ends up in the same state after responding to the interrupts.

Recall that a TinyGALS system starts in an active system state. The application start port is an actor input port, which is in turn linked to a component C inside the actor. The component C is a triggered component, which is part of a DAG. Components in this triggered DAG execute and may generate events at the output port(s) of the actor. System execution proceeds until the system reaches a quiescent state. From this quiescent state, this section first discusses determinism of a TinyGALS system in the case of a single interrupt occurring in a quiescent state. It then discusses determinism for one or more interrupts during actor iteration in the cases (1) where there are no global variables and (2) where there are global variables.

Figure 3.10 depicts iteration of a TinyGALS system between two quiescent states due to activation by an interrupt I.

[Figure 3.10: A single interrupt. An actor iteration takes the system from quiescent state q0, through active states a0,0, a0,1, ..., a0,n−1 produced by actor iterations r0, r1, ..., rn, to quiescent state q1.]

Theorem 4 (Determinacy). For each quiescent state and a single interrupt, a TinyGALS system is determinate; that is, there is only one system execution path, since the system execution path is the order in which the actors are iterated, and in each of the steps r0, r1, ..., rn between quiescent states, the actor selected is determined by the order of events in the global event queue.

What if one or more interrupts occur during an actor iteration, as is usually true in an event-driven system?

This section assumes that the handlers for interrupts I1, I2, ..., In execute quickly enough such that they are not interleaved (e.g., I2 does not interrupt the handling of I1).

Determinacy of a system without global variables. This section first examines the case where there are no TinyGUYS global variables. Consider an actor R that contains a component C which produces events on the output ports of R, and suppose the iteration of actor R is interrupted one or more times. Depending on the relative timing between the interrupts and the production of events by C at the output ports of R, the interrupt(s) may cause insertion of events into other actor input port queues, and hence insertions into the global event queue. Thus, the order of events in the global event queue may not be consistent between multiple runs of the system if the same interrupts occur during the same actor iteration, but at slightly different times. This is a source of non-determinacy.

A partial solution for reducing non-determinacy in the system is to delay producing outputs from the actor being iterated until the end of its iteration. This approach is taken by models of computation such as timed multitasking [69] and Giotto [44]. Since source DAGs must not be connected to triggered DAGs, the interrupt(s) cannot cause the production of events on output ports of R that would be used in the case of a normal uninterrupted iteration. Then, if one knows the order of interrupts, one can predict the state of the system after a single actor iteration even if it is interrupted one or more times. This is illustrated in Figure 3.11, which shows a system execution in which a single actor iteration is interrupted by multiple interrupts.

[Figure 3.11: One or more interrupts where actors have delayed output. Starting from quiescent state q0, interrupts I1, I2, ..., In arrive during the iteration, producing active states a^x_{0,0}, a^x_{0,1}, ..., a^x_{0,n} before the system reaches quiescent state q1.]

In this notation, a^i_{j,k} refers to an active system state after an interrupt Ii, starting from quiescent state qj and after actor iteration rk; the superscript x in a^x_{j,k} is a shorthand for the sequence of interrupts I0, I1, I2, ..., In. In order to determine the value of an active system state a^x_{j,k}, one can "add" the combined system states. Suppose active state a^1_{0,0} would be the next state after an iteration of the actor corresponding to interrupt I1 from quiescent state q0, and that active state a^2_{0,0} would be the next state after an iteration of the actor corresponding to interrupt I2 from q0. Then the system state

where the value of this 0. is to preschedule actor iterations. a sequence of actor iterations is scheduled and executed. From a performance perspective. + an 0.0 a1 + a2 0.0 0..0 serted (or “appended”) into the corresponding actor input port queues in active system state a1 . 0. If the interrupts are interleaved. That is.0 0. I1 I0 I2 In .0 0.0 0. but after the completion of the interrupt handlers for interrupts I1 and I2 .0 a1 + a2 + . during which interrupts are masked. In . one must add the system state (append actor input port queue contents) in the order in which the interrupt handlers finish..0 Figure 3. if an interrupt occurs.13. it is also necessary that interrupt handling be fast enough that the handling of the first interrupt I0 completes in a reasonable length of time.0 One can extend this to any finite number of interrupts. + an + a0 0. Another solution.0 Figure 3.. would be a1 + a2 ..12: Active system state after one interrupt. which leads to greater predictability in the system. system execution is deterministic for a fixed sequence of interrupts.0 0.0 expression is the system state in which the new events produced in active system state a2 are in0. as shown in Figure 3. It is necessary that the number of interrupts be finite for liveness of the system. .0 0.0 0.13: Active system state determined by adding the active system state after one noninterleaved interrupt. before the completion of the iteration of actor R in response to interrupt I0 . q0 a1 0.. One can also queue interrupts in order to eliminate preemption..47 Ii q0 ai 0. both of these approaches reduces the reactiveness of the system.0 a1 + a2 + . Then. However.

Determinacy of a system with global variables. This section now discusses system determinacy in the case where there are TinyGUYS global variables. Suppose that actor R writes to a global variable. Also suppose that the iteration of actor R is interrupted, and a component in the interrupting source DAG writes to the same global variable. Then, without timing information, one cannot predict the final value of the global variable at the end of the iteration. In general, the state of the system after the iteration of actor R is interrupted by one or more interrupts is highly dependent on the time at which the components in R write to the global variable(s). (Note that when read, a global variable always contains the same value throughout an entire actor iteration.) There are several possible alternatives for eliminating this source of non-determinacy; a sketch of Solution 3 follows the list.

Solution 1 Allow only one writer for each TinyGUYS global variable. That is, if a component in a source DAG writes to a TinyGUYS global variable, then no component in any triggered DAG can be a writer. Likewise, if a component in a triggered DAG writes to a TinyGUYS global variable, then no component in any source DAG can be a writer (but components in other triggered DAGs are allowed, since they cannot execute at the same time).

Solution 2 Allow multiple writers, but only if they can never write at the same time. Components in other source DAGs are only allowed to write if all interrupts are masked.

Solution 3 Delay writes to a TinyGUYS global variable by an iterating actor until the end of the iteration.

Solution 4 Prioritize writes such that once a high priority writer has written to the TinyGUYS global variables, lower priority writes are lost.
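The following minimal C sketch illustrates the buffered-write discipline of Solution 3, which is also the flavor of write provided by the TinyGUYS mechanism; all names and types here are illustrative, not the actual generated galsC code:

    #include <stdint.h>
    #include <stdbool.h>

    static uint16_t param_value;             /* value seen by all readers      */
    static uint16_t param_buffer;            /* pending write, not yet visible */
    static volatile bool param_dirty = false;

    /* Writers (including interrupt handlers) touch only the buffer. */
    void param_put(uint16_t v) {
        param_buffer = v;
        param_dirty = true;
    }

    /* Readers always see the value committed at the last quiescent point,
     * so a global variable holds one value for an entire actor iteration. */
    uint16_t param_get(void) {
        return param_value;
    }

    /* Called by the scheduler between actor iterations (a quiescent state);
     * interrupts are assumed to be briefly masked while committing. */
    void param_commit(void) {
        if (param_dirty) {
            param_value = param_buffer;
            param_dirty = false;
        }
    }

Under this discipline the last write before the commit point wins, so the value read during the next iteration is well defined even if several writes occurred.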

3.2.3 Summary

A TinyGALS program is determinate in a restricted case: interrupts occur only at quiescent states. This may require that the processing speed be quick enough to process all triggered execution before the next interrupt occurs. An extreme version of this case is the "synchronous" assumption in synchronous/reactive models, where the processing speed is infinitely fast, there is pure reactive execution, and it takes zero time to react to external events [39].

As currently defined, a TinyGALS program is in general non-determinate. The source of non-determinacy is the preemptive handling of interrupts. Suppose that while an actor is being iterated, it is interrupted by another actor. If both of these actors produce events at their output ports, the order of events in the global event queue may not be consistent when the system is executed at different speeds. If both of these actors write to a global variable (i.e., a parameter), then without exact timing information, one cannot predict the final value of the global variable at the end of the iteration. However, event-driven systems are usually designed to be reactive. In these cases, interrupts should be considered as high priority events which should affect the system state as soon as possible.

3.3 Code Generation

The highly structured architecture of the TinyGALS model enables automatic generation of the communication and scheduling code for galsC programs. Given the definitions for the components, actors, and application, the galsC compiler automatically generates all the code necessary for (1) component links and actor connections, (2) communication between actors, (3) TinyGUYS global variable reads and writes, and (4) system initialization and start of execution. Tables 3.2 and 3.3 show a summary of the generated functions and data structures for galsC. This section also gives an overview of the implementation of the TinyGALS scheduler and how it interacts with TinyOS, as well as data on the memory usage of TinyGALS. The discussion throughout this section uses the example system illustrated in Figure 3.14, an annotated version of the SenseTag application example shown in Figure 3.1 at the beginning of this chapter.

The galsC toolset is an extension of the nesC 1.1 toolset. The galsC compiler uses the link model described in Section 3.1.3 to check links and connections, and to infer and check types in the system graph of ports, parameters, and functions (methods). The galsC compiler also inherits the datarace detection feature of nesC; the detection feature is modified for galsC, since the decoupling of execution through ports eliminates some possible sources of race conditions, allowing software developers to avoid writing error-prone concurrency control code. The galsC compiler takes advantage of a real compiler backend and uses traditional compiler techniques, including type checking, dead code elimination, and function inlining. The output of the galsC compiler can be cross-compiled for any platform used with TinyOS, including the Berkeley motes, and the toolset can compile both nesC and galsC programs.

Figure 3.14: Code generation for the SenseTag application. [Figure: the generated structures for TimerActor and SenseActor, including the TinyGALS scheduler functions GALSC_sched_init() and GALSC_sched_start(), the event queue GALSC_eventqueue[], the parameter storage GALSC_params and GALSC_params_buffer for the uint16_t parameter count, and the generated port code SenseActor$trigger$arg0[64], SenseActor$trigger$put(), SenseActor$trigger$get(), SenseActor$trigger$head, and SenseActor$trigger$count.]

Table 3.2: Generated code for ports in galsC.

Function or variable name | Per port [12] | Function | Description
GALSC_sched_init()        |               | X        | Initialize scheduler data structures.
GALSC_sched_start()       |               | X        | Put initial tokens into input port queues.
GALSC_eventqueue[]        |               |          | Event queue for the TinyGALS scheduler.
actor$port$argi[] [13]    | X             |          | Queue for the ith argument of the input port.
actor$port$head [13]      | X             |          | Points to the beginning of the input port queue.
actor$port$count          | X             |          | Number of tokens in the input port queue.
actor$port$put()          | X             | X        | Put token into input port queue.
actor$port$get()          | X             | X        | Get token out of input port queue.

[12] "Per port" indicates that this function or variable is generated for each input port. If not indicated, there is only one instance of the function or variable for the entire galsC program.
[13] This variable is not generated if the port has no arguments (i.e., the token contains no data).

Table 3.3: Generated code for parameters (TinyGUYS) in galsC.

Function or variable name | Per parameter [14] | Function | Description
GALSC_params              |                    |          | Contains all of the parameters.
GALSC_params_buffer       |                    |          | Copy of GALSC_params.
parameter$put()           | X                  | X        | Write to parameter buffer.
parameter$get()           | X                  | X        | Read from parameter.

[14] "Per parameter" indicates that this function or variable is generated for each parameter. If not indicated, there is only one instance of the function or variable for the entire galsC program.

3.3.1 Links and connections

The compiler generates a set of aliases and mapping functions that create the links between components, as well as the connections between actors. The mapping functions for the links between components are the same as in the original nesC compiler: these are intermediate functions that call the destination function. In the example in Figure 3.14, for the links between the TimerControl interfaces of the Trigger and TimerC components, the galsC compiler generates an alias and a mapping function for each method of the interface. For the init() method of the TimerControl interface [15], the alias and destination for the link is TimerM$StdControl$init() (see Figure 3.2 for the source code of the TimerC and TimerM components). The galsC compiler generates a mapping function named Trigger$TimerControl$init(), which calls TimerM$StdControl$init().

[15] Here, TimerControl is an alias for StdControl that is explicitly declared in the declaration of the Trigger component using the as keyword in nesC.

The galsC compiler also generates similar aliases and mapping functions for connections between actors, as detailed in the next section; the sketch below shows the general shape of such a mapping function.
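Since the generated mapping functions are plain C functions, their shape can be shown directly. The following sketch is illustrative only; real generated names use the $-separated form above, which is not legal in hand-written C:

    /* Stand-ins for the generated alias and destination of the link:
     * Trigger$TimerControl$init() -> TimerM$StdControl$init().       */
    int TimerM_StdControl_init(void);      /* destination function       */

    int Trigger_TimerControl_init(void) {  /* generated mapping function */
        /* An intermediate function: it only forwards the call along
         * the link established in the application definition.        */
        return TimerM_StdControl_init();
    }

    int TimerM_StdControl_init(void) {     /* illustrative body          */
        return 0;                          /* e.g., SUCCESS in TinyOS    */
    }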

3.3.2 Communication

The compiler automatically generates a set of scheduler data structures and functions for each connection between actors. For each input port of an actor, the compiler generates a queue of width m and length n, where m is the number of arguments in the linked component method, and n is the length specified by the programmer in the application definition file. If the linked component method has no arguments (i.e., the token contains no data), then as an optimization, the compiler does not generate a queue for the port, but it still reserves space for events in the scheduler event queue. The compiler also generates a pointer and a counter for each input port to keep track of the location and number of tokens in the queue. In the example in Figure 3.14, for the definition of the trigger input port of SenseActor, the galsC compiler generates an input port queue of length 64 called SenseActor$trigger$arg0[] [16], as well as the variables SenseActor$trigger$head and SenseActor$trigger$count.

[16] TimerActor.Trigger.trigger() is a method with one argument.

The galsC compiler also generates a put() and get() function for each input port; in the example, these are SenseActor$trigger$put() and SenseActor$trigger$get() for the input port trigger of SenseActor. The put() function handles the actual copying of data to the input port queue. It modifies SenseActor$trigger$head and SenseActor$trigger$count to keep track of the queue contents. The put() function also adds the port identifier to the scheduler event queue so that the scheduler activates the actor at a later time.

For each link between a component method and an actor output port, the galsC compiler generates a mapping function. The mapping function is called whenever a method of a component wishes to write to an output port, and it in turn calls the linked input port put() function. In the example in Figure 3.14, the galsC compiler generates a mapping function TimerActor$Trigger$trigger() for the trigger method of component Trigger in TimerActor. The mapping function TimerActor$Trigger$trigger() in turn calls SenseActor$trigger$put() to insert data into the queue.

For each link between a component method and an actor input port, the galsC compiler also generates a mapping function, though here the called function is a get() function for an actor port. When the scheduler activates an actor via an input port, the system first calls this generated function to remove data from the input port queue and pass it to the component method; the mapping function calls the get() function of the linked input port. In the example, the system calls SenseActor$trigger$get() when the scheduler activates SenseActor to remove data queued in SenseActor$trigger$arg0[0]. The scheduler also modifies SenseActor$trigger$head and SenseActor$trigger$count before calling the trigger() method of the SenseToInt component with the newly removed data as the argument.

If the queue is full when attempting to insert data into the queue, one can take one of several strategies. The galsC scheduler currently takes the simple approach of dropping events that occur when the queue is full. However, an alternate method is to generate a callback function which attempts to re-queue the event at a later time. Yet another approach would be to place a higher priority on more recent events by deleting the oldest event in the queue to make room for the new event.
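To make the generated pattern concrete, here is a minimal C sketch of a put()/get() pair for a port with one uint16_t argument and a queue of length 64; names and details are illustrative, not the actual galsC output:

    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_LEN 64

    static uint16_t trigger_arg0[QUEUE_LEN]; /* per-argument queue        */
    static uint8_t  trigger_head  = 0;       /* index of oldest token     */
    static uint8_t  trigger_count = 0;       /* number of queued tokens   */

    extern void sched_enqueue(uint8_t port_id); /* hypothetical scheduler hook */

    bool trigger_put(uint16_t arg0, uint8_t port_id) {
        if (trigger_count == QUEUE_LEN) {
            return false;              /* queue full: drop the event */
        }
        uint8_t tail = (uint8_t)((trigger_head + trigger_count) % QUEUE_LEN);
        trigger_arg0[tail] = arg0;     /* copy token data into the queue */
        trigger_count++;
        sched_enqueue(port_id);        /* scheduler activates the actor later */
        return true;
    }

    bool trigger_get(uint16_t *arg0) {
        if (trigger_count == 0) {
            return false;
        }
        *arg0 = trigger_arg0[trigger_head];
        trigger_head = (uint8_t)((trigger_head + 1) % QUEUE_LEN);
        trigger_count--;
        return true;
    }

The head/count pair implements a circular buffer, matching the description above: put() appends at the tail and enqueues the port identifier for the scheduler, while get() consumes from the head.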

3.3.3 TinyGUYS

The compiler generates a pair of data structures and a pair of access functions for each TinyGUYS global variable declared in the application definition. The pair of data structures consists of a data storage location of the type specified in the actor definition that uses the global variable, along with a buffer for the storage location. For the example in Figure 3.14, the galsC compiler generates a global variable named GALSC_params.count, along with a buffer named GALSC_params_buffer.count. The pair of access functions consists of a get() function that returns the value of the global variable, and a put() function that stores a new value for the variable in the variable's buffer. The code generator thus creates functions count$put() and count$get(). A generated flag indicates whether the scheduler needs to update the variables by copying data from their buffers. The mapping functions generated for the component connections to TinyGUYS parameters call these put() and get() functions.

3.3.4 System initialization and start of execution

The code generator creates a system-level initialization function called GALSC_sched_init(), which initializes the scheduler data structures. The code generator also connects the StdControl interfaces listed in the actorControl section of each actor to the Main component used in TinyOS to initialize the system, which performs all of the runtime initialization. The order of actors listed in the application definition determines the order in which the interfaces are connected.

The code generator also creates an application start function called GALSC_sched_start(). This function places initial tokens into the input port queues specified in the appstart section of the application definition. In the source code shown in Figure 3.14, SenseActor.trigger() is listed in the appstart section of the application definition. Therefore, the GALSC_sched_start() function calls the SenseActor$trigger$put() function at the start of the system.
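Expressed in the illustrative C style of the earlier sketches (again, not the actual generated code), the start function reduces to depositing the initial tokens declared in appstart:

    #include <stdbool.h>

    /* Hypothetical stand-in for the generated put() function described above. */
    extern bool trigger_put(unsigned short arg0, unsigned char port_id);

    void GALSC_sched_start(void) {
        /* SenseActor.trigger is listed in appstart, so queue one initial
         * token; the scheduler will then activate SenseActor first.     */
        (void)trigger_put(0u, 0u /* hypothetical id for SenseActor.trigger */);
    }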

3.3.5 Scheduling

Execution of a TinyGALS system begins in the scheduler. There is a single scheduler in TinyGALS, which checks the global event queue for events. Figure 3.15 shows the TinyGALS scheduling algorithm. If the global event queue contains an event, the scheduler first copies buffered values into the actual storage for any modified TinyGUYS global variables. The scheduler then removes the token corresponding to the event from the appropriate actor input port and passes the value of the token to the component method linked to the input port. If the global event queue contains no events, the scheduler runs any posted TinyOS tasks. The algorithm loops until there are no events or TinyOS tasks, at which point the system goes to sleep.

    if there is an event in the global event queue then {
        if any TinyGUYS have been modified
            Copy buffered values into variables.
        end if
        Get token corresponding to event out of input port.
        Pass value to the method linked to the input port.
    } else if there is a TinyOS task then {
        Take task out of task queue.
        Run task.
    } end if

Figure 3.15: TinyGALS scheduling algorithm.

The TinyGALS scheduler is a two-level scheduler: TinyGALS actors run at the highest priority, and TinyOS tasks run at the lowest priority. Note that the TinyOS scheduler is included as a subset of the TinyGALS scheduler for backwards compatibility with TinyOS tasks. If TinyOS tasks are not used, the TinyGALS scheduler is about the same size as the original TinyOS scheduler.
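Restated as a minimal C loop (an illustration of Figure 3.15, not the actual implementation; all function names are placeholders):

    #include <stdbool.h>

    extern bool tinygals_event_pending(void);
    extern void tinygals_params_commit(void);   /* copy TinyGUYS buffers      */
    extern void tinygals_dispatch_event(void);  /* dequeue token, call method */
    extern bool tinyos_task_pending(void);
    extern void tinyos_run_next_task(void);
    extern void sleep_until_interrupt(void);

    void tinygals_scheduler_loop(void) {
        for (;;) {
            if (tinygals_event_pending()) {      /* actors: highest priority */
                tinygals_params_commit();
                tinygals_dispatch_event();
            } else if (tinyos_task_pending()) {  /* TinyOS tasks: lowest     */
                tinyos_run_next_task();
            } else {
                sleep_until_interrupt();         /* nothing left to do       */
            }
        }
    }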

Both triggered actors in TinyGALS and tasks in TinyOS provide a method for deferring computation. In TinyOS, tasks must be short; lengthy operations should be spread across multiple tasks. However, TinyOS tasks are not explicitly defined in the interface of the component, so it is difficult for a developer wiring off-the-shelf components together to predict what non-interrupt driven computations will run in the system. TinyGALS actors, on the other hand, allow the developer to explicitly define "tasks" at the application level. The asynchronous and synchronous parts of the system are clearly separated to provide a well-defined model of computation, which is a more natural way to write applications. The TinyGALS programming model thus removes the need for TinyOS tasks. Furthermore, since there is no communication between TinyOS tasks, the only way to share data is through the internal state of a component, and the user must write synchronization code to ensure that there are no race conditions when multiple threads of execution access this data. In contrast, the globally asynchronous nature of TinyGALS provides a way for tasks to communicate, and the developer has no need to write synchronization code when using TinyGUYS to share data between tasks, which leads to programs that are easier to debug.

3.3.6 Memory usage

TinyGALS provides an improved programming model in exchange for a minimal application-dependent increase in code size for scheduling and communication between actors. For a simple galsC photosensor application, the initialization and scheduling code is 662 bytes, compared to 564 bytes for the original nesC code. The TinyGALS communication framework is very lightweight, since event queues are generated as application-specific data structures. The get() and put() functions for a port with one argument of type uint8_t together use 208 bytes. The get() and put() functions for a parameter of type uint16_t use 30 bytes. The scheduler event queue size is equal to the sum of the user-allocated sizes for each port connection (and depends on the size of the data type). Thus, memory usage of a TinyGALS application is determined mainly by the user-specified queue sizes and the total number of ports in the system.

3.4 Example

To illustrate the effectiveness of the galsC language, consider a classical sensor network application that detects and monitors point-source targets. A set of sensor nodes (motes) are deployed in a 2-D field, as shown in Figure 3.16. The goal of the sensor network is to detect moving objects modeled as point signal sources, and to report the detection to a central base station, located at the lower left corner of the field. The application primarily consists of two tasks: (1) exchanging local sensor readings to determine the "leader" responsible for reporting a detection, and (2) multi-hop forwarding of the report messages to the base station. Note that the goal here is to illustrate the language, rather than to develop sophisticated algorithms to solve the problem optimally. To simplify the discussion, assume that the motes are deployed on a perturbed grid, and that the motes know their locations on the grid and the grid size.

Figure 3. Two types of event sources drive the execution of a mote—clock interrupts and received messages. It then calculates its own hop count from its parent’s hop count. To compensate for the unreliable and sometimes asymmetric wireless communication links.56 base station Figure 3. every node that can overhear the message notes that it is probably one hop away from the base station.16. as illustrated by the dashed line in Figure 3. for example.1. Whenever it broadcasts a message.17 shows a high-level view of the galsC implementation of the object detection application. Every half second. which indicates the level of the sender in the routing tree. Every message contains the hop count of the sender. TimerActor emits a token that triggers the SenseAndSend actor. a mote finds out its parent in the tree by eavesdropping on other messages. Similar to the example from the beginning of the chapter in Figure 3. the TimerActor handles clock interrupts and updates the latest timer count in a parameter named timeCount. The MessageReceiver actor receives messages from the radio and chooses an action based on the message type: . a trade-off between low hop count and message repeatability) as its parent node. These messages include sensor reading broadcasts and forwarded report messages. All motes run the identical code modular to their locations. For example.16: Sensor array for object detection and reporting. that no mote has the global topology of the network. the mote directly connected to the base station has hop count 0. a mote maintains a list of senders it has heard in the past T seconds and chooses the most reliable one (measured by. The reachable nodes of a wireless broadcast may have a complicated shape.

• If the message is a local broadcast, the actor updates the neighborReadings table [17]. Note that since only the latest neighbor sensor reading matters, the overriding semantics of TinyGUYS variables is a natural fit.

• If the message is a forwarding message, the actor sends the content of the message to the downstream MessageForwarder actor.

• Also, for each broadcast message, the actor updates an internal routing table by looking at the repetition frequency of the sender node; note that it requires the timeCount value to determine the rate of the messages heard. Whenever there is a change of the desired parent node, it updates the parentNode and hopCount parameters.

[17] Here, the neighbors are defined as the motes directly above, below, left, and right of this mote in the grid.

The SenseAndSend actor activates the ADC (analog-to-digital converter) to get a sensor reading. Once the sensor reading is available, the actor queues a local broadcast of the sensor reading. The actor also compares its own reading with the latest values from its neighbors, as sketched below. If this mote has the highest sensor reading (i.e., it is closest to the signal source), SenseAndSend generates a report message and queues it with the MessageForwarder actor.

The MessageForwarder actor takes the parentNode ID as part of its input token, merged with the requests from SenseAndSend and MessageReceiver. Both the LocalBroadcast actor and the MessageForwarder actor send out packets with this mote's hopCount so that other motes can use it to build the multi-hop routing tree.
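The leader check in SenseAndSend can be pictured with the following hypothetical C sketch; the accessor name and tie-breaking behavior are my assumptions, not the actual galsC code:

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_NEIGHBORS 4  /* above, below, left, right */

    /* Placeholder for reading the TinyGUYS-style neighborReadings table. */
    extern uint16_t neighborReadings_get(uint8_t i);

    bool i_am_leader(uint16_t my_reading) {
        for (uint8_t i = 0; i < NUM_NEIGHBORS; i++) {
            if (neighborReadings_get(i) >= my_reading) {
                return false;  /* a neighbor is at least as close; ties
                                * conservatively yield leadership       */
            }
        }
        return true;  /* highest reading: report to the base station */
    }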

Figure 3.17: Top-level, per-node view of the object detection application. [Figure: the TimerActor, SenseAndSend, LocalBroadcast, MessageReceiver, and MessageForwarder actors, connected through the timeCount, neighborReadings, hopCount, and parentNode parameters.]

3.5 Summary

This chapter described the TinyGALS programming model for event-driven embedded systems such as sensor networks, and the galsC programming language that implements the programming model. At the local level, software components are linked via synchronous method calls to form actors. At the global level, actors communicate with each other asynchronously via message passing, which separates the flow of control between actors. The globally asynchronous, locally synchronous model allows developers to use high-level constructs such as ports and parameters to create thread-safe, multitasking programs based on the actor model. A complementary model called TinyGUYS is a guarded yet synchronous model designed to allow thread-safe sharing of global state between actors via parameters, without explicitly passing messages.

The galsC compiler automatically generates communication and scheduling code for programs specified in the galsC language, which allows developers to avoid writing error-prone task synchronization code. The galsC compiler extends the nesC compiler, which allows galsC to have traditional type checking, dead code elimination, and function inlining. This chapter also described a type system for checking connections across synchronous and asynchronous communication boundaries, as well as checking for possible race conditions. The language and compiler are implemented for the Berkeley motes and extend TinyOS/nesC by providing a higher programming abstraction level than the TinyOS primitives. Having a well-structured concurrency model at the application level greatly reduces the risk of concurrency errors, such as deadlock and race conditions.

Chapter 4

Viptos

In The Mythical Man Month [17], Frederick P. Brooks, Jr. writes about requirements refinement and rapid prototyping:

    The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is so difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later. ... For the truth is, the clients do not know what they want. They usually do not know what questions must be answered, and they almost never have thought of the problem in the detail that must be specified. Even the simple answer—"Make the new software system work like our old manual information-processing system"—is in fact too simple. Clients never want exactly that. Complex software systems are, moreover, things that act, that move, that work. The dynamics of that action are hard to imagine. So in planning any software activity, it is necessary to allow for an extensive iteration between the client and the designer as part of the system definition. ... Therefore the most important function that software builders do for their clients is the iterative extraction and refinement of the product requirements.

In the twentieth-anniversary edition of The Mythical Man Month [17], Brooks later quotes Harel, author of STATEMATE [42], who argues strongly that much of the conceptual construct of software is inherently topological in nature and that these relationships have natural counterparts in spatial/graphical representations:

    Using appropriate visual formalisms can have a spectacular effect on engineers and programmers. Moreover, this effect is not limited to mere accidental issues; the quality and expedition of their very thinking was found to be improved.

    Successful system development in the future will revolve around visual representations. We will first conceptualize, using the "proper" entities and relationships, and then formulate and reformulate our conceptions as a series of increasingly more comprehensive models represented in an appropriate combination of visual languages. A combination it must be, since system models have several facets, each of which conjures up different kinds of mental images.

As discussed in Chapter 1, most existing tools for wireless sensor networks focus on either design, simulation, or deployment; none of these allow extensive iteration between design and implementation. To address these problems, this chapter presents Viptos (Visual Ptolemy and TinyOS), a joint modeling and design environment for wireless networks and sensor node software. Viptos is built on Ptolemy II, a graphical modeling and simulation environment for embedded systems, and TOSSIM, an interrupt-level discrete-event simulator for homogeneous TinyOS networks.

TinyOS was chosen because of its large and active user base in the wireless sensor network community, and because its event-driven execution model ties in well with an actor-oriented approach. A TinyOS program consists of a graph of components that are written in an object-oriented style using nesC [32], an extension to the C programming language. Although a large community uses TinyOS in simulation to develop and test various algorithms and protocols, developers face some key limitations when using the nesC/TinyOS/TOSSIM programming toolsuite. As discussed in Chapter 1, users must write their programs in a multi-file, text-based format, even though a graphical block diagram programming environment would be much more intuitive, since a TinyOS program consists of a graph of mostly pre-existing nesC components. TinyOS application developers can use TOSSIM [65], a TinyOS simulator for the PC that can execute nesC programs designed for a mote. TOSSIM contains a discrete-event simulation engine, which allows modeling of various hardware and other interrupt events. TOSSIM can efficiently model large homogeneous networks where the same nesC code is run on every simulated node, but it does not allow simulation of networks that contain different programs. Users may choose from a few built-in radio connectivity models in TOSSIM, but it is difficult to use other models. Similar barriers to integrated design and deployment exist for other popular wireless sensor network development platforms.

On the modeling side, consider VisualSense [8], a Ptolemy II-based graphical modeling and simulation framework for wireless sensor networks that supports actor-oriented definition of sensor nodes, wireless communication channels, physical media such as acoustic channels, and wired subsystems. VisualSense, however, does not provide a mechanism for transitioning from a sensor network application developed within the framework to an implementation for real hardware without rewriting the code from scratch for the target platform.

VisualSense mainly provides an abstract, mathematically-based modeling environment, and node models must be created from scratch. Integrating TinyOS and VisualSense combines the best of both worlds. TinyOS provides a platform that works on real hardware with a library of components that implement low-level routines. VisualSense provides a graphical modeling environment that supports hierarchical, heterogeneous systems. The result, Viptos, allows networked embedded systems developers to construct block and arrow diagrams to create TinyOS programs from any standard library of TinyOS components written in nesC. Viptos automatically transforms the diagram into a nesC program that can be compiled and downloaded from within the graphical environment onto any TinyOS-supported target platform. Viptos also includes the full capabilities of VisualSense, including modeling of communication channels, networks, and non-TinyOS nodes. It presents a major improvement over VisualSense by allowing developers to refine high-level wireless sensor network simulations down to real-code simulation and deployment, and adds much-needed capabilities to TOSSIM by allowing simulation of heterogeneous networks. Viptos provides a bridge between Ptolemy II and TOSSIM by providing interrupt-level simulation of actual TinyOS programs, with packet-level simulation of the network, while allowing the developer to use other models of computation available in Ptolemy II for modeling the physical environment and other parts of the system. This framework allows application developers to easily transition from high-level design and simulation of algorithms to low-level implementation, simulation, and deployment.

The work presented in this chapter has three main contributions. First, it addresses a need for a unified wireless sensor network development environment that allows abstract modeling and refinement to low-level simulation and deployment. Second, it provides insights into the integration of the semantics of two different simulation systems, with different representations of software components, programming languages, type systems, and schedulers. Third, it shows through evaluation that the implementation of the combined system is linearly scalable in the number of nodes, and even without aggressive performance tuning, can simulate moderately large, heterogeneous sensor networks effectively. Section 4.1 describes the architecture of the integrated TinyOS and Ptolemy II toolchain and investigates the semantics of this interface. Section 4.2 evaluates the performance of Viptos. Section 4.3 summarizes this chapter. Related work is presented separately, in Chapter 6 (Section 6.2).


4.1 Design
Viptos provides an integrated toolchain for designing, simulating, and deploying sensor network applications by integrating the programming and execution models and the component libraries of two systems: Ptolemy II/VisualSense and TinyOS/TOSSIM. This section describes the architecture of this integrated system in detail, including the representation of nesC components, the transformation of nesC components into this representation, the generation of deployment and simulation code for TinyOS programs developed in Viptos, and the simulation of sensor network models that include nodes running TinyOS.

4.1.1 Representation of nesC components

Let us review the basics of the nesC programming language used in TinyOS. A nesC component exposes a set of interfaces. An interface consists of a set of methods. A method is known as either a command or an event. A nesC component implements its provides methods and expects other components to implement its uses methods. A nesC component is either a configuration that contains a wiring of other components, or a module that contains an implementation of its interface methods. A TinyOS program consists of a set of nesC components, where the top-level file that describes the application is a nesC component that exposes no interface methods.

Figure 4.1(a) shows a TinyOS program called SenseToLeds that displays the value of a photosensor in binary on the LEDs of a mote. SenseToLeds contains a wiring of the components Main, SenseToInt (whose source code is shown in Figure 4.1(b)), IntToLeds, TimerC, and DemoSensorC. These components are just a few of the nesC components that are available in the TinyOS component library. NesC interfaces can also be parameterized to provide multiple instances of the same interface in a single component. In Figure 4.1(a), the TimerC.Timer interface is parameterized. The Timer interface of SenseToInt connects to a unique instance of the corresponding interface of TimerC. If another component connects to the TimerC.Timer interface, it connects to a different instance. Each timer can be initialized with a different period.

In Ptolemy II, basic executable code blocks are called actors and may contain input and/or output ports. A port may be a simple port that allows only a single connection, or it may be a multiport that allows multiple connections. Fan-in to, or fan-out from, simple ports may be achieved by placing a relation in the path of the connection. A code block is stored in a class, and an actor is an instance of the class.


configuration SenseToLeds {
}
implementation {
  components Main, SenseToInt, IntToLeds, TimerC, DemoSensorC as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToLeds;
  SenseToInt.Timer -> TimerC.Timer[unique("Timer")];
  SenseToInt.TimerControl -> TimerC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToLeds;
}

module SenseToInt {
  provides {
    interface StdControl;
  }
  uses {
    interface Timer;
    interface StdControl as TimerControl;
    interface ADC;
    interface StdControl as ADCControl;
    interface IntOutput;
  }
}
implementation {
  ...
}


Figure 4.1: Sample nesC source code: (a) the SenseToLeds configuration; (b) the SenseToInt module.

Table 4.1: Representation scheme for nesC components in Viptos.

NesC construct                          | Ptolemy II construct | Ptolemy II graphical icon
component                               | class                | block
uses interface                          | output port          | outward pointing triangle
provides interface                      | input port           | inward pointing triangle
non-parameterized interface             | simple port          | black triangle
single-index parameterized interface [1] | multiport            | white triangle
fan-in or fan-out                       | relation             | black diamond

Viptos uses the representation scheme shown in Table 4.1 for the various parts of nesC components. Figure 4.2(c) shows a graphical representation in Viptos of the equivalent wiring diagram for the SenseToLeds configuration shown in Figure 4.1(a). Relations are represented by diamond-shaped icons. Note that the TimerC component in Figure 4.2(c) provides a parameterized interface, or input multiport, as indicated by the white triangle pointing into the block. Non-parameterized interfaces, or simple ports, are represented by black triangles. Viptos can serve as a program design and editing environment: users design programs by manipulating the Ptolemy II graphical icons on the screen, then generate code using the automatic process described later in Sections 4.1.3 and 4.1.4.
[1] Although multiple-index, parameterized interfaces are allowed in nesC, Viptos does not support them, since they are not used in practice and do not appear in any existing components in the TinyOS component library.

4.1.2 Transformation of nesC components

As the implementation for representing nesC components, Viptos uses MoML (Modeling Markup Language) [61], an XML-based language used in Ptolemy II to specify interconnections of parameterized, hierarchical components. As discussed previously, a nesC component is either a subcomponent of an application if it exposes interface methods, or a top-level application if it does not. Viptos treats subcomponents and top-level applications differently when transforming nesC files into MoML.

For nesC subcomponents, Viptos provides a tool called nc2moml. The nc2moml tool harvests TinyOS nesC component files and converts them into MoML class files. Viptos uses the resulting MoML files to display TinyOS components as a library of graphical blocks. The user may drag and drop components from the library onto the workspace and create connections between component interfaces by clicking and dragging between ports. Figure 4.2(c) shows a TinyOS program created graphically using components from the converted library.

The initial version of nc2moml was a modification of the source code of the nesC 1.1 compiler. The current version of nc2moml uses the XML output feature of the nesC 1.2 compiler, which decouples nc2moml from nesC compiler version updates. Both versions of nc2moml generate MoML syntax that specifies the name of the component, as well as the name and input/output direction of each port, and whether each port is a multiport. Figure 4.3 shows the generated MoML code for the TimerC component referenced in Figure 4.1(a).

For nesC top-level applications, Viptos provides a tool called ncapp2moml. The ncapp2moml tool harvests TinyOS nesC application files and converts them into Viptos MoML model files. Unlike the TinyOS component files examined by nc2moml, TinyOS application files in nesC do not have interfaces. The ncapp2moml tool uses information about the nesC wiring graph and the referenced interfaces in the XML output from the nesC 1.2 compiler to generate MoML syntax that specifies a model containing the class corresponding to each nesC component used, the relations required at each port, and the links between the ports and relations, such that the connections in the model correspond to the connections between interfaces in the nesC file. ncapp2moml can also automatically embed the converted TinyOS application into a template model containing a representation of the hardware interface of the node and, optionally, a default physical environment. Figure 4.4 shows an example of a portion of the MoML code generated from the SenseToLeds.nc file shown in Figure 4.1(a).

For both nc2moml and ncapp2moml, Viptos uses the NDReader Java class provided in the nesC 1.2 compiler distribution to parse nesC XML output and create nesC-specific data structures.

Figure 4.2: SenseToLeds application in Viptos. [Panels (a)-(f) are referenced individually in the text.]

The tools use JDOM 1.0 to construct and generate XML output. Viptos does not use XSLT (Extensible Stylesheet Language Transformations) because the generated MoML files are not complex.

<?xml version="1.0"?>
<!DOCTYPE plot PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<class name="TimerC" extends="ptolemy.domains.ptinyos.lib.NCComponent">
  <property name="source" value="$CLASSPATH/tos/system/TimerC.nc" />
  <property name="_displayedName" class="..." value="TimerC" />
  <port name="StdControl" class="ptolemy.actor.IOPort">
    <property name="input" />
    <property name="_showName" class="..." />
  </port>
  <port name="Timer" class="ptolemy.actor.IOPort">
    <property name="input" />
    <property name="multiport" />
    <property name="_showName" class="..." />
  </port>
</class>

Figure 4.3: Generated MoML by nc2moml for TimerC.nc.

4.1.3 Generation of code for target deployment

When a user compiles a TinyOS program for an actual sensor node, the nesC compiler automatically searches the TinyOS component library paths for included components. The nesC compiler generates a pre-processed C file, which it can send to a cross compiler for the target hardware.

Viptos can transform a model of a TinyOS program (as in Figure 4.2(c)) into a nesC file. Note that this is the opposite of ncapp2moml, which means that it is possible to convert back and forth between Viptos models and nesC files. Viptos performs this transformation by means of a director called PtinyOS Director, which controls code generation, simulation, and deployment to target hardware for a single node. A user can configure the PtinyOS Director (Figure 4.2(d)) to compile the generated nesC code to any target supported by the TinyOS make system, including directories containing the components that encapsulate the hardware components specific to the target platform, such as the clock, radio, and sensors.

<entity name="MicaCompositeActor" class="ptolemy.domains.ptinyos.lib.MicaCompositeActor">
  ...
  <entity name="Main" class="tos.system.Main" />
  <entity name="SenseToInt" class="tos.lib.Counters.SenseToInt" />
  <entity name="IntToLeds" class="tos.lib.Counters.IntToLeds" />
  <entity name="TimerC" class="tos.system.TimerC" />
  <entity name="DemoSensorC" class="tos.sensorboards.micasb.DemoSensorC" />
  <relation name="relation1" class="ptolemy.actor.IORelation" />
  <relation name="relation2" class="ptolemy.actor.IORelation" />
  <relation name="relation3" class="ptolemy.actor.IORelation" />
  <relation name="relation4" class="ptolemy.actor.IORelation" />
  <relation name="relation5" class="ptolemy.actor.IORelation" />
  <link relation="relation1" port="Main.StdControl" />
  <link port="SenseToInt.StdControl" relation="relation2" />
  <link relation1="relation2" relation2="relation1" />
  <link port="IntToLeds.StdControl" relation="relation3" />
  <link relation1="relation3" relation2="relation1" />
  <link relation="relation4" port="SenseToInt.Timer" />
  <link port="TimerC.Timer" relation="relation5" />
  <link relation1="relation5" relation2="relation4" />
  ...
</entity>

Figure 4.4: Generated MoML by ncapp2moml for SenseToLeds.nc.

Running the model in Figure 4.2(c) causes the PtinyOS Director to generate a nesC component file for SenseToLeds equivalent to that shown in Figure 4.1(a). Running the model in Figure 4.2(b) causes the PtinyOS Director to generate a nesC file and a makefile; the makefile includes all of the paths necessary for compilation, including cross-compilation to target hardware. The user can also download code to the target hardware from the Viptos interface.

4.1.4 Generation of code for simulation

When a user compiles a TinyOS program for simulation with TOSSIM, the nesC compiler follows the procedure described in the previous section, but with the TinyOS scheduler and device drivers replaced with TOSSIM code. Thus, the TOSSIM executable image depends on the particular TinyOS program specified by the user. If the user specified the ptII simulation target as the target compilation platform, the PtinyOS Director then compiles the nesC file against a custom version of TOSSIM, which Viptos uses internally and that users can run externally, to create a shared library. The PtinyOS Director also generates a Java wrapper to load the shared library into Viptos, so that the PtinyOS Director can run the shared library via JNI (Java Native Interface) method calls.

As a template for modeling a real wireless sensor node, Viptos provides a model of the hardware interface of a Mica mote with sensor board. This hardware representation includes ports for the ADC (analog-to-digital converter) channels connected to sensors that include a thermistor, photoresistor, magnetometer, microphone, and accelerometer, as well as ports for the LEDs and radio communication. Figure 4.2(b) shows this graphically.

In addition to simulating wireless sensor node(s) running TinyOS, Viptos users can model and simulate the physical environment, radio channels, wired subsystems, and other nodes, including non-TinyOS nodes. The user can take advantage of the hierarchical, heterogeneous nature of Ptolemy II to create detailed models of physical phenomena such as light, temperature, and sound, as well as models of entities such as buildings, servers, microservers, and other wireless nodes. Users may also interface to live data through Ptolemy II library blocks such as those that interface with the microphone or the IP (Internet Protocol) network. Developers may choose from diverse models of computation, such as continuous-time, dataflow, synchronous/reactive, time-triggered, and Kahn process networks. A common actor-oriented programming and execution model unifies these modeling capabilities. Thus, the Viptos simulation environment provides more capabilities than TOSSIM alone. Figure 4.2(a) shows a basic example with models of a light source and a sensor node.

4.1.5 Simulation of TinyOS in Viptos

This section explains how Viptos simulates TinyOS programs and discusses the integration of the TOSSIM and Ptolemy II frameworks in terms of scheduling, type system, radio and I/O, and support for multiple nodes and multi-hop routing.

Scheduling

Let us review the basics of the TinyOS scheduling model. In TinyOS, there is a single thread of control managed by the scheduler. NesC component methods encapsulate hardware interrupt handlers. Methods may transfer the flow of control to another component by calling a uses method. Computation performed in a sequence of method calls must be short, or it may block the processing of other events. A long-running computation can be encapsulated in a task, which a method posts to the scheduler task queue. The TinyOS scheduler processes the tasks in the queue in FIFO order whenever it is not executing an interrupt handler. Tasks are atomic with respect to other tasks and do not preempt other tasks, but they may be interrupted by hardware events. To avoid duplicate functionality, Viptos relies on the nesC compiler to do a complete analysis of the connected nesC interface methods at the TinyOS level to detect incorrect usage of commands or events marked with the async keyword, and hence possible race conditions.

TOSSIM is a discrete-event simulator for TinyOS. Its scheduler contains a task queue similar to the regular TinyOS scheduler, as well as an ordered event queue. An event in this queue has a time stamp implemented as a long long in C (a 64-bit integer on most systems). The smallest time resolution is equal to 1/(4 MHz), the original CPU clock period of the Rene/Mica motes. In TOSSIM, all components call the queue_insert_event() function to insert new events into the event queue. Upon initialization, TOSSIM inserts a boot-up event into the event queue. The TOSSIM scheduler begins its main loop by processing all tasks in the task queue in FIFO order. If there is an event in the event queue, the TOSSIM scheduler updates the simulated system time with the time stamp of the new event and then processes the event. The processing of an event may cause new tasks to be posted to the task queue and new events to be created with time stamps possibly equal to the current time stamp. Figure 4.5 summarizes the scheduling algorithm.

    while (true) {
        while there are TinyOS tasks {
            Process them.
        } end while
        if the event queue is not empty {
            Set the TOSSIM time to the time of next event.
            Handle the event.
        } end if
    } end while

Figure 4.5: TOSSIM scheduling algorithm.
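The event queue just described can be pictured with a small C sketch. Everything here is illustrative except the long long time stamp and the queue_insert_event() name, which the text above mentions; the field names and list structure are my assumptions:

    #include <stddef.h>

    typedef struct event event_t;
    struct event {
        long long time;                 /* time stamp, in 1/(4 MHz) ticks */
        void (*handle)(event_t *self);  /* invoked when the event fires   */
        event_t *next;                  /* singly linked, time-ordered    */
    };

    static event_t *event_queue = NULL;

    void queue_insert_event(event_t *e) {
        event_t **p = &event_queue;
        while (*p != NULL && (*p)->time <= e->time) {
            p = &(*p)->next;            /* keep FIFO order among equal times */
        }
        e->next = *p;
        *p = e;
    }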

In Viptos, a node model contains an instance of PtinyOS Director, which compiles and loads a custom copy of TOSSIM that simulates the code for a single node. Viptos uses a specialization of the discrete-event (DE) domain of Ptolemy II [15] created for modeling wireless systems in VisualSense. The DE domain provides execution semantics where interactions between components occur via events with time stamps. The DE domain uses a sophisticated calendar-queue scheduler to efficiently process events in chronological order. Formal semantics ensure determinate execution of deterministic models [59], although the DE domain also supports stochastic models for Monte Carlo simulation. The precision in the semantics prevents the unexpected behavior that sometimes occurs due to modeling idiosyncrasies in some modeling frameworks. At the top level of a model, the specialized DE director may control one or more node models.

Viptos controls the execution of TOSSIM by using customized TOSSIM scheduler and device driver functions that notify Viptos of all TOSSIM events. Viptos uses a modified TOSSIM queue_insert_event() function that also makes a JNI call to insert an event with the TOSSIM time stamp into the event queue of the Ptolemy II discrete-event scheduler (DE director) that controls the PtinyOS Director.[2] Thus, Viptos uses the same event time stamps as TOSSIM.

[2] The JNI call uses fireAt() with the TOSSIM system time as the argument.

At each event time stamp, Viptos calls the custom TOSSIM scheduler to process the event. The main loop updates the TOSSIM system time, processes an event in the TOSSIM event queue, and then processes all tasks in the task queue. If the TOSSIM event queue contains another event with the current TOSSIM system time, the scheduler processes that event along with any tasks that may have been generated. This last step is repeated until there are no other events with the current TOSSIM system time, since tasks may generate events with the current TOSSIM time stamp.

Figure 4.6 summarizes the scheduling algorithm. Note that the order of operations in the main loop of the custom TOSSIM scheduler is opposite that of the original TOSSIM, which processes all tasks before updating the TOSSIM system time and processing an event in the TOSSIM event queue. This change is required in order to guarantee causal execution in Viptos; otherwise, new events may have a time stamp that is before the current Ptolemy II system time. The results are predictable and consistent.

    do {
        if the event queue of this instance of TOSSIM is not empty {
            Set the TOSSIM time to the time of next event.
            Handle the event.
        } end if
        while there are TinyOS tasks {
            Process them.
        } end while
    } while (the event queue is not empty and the time of the next event
             is the same as the current TOSSIM time)

Figure 4.6: Viptos version of the TOSSIM scheduling algorithm.

Viptos also supports models with dynamically changing interconnection topologies and treats changes in connectivity as mutations of the model structure, e.g., by adding, deleting, or moving actors, or by changing the connectivity between actors. The software is carefully architected to support multithreaded access to this mutation capability: one thread can be executing a simulation of the model while another changes the structure of the model.

Type system

NesC components in TinyOS and TOSSIM use the type system provided by the C programming language. Ptolemy II provides its own type system, in which actors, parameters, and ports may all impose constraints on types; a type resolution algorithm identifies the most specific types that satisfy all the constraints. Communication between actors in Ptolemy II occurs through typed tokens. Viptos composes these two type systems, the C type system and the Ptolemy II type system, so that static type analysis can be performed. A special Java base class created for Viptos, called TypeOpaqueCompositeActor, allows a Ptolemy II actor's ports to have types, but does not require that the actors inside use the Ptolemy II type system. This facilitates the embedding of a different type system within Ptolemy II. A Viptos submodel containing nesC components uses a subclass of this base class, called PtinyOSCompositeActor, so that the components can use the C type system.

Viptos performs automatic type conversion between the two type systems during simulation. Since the data communicated between TOSSIM and Ptolemy II only involve a mote's hardware interface, Viptos can limit type conversion to the data types required by the ADC interface, the LEDs, and the packets sent and received over the radio. Viptos uses JNI functions in the custom copy of TOSSIM to automatically convert between the C types used in TOSSIM and the token types used in Ptolemy II. The types provided by C, however, usually do not match the actual data types of the hardware interface; TinyOS and TOSSIM use arbitrary data types to represent values with different bit widths.

Sensor data modeled in Ptolemy II typically use tokens with values of type double. The ADC channels of a mote use 10-bit unsigned values, and TOSSIM represents an ADC value with an unsigned short integer masked for 10-bit usage. When TOSSIM requests an ADC value, Viptos automatically performs the lossy conversion from a double-valued token in Ptolemy II to a masked unsigned short integer value in TOSSIM. Although LED state is binary, TOSSIM represents an LED value with a char. When TOSSIM updates the state of the LEDs, Viptos automatically converts the char in TOSSIM into a boolean-valued token in Ptolemy II, which Viptos uses to change the animation state of the simulated LEDs. In TOSSIM, TinyOS packets are represented by a C data structure containing a char array; in Viptos, they are represented using Ptolemy II string tokens. In order to maintain a standard endian format and enable easy parsing of packets, Viptos automatically converts between the TOSSIM char array representation and the Ptolemy II string token representation whenever a node transmits or receives a packet.
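The lossy conversions just described might look as follows on the JNI boundary; this is a sketch, and the function names are illustrative only:

    #include <stdbool.h>

    /* Ptolemy II double-valued token -> 10-bit ADC reading for TOSSIM. */
    unsigned short adc_from_token(double value) {
        if (value < 0.0)    value = 0.0;
        if (value > 1023.0) value = 1023.0;
        return (unsigned short)value & 0x3FF;  /* mask to 10 bits */
    }

    /* TOSSIM LED state (char) -> boolean token for the animated LEDs. */
    bool led_token_from_char(char led_state) {
        return led_state != 0;
    }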

Radio and I/O

TOSSIM has built-in models for per-node ADC values and for radio connectivity between multiple nodes, as well as an interface for manually setting the per-node and per-link values and probabilities. In Viptos and VisualSense, the algorithm for determining radio connectivity is itself encapsulated in a component as a channel model, and hence can be developed by the model builder. Both Viptos and VisualSense provide several built-in models, including AtomicWirelessChannel, DelayChannel, LimitedRangeChannel, ErasureChannel, and PowerLossChannel (see the left-hand pane of Figure 4.7(a)). Both tools can determine connectivity on the basis of the physical locations of the components.

Viptos overrides the built-in ADC and radio models and LED device drivers in TOSSIM so that they send data to, and receive data from, the ports of the node model. This allows the simulated node to interact with user-created models, such as sources of light (e.g., Figure 4.2(e)), temperature gradients, radio channels, and other nodes.

In the DE domain of Ptolemy II, tokens received at the input port of an actor cause the actor to fire at the time of the token time stamp. The actor usually consumes the token, at which point the port becomes empty. In Viptos, the node model may receive tokens at the ADC ports that represent new values. To reconcile the difference in timing between when the simulated environment makes a new ADC value available and when the simulated node reads its ADC ports, Viptos uses a Ptolemy II PortParameter instead of a Port for the ADC ports. This usage of PortParameter makes the port value persistent between updates, such that when the TinyOS program requests data from the ADC port, the program gets the value of the most recently received token. Figure 4.2(a) shows an example containing a model of a light source and a node running the SenseToLeds TinyOS program. Viptos transmits light source data to the sensor node by means of a photo port (Figure 4.2(b)) associated with a LimitedRangeChannel named PhotoChannel (Figure 4.2(a)).

Multiple nodes and multi-hop routing

TOSSIM simulates one or more nodes with the same TinyOS program by maintaining a copy of the state of each component for each simulated node. The nesC compiler has built-in support for generating arrays to store these copies, so that users do not need to modify the TinyOS program source code when compiling for TOSSIM. Viptos simultaneously simulates multiple nodes with possibly different programs by embedding multiple node models, with each TinyOS node containing a different PtinyOS Director, into the Wireless domain (the specialized DE domain). To prevent namespace collision between different simulated TinyOS programs, Viptos separately compiles and loads a shared library for each node. Viptos performs this by passing a unique name for each node to the nesC compiler, which the compiler then inserts into the TOSSIM source code by means of macros; a sketch of this technique follows below. Since Viptos models have a global discrete-event scheduler, all nodes operate on the same time reference.
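As a sketch of the macro technique just mentioned (the macro names here are hypothetical, not the actual TOSSIM/Viptos ones):

    /* Compile each node's copy with, e.g., -DNODE_UNIQUE_NAME=node0 so that
     * every global symbol gets a per-node prefix and the per-node shared
     * libraries do not collide when loaded into the same process. */
    #define CONCAT2(a, b)     a##_##b
    #define CONCAT(a, b)      CONCAT2(a, b)
    #define NODE_SYMBOL(name) CONCAT(NODE_UNIQUE_NAME, name)

    int NODE_SYMBOL(state);       /* expands to node0_state for this node */

    void NODE_SYMBOL(boot)(void)  /* expands to node0_boot()              */
    {
        NODE_SYMBOL(state) = 0;
    }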

Multiple nodes and multi-hop routing TOSSIM simulates one or more nodes with the same TinyOS program by maintaining a copy of the state of each component for each simulated node. The nesC compiler has built-in support for generating arrays to store these copies, so that users do not need to modify the TinyOS program source code when compiling for TOSSIM. Viptos simultaneously simulates multiple nodes with possibly different programs by embedding multiple node models, with each TinyOS node containing a different PtinyOS Director, into the Wireless domain (the specialized DE domain). To prevent namespace collision between different simulated TinyOS programs, Viptos separately compiles and loads a shared library for each node. Viptos performs this by passing a unique name for each node to the nesC compiler, which the compiler then inserts into the TOSSIM source code by means of macros. Since Viptos models have

74 a global discrete-event scheduler, all nodes operate on the same time reference. Figure 4.7 shows an example model containing two nodes that communicate over a lossless radio channel (AtomicWirelessChannel) with full connectivity. The node on the left contains the CntToLedsAndRfm TinyOS program, which maintains a counter on a 4 Hz timer, displays the counter value on the LEDs, and sends it over the radio in a TinyOS packet. The node on the right contains the RfmToLeds TinyOS program, which listens for radio packets and displays any received counter values on the LEDs. A user can easily replace the radio channel model by deleting it and dragging in a different channel model from the menu in the left-hand pane. Though the application shown in Figure 4.7 uses broadcast, Viptos also supports multi-hop routing. Viptos accomplishes this by passing a node ID to the nesC compiler for each custom copy of TOSSIM. The modified TOSSIM code uses this node ID where it would normally be used in TinyOS, instead of using the default TOSSIM value of the index of the array containing the state of the nodes. Viptos allows users to indicate globally the name of the base station in the PtinyOS Director configuration screen, as shown in 4.2(d). Viptos includes a multi-hop routing demonstration that models a network with multiple TinyOS nodes running the Surge multi-hop routing protocol application, shown in Figure 4.8, where the base station is node 0.

4.2 Performance Evaluation
This section evaluates the scalability of Viptos in terms of execution time as the number of nodes increases. It separately evaluates the execution time of applications without radio usage and the execution time of applications with radio usage, in order to determine the scalability of communication within the framework.

I collected timing information on an Intel Pentium M 760 processor (2.0 GHz, 2 MB L2 Cache, 533 MHz FSB) with 1024 MB of SDRAM, running Ubuntu 6.06 LTS (Dapper Drake) with Linux kernel 2.6.15-27-386. The tools I used included nesC 1.2.7a, gcc 3.4.3, TinyOS 1.x, and Sun Java VM 1.4.2_13-b06 with a heap size of 512 MB. In order to run large models, I increased the maximum number of open file descriptors allowed in the Bash shell from a default of 1024 to 20000 with the ulimit -n command. To eliminate timing variance due to random boot times, I set all nodes to boot at virtual time 0.0 seconds. I did not set the TOSSIM DBG environment variable, which affects which event debug messages get generated. To eliminate timing variance from printing to the screen under X11, I sent all printed debug messages (on stdout or stderr) from all copies of TOSSIM to /dev/null.

Figure 4.7: Send and receive application in Viptos.

Figure 4.8: Multi-hop routing in Viptos.

4.2.1 Comparison to TOSSIM

This section uses the SenseToLeds application to evaluate the scalability of Viptos as the number of nodes increases and to compare it to TOSSIM. For TOSSIM, I used the /usr/bin/time command to measure the execution time of the SenseToLeds application from the tinyos-1.x CVS tree. I discarded the timing measurement for the first run in each experiment to eliminate timing variance due to caching. For Viptos, I instrumented the PtinyOS Director with calls to the Java Date().getTime() and Runtime.getRuntime() methods to measure elapsed time while running the SenseToLeds application displayed in Figure 4.2(a). I eliminated the model of the environment in order to make a fair comparison to TOSSIM, since TOSSIM uses random ADC values by default. For modeling additional nodes, I copied and pasted existing nodes into the graph.

To measure the overhead due to integrating TOSSIM with Ptolemy II, I started timing right before Viptos invoked the internal copy of TOSSIM. This does not include the overhead of running the nesC compiler and loading the TOSSIM shared object into memory. I stopped timing at the beginning of wrapup(), which eliminates timing delay due to waiting for remaining threads to join, since thread joining is only necessary for running the model multiple times within a graphical environment. To reduce timing variance due to Java garbage collection, I instrumented Viptos to call System.gc() to perform garbage collection before starting the timing measurement. I discarded the timing measurement for the first run in each experiment to eliminate timing delay due to loading of new Java classes. For a given number of nodes, I collected multiple runs from the same instantiation of Viptos; I then saved the model, restarted Viptos, and took additional measurements. For models with multiple nodes, I used the timing information from the last node to start, since nodes must wait until Viptos invokes all internal copies of TOSSIM before simulation can proceed because they all operate on the same time reference.

This section does not present timing overhead in Viptos for opening files, instantiation of Java objects, running the nesC, gcc, and Java compilers, or loading shared objects. This overhead scales linearly with the number of nodes, and is on the order of a few seconds for small models, and several minutes for large models.
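The instrumentation described above follows a pattern like this hypothetical sketch (runSimulation() is a placeholder, not a real Viptos method):

    import java.util.Date;

    // Hypothetical timing harness mirroring the methodology described above.
    public class TimingHarness {
        public static void main(String[] args) {
            System.gc(); // reduce variance due to Java garbage collection
            long start = new Date().getTime(); // right before TOSSIM is invoked
            runSimulation();
            long elapsed = new Date().getTime() - start; // stop at wrapup()
            System.out.println("elapsed: " + elapsed + " ms");
        }

        private static void runSimulation() {
            // stand-in for running the embedded copy of TOSSIM
        }
    }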

Figure 4.9 shows the average execution time of the SenseToLeds application with a virtual run time of 300.0 seconds for an increasing number of nodes. The figure shows that Viptos has more overhead when compared to TOSSIM, but that both simulators scale linearly in the number of nodes. Using a least squares linear regression, the results show that approximately 410 nodes can be simulated in 300.0 real seconds or less, which means that Viptos can simulate networks up to this size in real time. The exact number for any given application depends on the fidelity of simulation required and the complexity of the application. So, in exchange for slightly increased execution time, the user gains increased modeling and simulation capabilities and flexibility, and an interactive, graphical programming environment.

Figure 4.9: Execution time of the SenseToLeds application as a function of the number of nodes. Each simulation ran for 300.0 virtual seconds.

4.2.2 Radio

This section evaluates the scalability of models that use the radio using the same techniques described in the previous section. I created a model similar to that of the SendAndReceiveCnt application shown in Figure 4.7, with a varying number of senders and receivers. The model uses a lossless radio channel model with full connectivity. Senders send packets at 4 Hz. This analysis used a virtual run time of 120.0 seconds for all nodes. To eliminate timing variance due to the graphical interface, I disabled animation of the LEDs. Figure 4.10 shows the average execution time for this model.

Figure 4.10: Execution time of a radio send and receive model in Viptos as a function of the number of senders and receivers. Each simulation ran for 120.0 virtual seconds.

The plot in Figure 4.10 shows that the main determinant of execution time is the total number of nodes, whether or not the radio is used. The number of senders versus receivers has no noticeable effect. The execution time of the model increases linearly with the number of nodes.

4.3 Summary

This chapter described an extensible actor-oriented software framework for modeling sensor networks. This tool, called Viptos, builds upon Ptolemy II and TinyOS, and provides an integrated graphical design and simulation environment. Viptos allows users to easily transition from high-level, hierarchical, heterogeneous modeling to low-level implementation, simulation, and deployment. This chapter showed that Viptos simulator performance is scalable, and that execution time scales linearly as a function of the number of nodes; even without aggressive performance tuning, Viptos can simulate moderately large sensor networks effectively.


Chapter 5

Metaprogramming for Wireless Sensor Networks

In The Mythical Man Month [17], Frederick P. Brooks, Jr. asserts that "radically better software robustness and productivity are to be had only by moving up a level, and making programs by the composition of modules, or objects." Chapter 3 explained how to build wireless sensor node programs from pre-existing TinyOS/nesC components, using an actor-oriented framework called galsC. Chapter 4 explained how to build wireless sensor network applications graphically from pre-existing, actor-oriented components and pre-existing TinyOS/nesC components, using an actor-oriented framework called Viptos. This chapter explains how to programmatically specify the wireless sensor network application itself through a variety of techniques that combine higher-order actors or components with generative programming and metaprogramming.

5.1 Generative Programming and Metaprogramming

Generative programming and metaprogramming are very similar concepts. According to Wikipedia [104], metaprogramming is the writing of computer programs that write or manipulate other programs (or themselves) as their data. Like Sztipanovits and Karsai [93], I use the term "generative programming" in a broad sense: systems or components of systems are automatically generated from a specification written in one or more textual or graphical domain-specific languages [26]. The terms "generative programming" and "metaprogramming" are often used interchangeably. In this dissertation, however, I differentiate between them—a metaprogram does not necessarily generate a new program or system, although it may accept other programs or systems as input.

In Actor-Oriented Metaprogramming by Neuendorffer [74], actor-oriented models are viewed as descriptions of concurrent software architectures. Neuendorffer describes a metaprogramming system that transforms actor-oriented models in Ptolemy II into self-contained Java code, where partial evaluation is used as a way to generate more efficient programs. He argues that partial evaluation generally requires less explicit specification by a programmer than other metaprogramming techniques. It is particularly effective in this use case, since a generic actor specification is specialized to a particular role in the model, and both the generic actor and specialized actor perform the same role and produce the same behavior.

The benefits of metaprogramming are best described by Brooks in The Mythical Man Month [17], where he discusses them in the context of using shrink-wrapped software packages as components:

    The metaprogramming concept is not new, only resurgent and renamed. In the early 1960s, computer vendors and many big management information systems (MIS) shops had small groups of specialists who crafted whole application programming languages out of macros in assembly language; in effect, structured metaprograms... Now the chunks offered by the metaprogrammer are many times larger than those macros... The shrink-wrapped package provides a big module of function, with an elaborate but proper interface, and its internal conceptual structure does not have to be designed at all... Next-level application builders get richness of function, a shorter development time, a tested component, better documentation, and radically lower cost.

5.2 Higher-order Functions, Actors, and Components

Related to metaprogramming is the concept shared by higher-order functions, higher-order actors, and higher-order components. A higher-order function takes a function argument or produces a function result. Higher-order functions are one of the more powerful features of functional programming languages, as they can be used to capture patterns of computation. According to Reekie [82] (emphasis mine):

    [S]ome higher-order functions encapsulate common types of processes... Vector iterators are higher-order functions that apply a function across all elements of a vector; each of them captures a particular pattern of iteration, allowing the programmer to re-use these patterns without risk of error... [O]ther higher-order functions capture common interconnection patterns, such as serial and parallel connection; yet others represent various linear, mesh, and tree-structured interconnection patterns... This is one of the most persuasive arguments in favour of inclusion of higher-order functions in a programming language.
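As a concrete illustration of the idea (in Java rather than a functional language; the code is illustrative only and not part of the tools described here), a map function captures the iteration pattern once:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    public class MapDemo {
        // map is a higher-order function: it takes the function f as an
        // argument and encapsulates the iteration pattern, so callers can
        // reuse the pattern without risk of error.
        static <A, B> List<B> map(Function<A, B> f, List<A> xs) {
            List<B> result = new ArrayList<>(xs.size());
            for (A x : xs) {
                result.add(f.apply(x));
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(map(x -> 2 * x, List.of(1, 2, 3))); // [2, 4, 6]
        }
    }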

Reekie then explains how the concept of higher-order functions can be applied to actors [82]:

    ...the map actor [in Visual Haskell] takes a function as its parameter, which it applies to each element of its input channel. An actor of this kind mimics higher-order functions in functional languages, and could therefore be called a higher-order actor... If f is known, an efficient implementation of map( f ) can be generated; if not, the system must support dynamic creation of functions since it will not have knowledge of f until run-time... Further work is required to explore forms of higher-order function mid-way between fully-static and fully-dynamic: for example, a code generator that produces a loop with an actor as its body, but with number of loop iterations unknown, could still execute very efficiently.

Reekie explains higher-order actors in Ptolemy Classic [82]:

    Special blocks represent multiple invocations of a "replacement actor." The Map actor, for example, is a generalised form of mapV [the vector iterator higher-order function]... Unlike mapV, Map can accept a replacement actor with arity > 1; in this case, the vector of input streams is divided into groups of the appropriate arity (and the number of invocations of the replacement actor reduced accordingly). At compile time, Map is replaced by the specified number of invocations of its replacement actor. Thus [the system] avoid[s] embedding unevaluated closures in streams... The requirement that the number of invocations of an actor be known at compile-time ensures that static scheduling and code generation techniques will still be effective... The most basic use of icons in [the Ptolemy Classic] visual syntax may therefore be viewed as implementing a small set of built-in higher-order functions.

Higher-order actors gain their power from a key restriction: "the replacement actor is specified by a parameter, not by an input stream." Lee and Parks [62] explain that "dataflow processes with state cover many of the commonly used higher-order functions in Haskell."

Just as functions may serve as arguments to higher-order functions in functional programming languages, components may serve as parameters to higher-order components in composition languages, or languages for constructing networks of components [19]. In a higher-order composition language such as Ptalon [19], the structure of a system is effectively parameterizable, and the parameters may be other systems. Like higher-order functions in Visual Haskell [82], higher-order components are the most powerful feature of these types of languages, since they capture patterns of instantiation and interconnection between components. An interesting aspect of Ptalon is that it is, to quote Reekie, "mid-way between fully-static and fully-dynamic." The next section investigates Ptalon in more detail.

5.3 Ptalon

Ptalon [18, 19] is a higher-order composition language for constructing higher-order components in Ptolemy II. In Ptolemy II, a higher-order component is called a PtalonActor. Ptalon makes it easy to parameterize a component with the number and types of subcomponents that should be generated within the component, which minimizes the amount of input a system designer must provide to create a new system. Cataldo proved mathematically that higher-order components can lead to succinct syntactic descriptions of large systems, thus enabling a form of scalability in system design [19]. Using the definitions presented in Section 5.1, Ptalon is both a generative programming system and a metaprogramming language, since Ptalon automatically generates components from a specification written in a textual language, and Ptalon accepts components as arguments (inputs) to other components.

In Cataldo's original Ptalon implementation for Ptolemy II [19], components passed as parameters to these higher-order components are atomic actors (i.e., they are specified in Java, the underlying programming language of Ptolemy II). I have improved the Ptolemy II implementation of Ptalon to allow composite actors in addition to atomic actors. That is, an application developer can specify an actor not only with a Java file, but also with an XML file containing an arbitrary collection of actors. The original implementation also assumes that models containing higher-order components are static; arguments to a higher-order component cannot change once specified. I have improved the Ptalon system for evaluating parameters such that the values of Ptalon parameters can be changed at run-time. The following sections present an example that uses the improved version of Ptalon and explain the implementation of the parameter reconfiguration capabilities.

A developer can use Ptalon to easily generate sensor network applications and configurations. For example, the specified subcomponents may be different types of wireless sensor nodes running various individual programs, with varying values for the nodes' range and location parameters.

5.3.1 A simple example

Ptalon code is written in a simple declarative style. Figure 5.1 shows a sample Ptalon file that specifies a component containing n components of type RelayNode. The value of the local variable i is set by the for loop, whereas the value of the parameter n is specified externally. A user specifies the value of n as a parameter of the PtalonActor. Ptalon uses the Ptolemy II expression language to evaluate all values within double brackets ([[ ]]).

In this example. and eventual validation against a real-world implementation.85 language to evaluate all values within double brackets ([[ ]]). a PtalonActor only needs to save its parameter values. The second populator phase of the Ptalon compiler begins only when the values of all parameters of the PtalonActor are known. In its initial phase.2(a). a user places a new PtalonActor in a Ptolemy II graph. Once the user sets this parameter to reference a Ptalon file. The PtalonActor parameter configuration window initially shows a blank value for the ptalonCodeLocation parameter. This allows simulation of abstract and concrete node and environment models with various parameters. then refine and replace these components with a real code implementation that uses TinyOS. Note that since the PtalonActor automatically populates itself with actors. which allow a user to change a parameter to specify different numbers of TinyOS nodes. I have implemented Ptalon-based versions of the SenseToLeds and SendAndReceiveCnt examples presented in Chapter 4. The Ptalon compiler walks the AST and creates the remaining entities. The first populator phase of the Ptalon compiler occurs next. the Ptalon compiler parses the Ptalon file and creates an abstract syntax tree (AST). Figure 5. The Ptalon compiler creates all entities as part of the PtalonActor submodel. A user specifies the value of n as a parameter of the PtalonActor. each of the components are nodes that are actually composite actors that contain other components. as shown in Figure 5. in which the Ptalon compiler instantiates any entities that do not depend on unknown parameter values. Ptalon can also be integrated with Viptos (see Chapter 4). for which the user can then give values. . Figure 5.2(c) shows the components generated inside the PtalonActor.1.2(a) shows a Ptolemy II model containing an instance of a PtalonActor called MultipleNodesMoML that references the Ptalon file in Figure 5.3 shows the XML code for the model shown in Figure 5. and not its internal configuration. An application developer can start with regular components that use pre-existing Ptolemy II domains. the PtalonActor then reconfigures its parameter configuration window to show the parameters declared in the Ptalon file. The Ptalon compiler is implemented within Ptolemy II and is invoked as soon as the PtalonActor is set to reference a particular Ptalon file. To use Ptalon within Ptolemy II. The Ptalon compiler consists of multiple phases. Figure 5.2(b). as shown in Figure 5.2(d).

MultipleNodesMoML is {
    actor node = ptolemy.domains.wireless.demo.SmallWorld.RelayNode;
    parameter n;
    for i initially [[ 1 ]] [[ i <= n ]] {
        node( range := [[ 40 + 10 * i ]],
              _location := [[ [100*i, 100*i] ]] );
    } next [[ i + 1 ]]
}

Figure 5.1: MultipleNodesMoML.ptln

5.3.2 Reconfiguration in Ptalon

In his dissertation [82], Reekie discusses actor parameters:

    Execution of an actor proceeds in two distinct phases: i) instantiation of the actor with its parameters, and ii) execution of the actor on its stream arguments... Lee stresses the difference between parameter arguments and stream arguments in Ptolemy: parameters are evaluated during an initialisation phase; streams are evaluated during the main execution phase. Thus, code generation can take place with the parameters known, but with the stream data unknown... [T]he separation between parameters and streams—and between compile-time and run-time values—is both clear and compulsory.

What happens if a so-called "compile-time" parameter value changes at run-time? The value of a PtalonActor parameter may be an actual token that has a type corresponding to one in the Ptolemy II token type lattice, or it may be a reference to a model parameter. For the latter option, a change in the value of the referenced model parameter results in a change to the actual value of the PtalonActor parameter. As a result, if the value of a PtalonActor parameter changes, it may cause the internal configuration of the PtalonActor to change, which necessitates a reconfiguration of the PtalonActor.

The Ptalon compiler implementation in Ptolemy II uses two steps to handle any change to the value of a PtalonActor parameter. First, the compiler deletes the internal representation of all entities and relations in the PtalonActor, while preserving existing ports. Second, the Ptalon compiler restarts itself in its initial phase (as described in the previous section), using the newly assigned value of the parameter, as well as existing values for any other parameters. The Ptalon compiler then proceeds through the population phases, and reuses existing ports whenever possible during the populator phase.
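The reconfiguration hook itself can be sketched as follows, assuming the standard Ptolemy II attributeChanged() method; this toy actor is illustrative and far simpler than the real PtalonActor:

    import ptolemy.actor.TypedAtomicActor;
    import ptolemy.data.IntToken;
    import ptolemy.data.expr.Parameter;
    import ptolemy.kernel.CompositeEntity;
    import ptolemy.kernel.util.Attribute;
    import ptolemy.kernel.util.IllegalActionException;
    import ptolemy.kernel.util.NameDuplicationException;

    // Sketch of reacting to a "compile-time" parameter changing at run-time.
    // A PtalonActor would delete and regenerate its internal entities at
    // this point; this toy actor merely caches the new value.
    public class ReconfigurableActor extends TypedAtomicActor {
        public Parameter n;
        private int _n;

        public ReconfigurableActor(CompositeEntity container, String name)
                throws IllegalActionException, NameDuplicationException {
            super(container, name);
            n = new Parameter(this, "n");
            n.setExpression("3");
        }

        public void attributeChanged(Attribute attribute)
                throws IllegalActionException {
            if (attribute == n) {
                _n = ((IntToken) n.getToken()).intValue();
                // rebuild internal structure here if _n has changed
            } else {
                super.attributeChanged(attribute);
            }
        }
    }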

Figure 5.2: PtalonActor in Ptolemy II.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE entity PUBLIC "-//UC Berkeley//DTD MoML 1//EN"
    "http://ptolemy.eecs.berkeley.edu/xml/dtd/MoML_1.dtd">
<entity name="MultipleNodesMoML" class="ptolemy.actor.TypedCompositeActor">
    <entity name="MultipleNodesMoML" class="ptolemy.actor.ptalon.PtalonActor">
        <configure>
            <ptalon file="ptolemy.actor.ptalon.demo.MultipleNodes.MultipleNodesMoML">
                <ptalonExpressionParameter name="n" value="3"/>
            </ptalon>
        </configure>
    </entity>
</entity>

Figure 5.3: MultipleNodesMoML.xml

Neuendorffer [74] enumerated the ways in which reconfiguration of model parameters may occur in Ptolemy II, which I summarize and extend here:

• Interactive editing. A user may change parameters in Ptolemy II through interactive editing of the model, usually via a dialog box associated with the model, parameter, or actor of interest.

• Modal model. A modal model is an extended version of a finite state machine, in which each state of the finite state machine contains a dataflow model, or refinement, that is active in that particular state. Essentially, the active dataflow model replaces the finite state machine until the state machine makes a state transition. Finite state machine transitions can reconfigure parameters of the target state's refinement when the transition is taken. The Ptolemy II user manual [16] contains more details on constructing modal models.

• Reconfiguration port. Also known as a PortParameter, a reconfiguration port is a special form of dataflow input port. Ptolemy II binds each reconfiguration port to a parameter of the port's actor, and tokens received through the port reconfigure the parameter.

• Reconfiguration actor. The SetVariable actor is a special actor that has a single input port. Ptolemy II associates this actor with a parameter of the containing model. The actor consumes a single token during each firing and reconfigures the associated parameter during the quiescent point after the firing.

Another way reconfiguration of model parameters may occur in Ptolemy II is through the use of higher-order actors (I do not include PtalonActor as part of this discussion):

• ModelReference and VisualModelReference. The ModelReference and VisualModelReference actors are both atomic actors that can execute a model specified by a file or URL (Uniform Resource Locator). A developer can use these actors to define an actor whose firing behavior is given by a complete execution of another model. If the actor has input ports, then on each firing, the actor reads an input token from the input port, if there is one, and uses it to set the value of a top-level parameter in the referenced model that has the same name as the port, before executing the referenced model.

• RunCompositeActor. This actor is almost the same as ModelReference and VisualModelReference, only it is a composite actor instead of an atomic actor. On each firing, the actor executes the contained model completely, as if it were a top-level model. The actor also uses tokens received at an input port to set the value of a top-level parameter with the same name in the contained model, if there is one.

• ModelDisplay. This actor opens a window to display the specified model. The model developer can provide inputs that are MoML strings that the actor applies to the specified model; the developer can add these ports to an instance of this actor. The developer can use this, for example, to create animations by changing parameter values.

5.4 Specifying WSN Applications Programmatically

In this section, I present methods for specifying wireless sensor network applications programmatically by combining in various ways higher-order actors in Ptolemy II with an improved version of VisualSense/Viptos, and I explain when a particular method might be most applicable.

5.4.1 Motivation

"On the Credibility of Manet Simulations" [4], an article by Andel and Yasinsac, summarizes various articles that question the credibility of published simulation results in the mobile ad hoc network (MANET) research community. Problems cited include lack of independent repeatability, lack of statistical validity, use of inappropriate radio models, unrealistic application traffic, improper precision, improper/nonexistent validation, and lack of sensitivity analysis. Andel and Yasinsac's proposed solution to the first problem, lack of independent repeatability, is to properly document all settings. Since publication venues have limited space, they suggest including only major settings and/or providing all settings as external references to research web pages, which should include freely available code/models and applicable data sets.

I also discuss how these techniques can address the other problems cited.

Ptolemy II is well-suited to address this problem. It is an open-source tool whose source code is freely distributable and modifiable. Ptolemy II models are simple XML files that are easy to publish on the web, and the Ptolemy II version number with which they are built is automatically stored in the XML file. The models described in the following sections show that with the techniques introduced in this dissertation, wireless sensor network simulations are easily repeatable, which addresses the problem of lack of independent repeatability, as well as many of the other problems cited.

5.4.2 Small World

The SmallWorld example shown in Figure 5.4 illustrates a phenomenon where ad hoc networks achieve connectivity with fewer hops on average with a network that is less reliable but where ranges are longer, than with a network that is more reliable but ranges are shorter. Franceschetti and Meester showed that on average, fewer hops are needed when the range increases [31]. When the user runs the model, an Initiator component (Figure 5.4(b)) broadcasts a message. Each node in the sensor network rebroadcasts the first message it receives. A node turns red if it receives the message in one hop; it turns green if it receives it in more than one hop; it stays white if it never receives the message. If the user increases the range above sureRange, then the probability of delivery drops according to the formula shown, which keeps the expected number of recipients roughly constant. The model plots a histogram of the number of nodes that receive the message after one hop, after two hops, etc. All nodes (not including the Initiator) have the same implementation, identical to that shown in Figure 5.2(d).

5.4.3 Parameter Sweep

I now introduce two different models, both of which perform the same set of experiments: a slightly modified version of the SmallWorld model (shown in Figure 5.5) is run as a submodel with the same sets of changing parameter values. Figure 5.4 shows the SmallWorld model as originally implemented in VisualSense. The NodeRandomizer actor randomizes the locations of the nodes at the beginning of each run. There are only a few differences between the modified version and the original version: (1) the modified version stores the histogram data in a file whose name is specified by a new parameter; (2) the modified version has an additional parameter created to allow node location randomization to be controlled externally; and (3) the modified version uses non-zero random seeds so that each run is repeatable.

I will call this version of the SmallWorld model the ParameterSweep version. The initial location of the nodes (not including the Initiator) is not significant. Both top-level models store the settings used as part of the model itself, and no additional configuration files are needed. For each run, the model creates an output file with the stored histogram data. This allows application developers to create simulation scenarios that are independently repeatable, and to validate their algorithms by quickly creating new simulation scenarios via a few simple parameter value changes, e.g., for the purposes of sensitivity analysis [83].

Modal model

Figure 5.6(a) shows a modal model in which the main state (named state and highlighted in green) contains a refinement (Figure 5.6(b)). This refinement is an SDF (synchronous dataflow) model containing a VisualModelReference actor with three different ports, one for each of the parameters to be changed (range, resetOnEachRun, and fileName) in the ParameterSweep version of SmallWorld (Figure 5.5). The transitions in the modal model change the counters i and j and set the parameter values for each run. The modal model sweeps over the parameter values such that runs_i different random node layouts are simulated, and for each node layout, runs_j different ranges are simulated.

Dataflow

Figure 5.7 shows an SDF model which accomplishes the same objectives as the modal model in Figure 5.6, simulating the ParameterSweep version of the SmallWorld model with runs_i different node layouts and runs_j different ranges. The SDF model uses dataflow actors that send the simulation parameters directly to a VisualModelReference actor with the same ports as those in the modal model. Just as in the modal model, the VisualModelReference in the SDF model references the ParameterSweep version of SmallWorld (Figure 5.5). For simulations where the parameter values are known a priori, a dataflow language provides a more intuitive interface for specifying these settings; the values of the parameters are more readily apparent in the SDF model than in the modal model. However, if the user wants to create simulation scenarios with dynamically derived parameter values, a modal model might be a more appropriate choice.
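In either form, the sweep reduces to a loop like the following rough Java sketch; it assumes the Ptolemy II MoMLParser and Manager APIs, and the file name SmallWorld.xml and parameter name range are illustrative stand-ins:

    import ptolemy.actor.CompositeActor;
    import ptolemy.actor.Manager;
    import ptolemy.data.DoubleToken;
    import ptolemy.data.expr.Parameter;
    import ptolemy.moml.MoMLParser;

    public class SweepRunner {
        public static void main(String[] args) throws Exception {
            for (double range = 40.0; range <= 90.0; range += 10.0) {
                // Parse a fresh copy of the ParameterSweep model.
                MoMLParser parser = new MoMLParser();
                CompositeActor model =
                        (CompositeActor) parser.parseFile("SmallWorld.xml");
                // Reconfigure the top-level "range" parameter for this run.
                Parameter p = (Parameter) model.getAttribute("range");
                p.setToken(new DoubleToken(range));
                // Execute one complete simulation run with these settings.
                Manager manager = new Manager(model.workspace(), "sweep");
                model.setManager(manager);
                manager.execute();
            }
        }
    }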

Figure 5.4: Small World in Ptolemy II.

Figure 5.5: ParameterSweep version of Small World in Ptolemy II.

Figure 5.6: Modal model for changing parameter values of Small World model in Ptolemy II.

For example, the user can feed output from the SmallWorld model back into the modal model, which can then automatically select new parameter settings on the basis of noise level or network connectivity.

5.4.4 Higher-order actors

Since most of the nodes in the SmallWorld application have the same implementation, one might also consider using a higher-order actor to specify the nodes. This section considers two different methods, the first using a MultiInstanceComposite actor, and the second using a PtalonActor.

MultiInstanceComposite

In his dissertation [74], Neuendorffer introduces higher-order components (actors) (emphasis mine):

    In many cases it is useful to build parameterized structures in actor-oriented models. Such programmatically generated structures are called higher-order components to emphasize their similarity to higher-order functions in functional languages... A parameter which is used to determine the structure of a higher-order component is a structural parameter.

The MultiInstanceComposite actor in Ptolemy II is one example of a simple higher-order component. Just before a model is executed, this actor replicates itself a number of times determined by a structural parameter. This actor is often used in situations where a model contains repetitive structures that are awkward to build by hand, or when the number of repetitions is specified by a parameter.

A similar feature also existed in Ptolemy Classic. As described by Lee and Parks [62], a user could graphically specify the number of instances of an actor in Ptolemy Classic. Ptolemy Classic took advantage of higher-order functions by allowing a user to specify the number of instances of an actor by modifying the parameters of a bus icon (a line connecting the boxes representing the actors), either by implication (by graphically specifying the number of instances of upstream actors), or directly (by graphically instantiating the desired number). Ptolemy Classic also allowed the user to visually represent the replacement function in a way that is conceptually similar to using a box inside of the icon for a higher-order function.

Figure 5.8 shows the SmallWorld application in Ptolemy II, where a MultiInstanceComposite creates all of the nodes, each of which has an implementation identical to that in Figure 5.2(d).

Figure 5.7: SDF model for changing parameter values of Small World model in Ptolemy II.

MultiInstanceComposite generates the nodes in its container, which means that the location parameter of the generated nodes is easily accessible and remains in reference to the Initiator actor in the container. The model shown in Figure 5.8 has the same behavior as that in Figure 5.5. No other changes to the model are required.

Ptalon

Ptalon is a natural fit for specifying model parameters programmatically, since it can specify the structure of the model itself, not just values of actor parameters. One can use Ptalon to generate the SmallWorld application shown in Figure 5.5. Figure 5.9 shows the required Ptalon code. The first section of code declares all of the actor types needed in the model. The second section declares four parameters: channelName, reportChannelName, range, and n. The third section declares an output port named output, through which the actor transmits the data to be recorded.

Figure 5.8: ParameterSweep version of Small World model with MultiInstanceComposite in Ptolemy II.

The remainder of the file instantiates the components: the wireless channels, the NodeRandomizer, the Initiator, the WirelessToWired converter, and the nodes themselves. Note that in VisualSense and Viptos, wireless ports are parameterized by the name of the wireless channel on which they receive or transmit; the Ptalon file shown in Figure 5.9 uses the parameters channelName and reportChannelName to specify concrete names for the channels. The parameter range specifies the radio range of the nodes. The parameter n specifies the number of nodes to create. Ptalon automatically generates names of actor instances.

Figure 5.10(a) shows a Ptolemy II model containing a PtalonActor named SmallWorld that refers to the Ptalon code in Figure 5.9. Figure 5.10(b) shows the values of the PtalonActor parameters. Note that all of the parameters, except the number of nodes, refer to model parameters with the same name. The PtalonActor also contains an output port. Note that this model is similar to the model shown in Figure 5.5, except that Ptalon generates the nodes, as shown in Figure 5.10(c).

Note that the resetOnEachRun parameter in Figure 5.9 is not explicitly declared as a Ptalon parameter. Because parameters in Ptolemy II use a form of lazy evaluation (changes to parameter values may not be propagated until they are used at run time), the Ptalon model will still run correctly, even if the range parameter is not declared as a Ptalon parameter. However, the user must create a Ptalon parameter as a mirror of any Ptolemy parameters that should be evaluated before run time. I explicitly declare these variables because they are useful for visualization, e.g., to verify visually that the ranges are correct before running the model.

5.4.5 Discussion

A user can control both the MultiInstanceComposite (Figure 5.8) and PtalonActor (Figure 5.10) versions of SmallWorld with either the modal model or the SDF model discussed previously, with no modifications required. One advantage of using higher-order actors such as MultiInstanceComposite and PtalonActor is that they enable run-time reconfiguration (e.g., the number of nodes in the model can be controlled programmatically). Another advantage of higher-order actors is that they require fewer bytes to express the model. For all files, I removed all extra white space (tabs, spaces, and extra linefeeds), in addition to annotations and comments that were not constant across all models. Table 5.1 shows a comparison of the three different ways presented for implementing the SmallWorld application. The first column is the ParameterSweep version of SmallWorld as shown in Figure 5.5.

SmallWorld is {
    /* Actor types */
    actor node = ptolemy.domains.wireless.demo.SmallWorld.RelayNode;
    actor initiator = ptolemy.domains.wireless.demo.SmallWorld.Initiator;
    actor nodeRandomizer = ptolemy.domains.wireless.lib.NodeRandomizer;
    actor channel = ptolemy.domains.wireless.lib.LimitedRangeChannel;
    actor wirelessToWired = ptolemy.domains.wireless.lib.WirelessToWired;

    /* Ptalon parameters */
    parameter channelName;
    parameter reportChannelName;
    parameter range;
    parameter n;

    /* Port declaration */
    outport output;

    /* Instantiation of components */
    channel( defaultProperties := [[ {range=range} ]],
             lossProbability := [[ 1.0 - probability ]],
             seed := [[ 1L ]],
             name := [[ channelName ]] );
    channel( seed := [[ 1L ]],
             name := [[ reportChannelName ]] );
    nodeRandomizer( maxPrecision := [[ 3 ]],
                    randomizeInInitialize := [[ true ]],
                    range := [[ {{100.0, 500.0}, {200.0, 400.0}} ]],
                    seed := [[ 1L ]],
                    _location := [[ [10.0, 10.0] ]] );
    initiator( _location := [[ [230.0, 345.0] ]] );
    wirelessToWired( inputChannelName := [[ reportChannelName ]],
                     payload := output,
                     _location := [[ [0.0, 0.0] ]] );
    for i initially [[ 1 ]] [[ i <= n ]] {
        node( nodePropagationDelay := [[ nodePropagationDelay ]],
              randomize := [[ randomize ]],
              range := [[ range ]],
              resetOnEachRun := [[ resetOnEachRun ]],
              haloColor := [[ {0.0, 0.5, 0.5, probability*visualDensity} ]],
              _location := [[ [10.0 * i, 0.0 * i] ]] );
    } next [[ i + 1 ]]
}

Figure 5.9: Ptalon code for SmallWorld (SmallWorld.ptln).

Figure 5.10: Ptalon version of Small World in Ptolemy II.

The second column is the MultiInstanceComposite implementation, as shown in Figure 5.8. The third column is the Ptalon implementation, as shown in Figure 5.10.

...
<entity name="SmallWorld" class="ptolemy.actor.ptalon.PtalonActor">
    <property name="_location" class="ptolemy.kernel.util.Location" value="[240.0, 210.0]">
    </property>
    <configure>
        <ptalon file="ptolemy.actor.ptalon.demo.SmallWorld.SmallWorld">
            <ptalonExpressionParameter name="n" value="49"/>
            <ptalonExpressionParameter name="channelName" value="channelName"/>
            <ptalonExpressionParameter name="reportChannelName" value="reportChannelName"/>
            <ptalonExpressionParameter name="range" value="range"/>
        </ptalon>
    </configure>
</entity>
...

Figure 5.11: Excerpt of MoML code for Ptalon version of Small World.

Note that in all of the non-Ptalon versions of the SmallWorld application (Figures 5.4, 5.5, and 5.8), the code for the Initiator actor is stored in the model itself. For the Ptalon model, however, the MoML code for the Initiator actor must be stored externally so that the actor can be referenced in the Ptalon file. The code for the RelayNode actor in the ParameterSweep version is stored externally, whereas the code for the RelayNode actor in the MultiInstanceComposite version must be stored in the MultiInstanceComposite itself. Even though the RelayNode code is stored external to the ParameterSweep version, the parameter values for each node must still be stored internally. The difference in the number of bytes between the ParameterSweep and MultiInstanceComposite versions would be even greater if there were more nodes, since the ParameterSweep version requires 705 bytes for each additional node to store the parameter values and the instance declaration. Increasing the number of nodes in the MultiInstanceComposite and PtalonActor implementations requires no extra bytes (except if the number of digits in the number of nodes exceeds two, in which case there is an extra byte for each digit).

The main difference between using MultiInstanceComposite and PtalonActor for this particular application is that one cannot visualize the generated components using the MultiInstanceComposite actor. Additionally, the MultiInstanceComposite actor must be opaque, i.e., have a director, so that its Actor interface methods (preinitialize(), ..., wrapup()) are invoked during model initialization. Ptalon makes no such constraints on the PtalonActor component.

Table 5.1: Comparison of number of bytes between different implementations of SmallWorld.

                                  File               Bytes
    ParameterSweep                SmallWorld.xml     55320
                                  RelayNode.xml      28228
                                  Total              83548
    with MultiInstanceComposite   SmallWorld.xml     48212
                                  Total              48212
    with Ptalon                   SmallWorld.xml     16882
                                  RelayNode.xml      28314
                                  Initiator.xml       5292
                                  SmallWorld.ptln     1151
                                  Total              51639

In Leung's survey of the 70 full-length papers from prominent wireless sensor networking conferences, IPSN/SPOTS 2007 (the International Conference on Information Processing in Sensor Networks and Track on Sensor Platforms, Tools and Design Methods) and SenSys 2006 (the Conference on Embedded Networked Sensor Systems), over one hundred different simulation parameters were used, with very few repeated counts [63]. These results show that parameter choices are largely application-dependent, and that there are few standard benchmarks. In general, flexibility in specifying simulation parameters is extremely important.

Ptalon has the advantage over the other methods in that it is easier to express model structure. For example, in order to test the behavior of a routing algorithm under different channel assumptions, the application developer can modify the Ptalon code to cycle through a number of different types of radio channel models. This is not possible with MultiInstanceComposite alone (one would need to use a Case actor or other similar actor to achieve the same results). Ptalon also allows the user to specify heterogeneous networks more easily; with MultiInstanceComposite, a user would need to create a new instance of the actor for each type of duplicated node in the network.

5.5 Summary

In this chapter, I demonstrated how higher-order components provide a powerful way to build wireless sensor network applications. Combined with generative programming and metaprogramming techniques, sensor network developers can easily specify experimental simulation setups programmatically using a variety of techniques, including modal models, dataflow, and higher-order actors. Developers can choose the method that best fits the particular application. They can then refine these simulations to real-world implementations using a technology such as Viptos (presented in Chapter 4).




Chapter 6

Related Work
This chapter details work related to TinyGALS and galsC, as well as work related to Viptos and the metaprogramming techniques for wireless sensor networks discussed in earlier chapters.

6.1 TinyGALS and galsC
This section summarizes the features of several related operating systems and software architectures, and discusses how they relate to TinyGALS and galsC. Herlihy's method for building non-blocking operations, as well as the message passing interface (MPI), offer concurrency and communication alternatives to those used in TinyGALS. The SVAR (state variable) mechanism of PBOs (port-based objects) and FPBOs (featherweight port-based objects) influenced the design of TinyGUYS. The Click Modular Router project has interesting parallels to the TinyGALS model of computation, as do Ptolemy II, the CI (component interaction) domain, and the TM (Timed Multitasking) domain.

6.1.1 Non-blocking

Herlihy proposes a methodology in [45] for constructing non-blocking and wait-free implementations of concurrent objects. Programmers implement data objects as stylized sequential programs, with no explicit synchronization. Each sequential operation is automatically transformed into a non-blocking or wait-free operation via a collection of synchronization and memory management techniques. However, operations may not have any side-effects other than modifying the memory block occupied by the object. Unlike TinyGALS, this technique does not address the need

for inter-object communication when composing components. Additionally, this methodology requires additional copying of memory, which may become expensive for large objects.
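The copy-then-swap idea can be sketched in Java as follows; this is a lock-free (though not wait-free) illustration of the general technique, not Herlihy's actual protocol:

    import java.util.concurrent.atomic.AtomicReference;

    // The sequential operation runs on a private copy of the object, and
    // the swap only commits if no other thread committed first. Note the
    // per-operation copying cost that the text mentions.
    public class NonBlockingCounter {
        private final AtomicReference<int[]> state =
                new AtomicReference<>(new int[] { 0 });

        public void increment() {
            while (true) {
                int[] old = state.get();
                int[] copy = old.clone(); // copy the object's memory block
                copy[0]++;                // apply the sequential operation
                if (state.compareAndSet(old, copy)) {
                    return;               // committed atomically
                }
                // another thread won the race; retry on the fresh state
            }
        }

        public int get() {
            return state.get()[0];
        }
    }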

6.1.2 MPI

MPI (Message Passing Interface) is the de facto standard library interface for writing message passing programs on high-performance parallel computing platforms [104]. MPI provides virtual topology, synchronization, and communication functionality between a set of processes that have been mapped to processing nodes. Interface functions include point-to-point, rendezvous-type send/receive operations (including synchronous, asynchronous, buffered, and ready forms); choosing between a Cartesian or graph-like logical process topology; exchanging data between process pairs (send/receive operations); combining partial results of computations (gather and reduce operations); synchronizing nodes (barrier operation); as well as obtaining network-related information such as the number of processes in the computing session, identity of the current processor to which a process is mapped, and neighboring processes accessible in a logical topology. MPI was originally targeted for distributed memory systems, though implementations for shared memory systems have appeared as these platforms have become more popular. In MPI, all parallelism is explicit; the programmer is responsible for correctly identifying parallelism and implementing parallel algorithms using MPI constructs. The number of tasks dedicated to run a parallel program is static. New tasks cannot be dynamically spawned during run time, though the new MPI-2 standard addresses this issue. The advantages of MPI over older message passing libraries are portability (because MPI has been implemented for almost every distributed memory architecture) and speed (because each implementation is in principle optimized for the hardware on which it runs). Hempel and Walker [43] summarize MPI and its alternatives:

    The main function of MPI is to communicate data from one process to another. Other mechanisms, such as TCP/IP and CORBA, do essentially the same thing. MPI provides a level of abstraction appropriate for communication of data in scientific computing, whereas TCP/IP is geared to low-level network transport, and CORBA to client-server interactions...

    The idea of communicating sequential processes as a model for parallel execution was developed by C.A.R. Hoare in the 1970s, and is the basis of the message passing paradigm. This paradigm assumes a distributed process memory model, i.e., each process has its own local address space. Processes co-operate to perform a task by independently computing with their local data and communicating data with other processes

by explicitly exchanging messages. Message passing provides the most explicit way of programming a parallel computer with physically distributed memory, and is well-suited to this type of machine since there is a good match between the distributed memory model and the distributed hardware. Technically, this message passing is normally realized by calls to library functions, for example, to send or receive a message, or broadcast some data to a whole group of processes.

The design of MPI focused on message passing capabilities, and it is intended to attain high performance on tightly-coupled, homogeneous parallel architectures. [Although regarded as competing standards], MPI and PVM [Parallel Virtual Machine] were designed for different uses. PVM was originally intended for use on networks of workstations (NOWs) and addresses issues such as heterogeneity, fault tolerance, interoperability, and resource management—its message passing capabilities are not very sophisticated.

MPI has its detractors, such as Per Brinch Hansen, inventor of Concurrent Pascal, the first concurrent programming language. In his evaluation of MPI [41], he states:

    The Message-Passing Interface follows in the footsteps of the Unix threads library: both extend a sequential programming language with subroutines for parallel execution and data communication... The MPI routines for synchronous message passing work as expected. However, asynchronous communication is dangerously insecure. It is possible to call a user procedure that inputs a message in a local variable and returns before the input has been completed. This time-dependent error may change a variable, which (conceptually) no longer exists, and therefore may be reused by unrelated procedure calls! Twenty years ago, Concurrent Pascal proved that nontrivial parallel programs can be written exclusively in a secure programming language. Personally, I regard the attempt to replace a parallel programming language and its compiler with insecure procedures as a step backwards in programming technology.

However, MPI has had considerable impact on the development of middleware and other tools for wireless sensor networks. OMNeT++ [84, 98], discussed later in Section 6.2, supports parallel distributed simulation using one of various communication mechanisms, including MPI, named pipes, or the file system. There are some constraints, however: (1) modules can only communicate by sending messages (no direct method call or member access) unless they are mapped to the same processor; (2) no global variables are allowed; (3) a module may not send directly to a submodule of another module, unless the modules are mapped to the same processor; (4) lookahead must be present in the form of link delays; and (5) currently only static topologies are supported.
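As a rough analogy only (using threads for processes and a bounded queue for the network, with none of the real MPI API), blocking point-to-point message passing looks like this:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MessagePassingDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<int[]> channel = new ArrayBlockingQueue<>(1);

            Thread sender = new Thread(() -> {
                try {
                    channel.put(new int[] { 1, 2, 3 }); // like a blocking send
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread receiver = new Thread(() -> {
                try {
                    int[] message = channel.take(); // like a blocking receive
                    System.out.println("received " + message.length + " ints");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            sender.start();
            receiver.start();
            sender.join();
            receiver.join();
        }
    }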

Welsh and Mainland take inspiration from MPI in their approach to abstract regions [101], a family of spatial operators that capture local communication within regions of a wireless sensor network, which users can create with specific primitives. They state that

    [MPI] provides a unified interface for message passing across a large family of parallel machines... MPI has been extremely successful in the parallel processing community as it is high-level enough to shield programmers from most of the details of the underlying machine, yet low-level enough to permit extensive application-specific optimizations... MPI hides the details of the communication hardware and provides efficient implementations of common collective operations, such as broadcast and reduction. We wish to provide communication interfaces that serve a similar role for sensor networks.

Bakshi and Prasanna [6] have a similar goal in their library of structured communication primitives. In their system, "structured communication" refers to a routing problem where the communication pattern is known in advance, with example patterns including one-to-all (broadcast), all-to-one (data gather), all-to-all, many-to-many, and permutation.

UW-API (University of Wisconsin-Madison's Application Programmer's Interface) [5, 81] for sensor network communication is motivated by MPI. Some of the UW-API primitives are to be invoked by a single sensor node; others are for collective communication, to be invoked simultaneously by a group of nodes in a geographic region. All operations take place on regions, which may be defined in terms of radio connectivity, geographic location, or other node properties. Barrier synchronization is also supported for the sensor nodes that lie within a region.

The Open Source Cluster Application Resources (OSCAR) package [29, 66] is an integrated software bundle designed for high performance cluster computing. OSCAR provides the standard Message Passing Interface (MPI) for communication between the parallel computing processes. It has been used in sensor network applications to parallelize data fusion processes, where the sensor network sends its data to the computing cluster through a gateway node.

The actor model used by TinyGALS/galsC, Viptos, and Ptolemy II uses message passing. However, it is more comprehensive than MPI, in that the actor model specifies scheduling and execution semantics, in addition to communication primitives.

6.1.3 Port-Based Objects

The port-based object (PBO) [92] is a software abstraction for designing and implementing dynamically reconfigurable real-time software. The software framework was developed for the Chimera multiprocessor real-time operating system (RTOS). A PBO is an independent concurrent process, and PBOs may execute either periodically or aperiodically.

A PBO communicates with other PBOs only through its input ports and output ports. PBOs may also have resource ports that connect to sensors and actuators via I/O device drivers, which are not PBOs. Configuration constants are used to reconfigure generic components for use with specific hardware or applications.

PBOs communicate with each other via state variables stored in global and local tables. Every input and output port and configuration constant is defined as a state variable (SVAR) in the global table, which is stored in shared memory. A PBO can only access its local table, which contains only the subset of data from the global table that is needed by the PBO. Consistency between the global and local tables is maintained by the SVAR mechanism. The system updates the state variables corresponding to input ports prior to the execution of each cycle of a periodic PBO, or before the processing of each event for an aperiodic PBO. During its cycle, a PBO may update the state variables corresponding to the PBO's output ports at any time; the system updates these values in the global table only after the PBO completes its processing for that cycle or event. The system updates configuration constants only during initialization of the PBO.

The Chimera PBO implementation uses data replication to maintain data integrity and avoid race conditions. Since every PBO has its own local table, no explicit synchronization is needed to read from or write to a state variable. Although there is no explicit synchronization or communication among processes, the system performs all transfers between the local and global tables as critical sections. The system uses spin-locks to lock the global table. Since there is only one lock, multiple accesses to the same SVAR in the global table are mutually exclusive, and there is no possibility of deadlock. A task busy-waits with the local processor locked until it obtains the lock and goes through its critical section. It is guaranteed that the task holding the global lock is on a different processor and will not be preempted, thus it will release the lock shortly. If the total time that a CPU is locked to transfer a state variable is small compared to the resolution of the system clock, then there is negligible effect on the predictability of the system due to this mechanism locking the local CPU. This design assumes that the amount of data communicated via the ports on each cycle of a PBO is relatively small.

Echidna [9] is a related real-time operating system designed for smaller, single-processor, embedded microcontrollers. The design is based on the featherweight port-based object (FPBO) [91]. The application programmer interface (API) for the FPBO is identical to that of the PBO. In an RTOS, PBOs are separate processes, whereas FPBOs all share the same context. The Echidna FPBO implementation takes advantage of context sharing to eliminate the need for local tables. Access to global data must still be performed as a critical section to maintain data integrity; instead of using semaphores, Echidna constrains when preemption can occur, which creates potential implicit blocking.
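The SVAR transfer discipline can be sketched as follows; this toy Java version illustrates the local/global table idea only and is not the Chimera implementation (which uses spin-locks rather than a monitor):

    // Tasks compute on local tables; transfers between local and global
    // tables happen as short critical sections at cycle boundaries, and a
    // single lock means there is no possibility of deadlock.
    public class SvarTables {
        private final double[] globalTable = new double[16];
        private final Object globalLock = new Object();

        // Update input SVARs: copy the needed subset of the global table
        // into the task's local table before its cycle begins.
        public void updateInputs(int[] svarIndices, double[] localTable) {
            synchronized (globalLock) {
                for (int i = 0; i < svarIndices.length; i++) {
                    localTable[i] = globalTable[svarIndices[i]];
                }
            }
        }

        // Publish output SVARs to the global table after the cycle or
        // event completes; readers always see the latest committed value.
        public void publishOutputs(int[] svarIndices, double[] localTable) {
            synchronized (globalLock) {
                for (int i = 0; i < svarIndices.length; i++) {
                    globalTable[svarIndices[i]] = localTable[i];
                }
            }
        }
    }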

To summarize, in both the PBO and FPBO models, software components only communicate with other components via SVARs, which are similar to global variables. Updates to an SVAR are made atomically, and the components always read the latest value of the SVAR. The SVAR concept is the motivation behind the TinyGALS strategy of always reading the latest value of a TinyGUYS parameter. However, in TinyGALS, updates to TinyGUYS are buffered until a module has completed execution, since components within a module may be tightly coupled in terms of data dependency. This is more closely related to the local tables in the Chimera PBO implementation than the global tables in the Echidna FPBO implementation. Unlike both PBO implementations, there is no possibility of blocking when using the TinyGUYS mechanism.

6.1.4 Click

Click [54, 55] is a flexible, modular software architecture for creating routers. A Click router configuration consists of a directed graph, where the vertices are called elements and the edges are called connections. This section provides a detailed description of the constructs and processing in Click and compares it to TinyGALS.

Elements in Click

A Click element is a software module which usually performs a simple computation as a step in packet processing. An element is implemented as a C++ object that may maintain private state. Each element belongs to a single element class, which specifies the code that should be executed when the element processes a packet, as well as the element's initialization procedure and data layout. An element can have any number of input and output ports. There are three types of ports: push, pull, and agnostic. In Click diagrams, push ports are drawn in black, pull ports in white, and agnostic ports with a double outline. An element may also have an optional configuration string which contains additional arguments to pass to the element at router initialization time. Every element supports the simple packet-transfer interface, but elements can create and export arbitrary additional interfaces. The Click configuration language allows users to define compound elements, which are router configuration fragments that behave like element classes. At initialization time, each use of a compound element is compiled into the corresponding collection of simple elements.

Figure 6.1: An example Click element, a Tee with its element class, input port, configuration string, and output ports labeled. Source: Eddie Kohler.

Figure 6.1 shows an example Click element that belongs to the Tee element class, which sends a copy of each incoming packet to each output port. The element has one input port. The element is initialized with the configuration string "2", which in this case configures the element to have two output ports.

Connections in Click

A Click connection represents a possible path for packet handoff and attaches the output port of an element to the input port of another element. A connection between two push ports is a push connection, where packet handoff along the connection is initiated by the source element (or source end, in the case of a chain of push connections). A connection between two pull ports is a pull connection, where packet handoff along the connection is initiated by the destination element (or destination end, in the case of a chain of pull connections). A connection between a push port and a pull port is illegal. An agnostic port behaves as a push port when connected to push ports and as a pull port when connected to pull ports, but each agnostic port must be used exclusively as either push or pull. In addition, if packets arriving on an agnostic input might be emitted immediately on an agnostic output, then both input and output must be used in the same way (either push or pull). When a Click router is initialized, the system propagates constraints until every agnostic port has been assigned to either push or pull. Every push output and every pull input must be connected exactly once; however, push inputs and pull outputs can be connected more than once. A connection is implemented as a single virtual function call.

There are no implicit queues on input and output ports, which means that they do not carry the associated performance and complexity costs. Queues in Click must be defined explicitly and appear as Queue elements. A Queue has a push input port (it responds to pushed packets by enqueuing them) and a pull output port (it responds to pull requests by dequeuing packets and returning them). Another type of element is the Click packet scheduler, an element with multiple pull inputs and one pull output. The element reacts to requests for packets by choosing one of its inputs, pulling a packet from it, and returning the packet. If the chosen input has no packets ready, the scheduler usually tries other inputs.
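The following C++ sketch illustrates the push and pull connection semantics and the explicit Queue element. It is a toy model of the ideas only, not Click's actual API, and all names are illustrative.

    #include <deque>

    struct Packet { int id; };

    struct Element {
        virtual ~Element() {}
        virtual void push(Packet* p) {}              // source-initiated handoff
        virtual Packet* pull() { return nullptr; }   // destination-initiated
    };

    // Explicit queue: push input port, pull output port.
    struct Queue : Element {
        std::deque<Packet*> q;
        void push(Packet* p) override { q.push_back(p); }  // enqueue (a real
                                                           // Queue drops when full)
        Packet* pull() override {
            if (q.empty()) return nullptr;
            Packet* p = q.front();
            q.pop_front();
            return p;
        }
    };

    // Pass-through element: forwards along the chain via one virtual call.
    struct Null : Element {
        Element* output = nullptr;   // downstream neighbor (push side)
        Element* input = nullptr;    // upstream neighbor (pull side)
        void push(Packet* p) override { output->push(p); }
        Packet* pull() override { return input->pull(); }
    };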

Both Queue elements and scheduling elements have a single pull output, so to an element downstream, the elements are indistinguishable. This leads to an ability to create virtual queues, which are compound elements that act like queues but implement behavior more complex than FIFO (first in, first out) queuing. The placement of Queues in the configuration graph determines how CPU scheduling may be performed.

Click runtime system

Click runs as a kernel thread inside the Linux 2.2 kernel. The kernel thread runs the Click router driver, which loops over the task queue and runs each task using stride scheduling [99]. A task is an element that needs special access to CPU time. An element should place itself on the task queue if the element frequently initiates push or pull requests without receiving a corresponding request. Most elements are never placed on the task queue; they are implicitly scheduled when their push() or pull() methods are called. Since Click runs in a single thread, a call to push() or pull() must return to its caller before another task can begin. The router continues to process each pushed packet, following it from element to element along a path in the router graph (a chain of push() calls, or a chain of pull() calls), until the packet is explicitly stored or dropped (and similarly for pull requests).

When activated, device-handling elements such as FromDevice and ToDevice place themselves on Click's task queue. FromDevice polls the device's receive DMA (direct memory access) queue for newly arrived packets and pushes them through the configuration graph. ToDevice examines the device's transmit DMA queue for empty slots and pulls packets from its input. Click is a pure polling system; the device never interrupts the processor. Timers are another way of activating an element besides tasks. An element can have any number of active timers, where each timer calls an arbitrary method when it fires. Click timers are implemented using Linux timer queues.

Figure 6.2: A simple Click configuration with sequence diagram (FromDevice receives packet p and pushes it through Null into the Queue, which enqueues it; later ToDevice, ready to transmit, pulls through Null, the Queue dequeues p and returns it, and ToDevice sends p). Source: Eddie Kohler.
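The router driver's task loop could be realized along the following lines. This is a hedged sketch of stride scheduling [99] with hypothetical names, not Click's actual driver code.

    #include <cstdint>
    #include <queue>
    #include <vector>

    struct Task {
        void (*run)();        // e.g., FromDevice's poll-and-push routine
        uint64_t stride;      // inversely proportional to the task's tickets
        uint64_t pass = 0;    // virtual time; the lowest pass runs next
    };

    struct ByPass {
        bool operator()(const Task* a, const Task* b) const {
            return a->pass > b->pass;   // min-heap on pass
        }
    };

    void driver_loop(std::vector<Task>& tasks) {
        std::priority_queue<Task*, std::vector<Task*>, ByPass> ready;
        for (Task& t : tasks) ready.push(&t);
        for (;;) {                   // single thread: each run() must return
            Task* t = ready.top();
            ready.pop();
            t->run();                // follows its push()/pull() chain to the end
            t->pass += t->stride;    // advance virtual time and reschedule
            ready.push(t);
        }
    }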

Figure 6.3: Flowchart for the Click configuration shown in Figure 6.2: poll a packet from the receive DMA ring and push it to the Queue; if the Queue is full, drop the packet (a Queue drop), otherwise enqueue it; later, pull the packet from the Queue and enqueue it on the transmit DMA ring. Source: Eddie Kohler.

Comparison of Click to TinyGALS

An element in Click is comparable to a component in TinyGALS in the sense that both are objects with private state. Both types of objects (Click elements and TinyGALS components) communicate with other objects via method calls. Rules in Click on connecting elements together are similar to those for connecting components in TinyGALS: push outputs must be connected exactly once, but push inputs may be connected more than once (see Sections 3.2.1 and 3.2.2).

Figure 6.2 shows a simple Click router configuration with a push chain (FromDevice and Null) and a pull chain (Null and ToDevice). The two chains are separated by a Queue element. The Null element simply passes a packet from its input port to its output port; it performs no processing on the packet. When the task corresponding to FromDevice is activated, the element polls the receive DMA ring for a packet. FromDevice calls push() on its output port, which calls the push() method of Null. The push() method of Null calls push() on its output port, which calls the push() method of the Queue. The Queue element enqueues the packet if its queue is not full; otherwise it drops the packet. The calls to push() then return in the reverse order. Later, the task corresponding to ToDevice is activated. If there is an empty slot in its transmit DMA ring, ToDevice calls pull() on its input port, which calls the pull() method of Null. The pull() method of Null calls pull() on its input port, which calls the pull() method of the Queue. The Queue element dequeues the packet and returns it through the return of the pull() calls. Figure 6.3 illustrates the basic execution sequence of Figure 6.2. Note that in the sequence diagram in Figure 6.2, time moves downwards. Data flow (in this case, the packet p) always moves forwards; control flow moves forward during a push sequence, and moves backward during a pull sequence. In Click, there is no fundamental difference between push processing and pull processing at the method-call level; both push and pull processing are sets of method calls that differ only in name.

Overhead in Click

Modularity in Click results in two main sources of overhead. The first source of overhead comes from passing packets between elements. This leads to one or two virtual function calls, each of which involves loading the relevant function pointer from a virtual function table, as well as an indirect jump through that function pointer. This overhead is avoidable—the Click distribution contains a tool to eliminate all virtual function calls from a Click configuration. The second source of overhead comes from unnecessarily general element code. Kohler et al. found that element generality had a relatively small effect on Click's performance, since not many elements in a particular configuration offered much opportunity for specialization [55].
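Using the toy Element, Null, and Queue types sketched in the Connections section above, the Figure 6.2 scenario can be written out as follows. FromDevice and ToDevice are reduced to plain function calls here, so this is a walkthrough aid rather than a faithful model of the polling tasks.

    #include <cstdio>

    // Assumes the Packet/Element/Null/Queue sketch from the Connections
    // section above.

    int main() {
        Queue queue;
        Null upstream, downstream;
        upstream.output = &queue;    // push chain: FromDevice -> Null -> Queue
        downstream.input = &queue;   // pull chain: Queue -> Null -> ToDevice

        Packet p{42};
        upstream.push(&p);           // FromDevice's task: push through the chain

        Packet* out = downstream.pull();  // ToDevice's task: pull when a
                                          // transmit DMA slot is free
        if (out) std::printf("transmit packet %d\n", out->id);
        return 0;
    }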

Figure 6.4: Click vs. TinyGALS. (The figure contrasts a Click router thread, which invokes tasks, with the TinyGALS thread, in which the event scheduler invokes an actor from the event queue; elements C1 and C2 form actor A, C3 and C4 form actor B, and C5 and C6 form actor C, with control flow and data flow shown for Click push, Click pull, and TinyGALS.)

Figure 6.4 provides a more detailed analysis of the difference in control and data flow between Click and TinyGALS. It shows a push processing chain of four elements connected to a queue, which is connected to a pull processing chain of two elements. In Click, control begins at element C1 and flows to the right and returns after it reaches the Queue. Data (a packet) flows to the right until it reaches the Queue. Push processing can be thought of as event-driven computation (if one ignores the polling aspect of Click), where control and data flow downstream in response to an upstream event. Pull processing can be thought of as demand-driven computation, where control flows upstream in order to compute data needed downstream. In Click, however, the direction of control flow with respect to data flow in the two types of processing are opposite of each other.

Visualizing this configuration as a TinyGALS model, with elements C1 and C2 grouped into an actor A and elements C3 and C4 grouped into an actor B, shows that a TinyGALS actor forms a boundary for control flow. Note that a compound element in Click does not form the boundary of control flow; if an element inside of a compound element calls a method on its output, control flows to the connected element (recall that a compound element is compiled to a chain of simple elements). In TinyGALS, data flow within an actor is not represented explicitly. Data flow between components in an actor

can have a direction different from the link arrow direction. Data flow between actors, in contrast, always has the same direction as the connection arrow direction, although TinyGUYS provides a possible hidden avenue for data flow between actors. In Figure 6.4, elements C5 and C6 are grouped into an actor C. If one reverses the arrow directions inside of actor C, so that data flow has the same direction as the connection, control flow in this new TinyGALS model is the same as in Click. However, elements C5 and C6 may have to be rewritten to reflect the fact that C6 is now a source object, rather than a sink object.

From this global point of view, the execution model of Click is quite similar to the globally asynchronous, locally synchronous execution model of TinyGALS: execution is synchronous within each push (or pull) chain, but execution is asynchronous between chains, which are separated by a Queue element. Push processing in Click is equivalent to synchronous communication between components in a TinyGALS actor. Pull processing in Click, however, does not have a natural equivalent in TinyGALS. Additionally, in Click, arrival of data in a queue does not cause downstream objects to be scheduled, as in TinyGALS. This highlights the fact that Click configurations cannot have two push chains (where the end elements are activated as tasks) separated by a Queue. Also note that the Click Queue element is not equivalent to the queue on a TinyGALS actor input port.

Unlike Click, the TinyGALS model does not contain a task queue. (For backwards compatibility with TinyOS, however, the TinyGALS runtime system implementation supports TinyOS tasks, which are long-running computations placed in the task queue by a TinyOS component method; the scheduler runs tasks in the task queue only after processing all events in the event queue, and tasks can be preempted by hardware interrupts. See Section 3.5 for more information.) Also unlike Click, which is a pure polling system and hence does not respond to events immediately, TinyGALS is interrupt-driven and allows preemption to occur in order to process events, and a TinyGALS system goes to sleep when there are no external events to which to respond. Much of this is because Click's design is motivated by high-throughput routers, whereas TinyGALS is motivated by power- and resource-constrained hardware platforms. Aside from the polling/interrupt-driven difference, elements in Click have no way of sharing global data, unlike in TinyGALS. The only way of passing data between Click elements is to add annotations to a packet (information attached to the packet header, but which is not part of the packet data). Additionally, TinyGALS does not contain timers associated with elements, although this can be emulated by linking a CLOCK component with an arbitrary component.
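A hedged sketch of this two-level scheduling policy (all events before any task, sleep when idle) follows; the names are hypothetical, and this is not the actual TinyGALS runtime code.

    #include <queue>

    using Event = void (*)();
    using TinyOSTask = void (*)();

    std::queue<Event> event_queue;      // filled by interrupt handlers
    std::queue<TinyOSTask> task_queue;  // filled by TinyOS component methods

    void sleep_until_interrupt() { /* platform-specific; stubbed here */ }

    void scheduler_loop() {
        for (;;) {
            while (!event_queue.empty()) {  // events run before any task
                Event e = event_queue.front();
                event_queue.pop();
                e();                        // synchronous flow through an actor
            }
            if (!task_queue.empty()) {
                TinyOSTask t = task_queue.front();
                task_queue.pop();
                t();                        // may be preempted by interrupts
            } else {
                sleep_until_interrupt();    // no external events: sleep
            }
        }
    }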

Pull processing in sensor networks

Although TinyGALS does not currently use pull processing, the following example by Jie Liu, given in Yang Zhao's paper [109], illustrates a situation in which pull processing is desirable for eliminating unnecessary computation. Figure 6.5 shows a sensor network application in which four nodes cooperate to detect intruders. Each node is only capable of detecting intruders within a limited range and has a limited battery life. Node A has more power and functionality than other nodes in the system. It is known that an intruder is most likely to come from the west, somewhat likely to come from the south, but very unlikely to come from the east or north. Communication with other nodes consumes more power than performing local computations, so nodes should send data only when necessary. Under these assumptions, node A may want to pull data from other nodes only when needed.

Figure 6.5: A sensor network application (nodes A, B, C, and D, with north at the top).

Figure 6.6 shows one possible configuration for this kind of pull processing. The center component is similar to the Click scheduler element. This example also demonstrates a way to perform distributed multitasking: node D (and others) may be free to perform other computations while node A performs most of the intrusion detection. This could be an extension to the current single-node architecture of TinyGALS.

Figure 6.6: Pull processing across multiple nodes (node A pulls from nodes B, C, and D).
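One way node A's pull schedule could be coded is sketched below. The request_reading() stub stands in for a radio round-trip to a neighboring node; the whole example is hypothetical and is neither TinyGALS nor Click code.

    #include <cstdio>

    enum Direction { WEST, SOUTH, EAST, NORTH };

    // Stub: pull the detector output of the node covering this direction.
    bool request_reading(Direction dir) { return dir == WEST; }  // canned demo

    bool detect_intruder() {
        const Direction by_likelihood[] = {WEST, SOUTH, EAST, NORTH};
        for (Direction dir : by_likelihood) {
            if (request_reading(dir)) {
                return true;   // later, less likely pulls are skipped,
            }                  // saving radio traffic and energy
        }
        return false;
    }

    int main() {
        std::printf("intruder detected: %s\n", detect_intruder() ? "yes" : "no");
        return 0;
    }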

6.1.5 Click and Ptolemy II

The MESCAL project has created a tool called Teepee [71], which is based on Ptolemy II and implements the Click model of computation. The CI (component interaction) domain [16] in Ptolemy II models systems that contain both event-driven and demand-driven styles of computation. CI is motivated by the push/pull interaction between data producers and consumers in middleware services such as the CORBA event service. CI actors can be active (i.e., have their own thread of execution) or passive (triggered by an active actor). There is a natural correlation between the CI domain and Click. CI and Click could be leveraged to implement TinyGALS in Ptolemy II, possibly using the ClassWrapper actor to model TinyGALS components.

6.1.6 Timed Multitasking

Timed multitasking (TM) [69] is an event-triggered programming model that takes a time-centric approach to real-time programming but controls timing properties through deadlines and events rather than time triggers. Software components in TM are called actors. Actors in a TM model declare their computing functionality and also specify their execution requirements in terms of trigger conditions, execution time, and deadlines. An actor represents a sequence of reactions, where a reaction is a finite piece of computation. Actors have state, which carries from one reaction to another. Actors can only communicate with other actors and the physical world through ports; unlike method calls in object-oriented models, interaction with the ports of an actor may not directly transfer the flow of control to another actor.

The system activates an actor when its trigger condition is satisfied. A trigger condition can be built using real-time physical events, communication packets, and/or messages from other actors. Triggers must be responsible, which means that once triggered, an actor should not need any additional data to complete its finite computation. Therefore, actors are never blocked on reading. If there are enough resources at run time, then the system grants the actor at least the declared execution time before it reaches its deadline. The system makes the results of the execution available to other actors and the physical world only at the deadline time. In cases where an actor cannot finish by its deadline, the TM model includes an overrun handler to preserve the timing determinism of all other actors and allow an actor that violates the deadline to come to a quiescent state. The communication among the actors has event semantics, in which, unlike
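The following sketch summarizes what a TM actor declares; the field names are hypothetical, and the runtime skeleton is only meant to show where the deadline and overrun handler fit, not how the TM runtime is actually implemented.

    #include <cstdint>

    struct TMActor {
        bool (*trigger)();       // responsible trigger: no further input needed
        void (*reaction)();      // one finite reaction of the actor
        void (*overrun)();       // drives a late actor to a quiescent state
        uint32_t exec_time_us;   // declared execution time, granted if feasible
        uint32_t deadline_us;    // results become visible only at the deadline
    };

    // Activate a triggered actor; if its reaction is still running at the
    // deadline, invoke the overrun handler so the timing determinism of all
    // other actors is preserved.
    void activate(TMActor& a, uint32_t start_us, uint32_t (*clock_us)()) {
        if (!a.trigger()) return;
        a.reaction();
        if (clock_us() - start_us > a.deadline_us) {
            a.overrun();         // deadline violated
        }
        // Otherwise the runtime publishes outputs at start_us + a.deadline_us.
    }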

state semantics, every piece of data is produced and consumed exactly once. In a TM model, the sender of a communication is never blocked on writing, and outputs are made immediately available as trigger events to downstream actors. The event semantics can be implemented by FIFO queues, as in the implementation of TM in Ptolemy II.

Section 3.2 suggested that a partial method of reducing non-determinacy in TinyGALS programs due to one or more interrupts during an actor iteration is to delay producing outputs from an actor until the end of its iteration. This is similar to TM's approach of making an actor's outputs visible only at its deadline.

Liu and Lee [69] describe a method for generating the interfaces and interactions among TM actors into an imperative language like C. There are two types of actors: interrupt service routines (ISRs), which respond to external events, and tasks, which are triggered entirely by events produced by peer actors. These two types do not intersect. Conceptually, an ISR usually appears as a source actor or a port that transfers events into the model. ISRs do not have triggering rules, and an ISR is synthesized as an independent thread. Tasks have a much richer set of interfaces than ISRs and have a set of methods that define the split-phase reaction of a task. Events on a connection between two actors are represented by a global data structure, which contains the communicating data, a mutual-exclusion lock to guard the access to the variable if necessary, and a flag indicating whether the event has been consumed. The TM runtime system uses an event dispatcher to trigger a task when a new event is received at its port.

6.2 Design, Simulation, and Deployment Environments

A number of frameworks for designing, simulating, and deploying wireless systems exist, though none include all of the capabilities of Viptos. Some information presented in this section is excerpted from papers on VisualSense [8] and Viptos [21, 22].

6.2.1 Design and simulation environments

ns-2 [77] is a well-established, open-source network simulator. It is a discrete-event simulator with extensive support for simulating TCP/IP, routing, and multicast protocols over wired and wireless (local and satellite) networks. Wireless and mobility support in ns-2 comes from the Monarch project, which provides channel models and wireless network layer components in the physical, link, and routing layers [14].
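A sketch of this per-connection event record and its exactly-once consumption follows, in the spirit of the C synthesis described in [69]; the names are hypothetical.

    #include <cstdint>
    #include <mutex>

    struct TMEvent {
        int32_t    data = 0;        // the communicated value
        bool       consumed = true; // false while an unread event is pending
        std::mutex lock;            // guards data and consumed, if necessary
    };

    // Producer side: never blocks waiting for the consumer. The event
    // dispatcher would then check the receiving task's trigger condition.
    void tm_emit(TMEvent& ev, int32_t value) {
        std::lock_guard<std::mutex> g(ev.lock);
        ev.data = value;
        ev.consumed = false;
    }

    // Consumer side: each piece of data is consumed exactly once.
    bool tm_consume(TMEvent& ev, int32_t& out) {
        std::lock_guard<std::mutex> g(ev.lock);
        if (ev.consumed) return false;   // nothing new at this port
        out = ev.data;
        ev.consumed = true;
        return true;
    }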

SensorSim [79] builds on ns-2 and claims power models and sensor channel models. A power model consists of an energy provider (the battery) and a set of energy consumers (CPU, radio, and sensors). An energy consumer can have several modes, each corresponding to a different trade-off between performance and power. The sensor channels model the dynamic interaction between the physical environment and the sensor nodes. SensorSim also claims hybrid simulation in which real sensor nodes can participate. Unfortunately, SensorSim is no longer under development and will not be publicly released.

OPNET Modeler [78] is a commercial tool that offers sophisticated modeling and simulation of communication networks. An OPNET model is hierarchical, where the top level contains the communication nodes and the topology of the network. Each node can be constructed from software components, called processes, and each process can be constructed using finite state machine (FSM) models. In conventional OPNET models, nodes are connected by static links. The OPNET Wireless Module provides support for wireless and mobile communications. It uses a 13-stage "transceiver pipeline" to dynamically determine the connectivity and propagation effects among nodes. Users can specify transceiver frequency, bandwidth, power, and other characteristics. The transceiver pipeline stages use these characteristics to calculate the average power level of the received signals to determine whether the receiver can receive this signal. OPNET also supports antenna gain patterns and terrain models.

OMNeT++ [98] is an open source tool for discrete-event modeling. It shares many concepts, solutions, and features with OPNET, but instead of using FSM models for processes, OMNeT++ defines a component interface for the basic module, with an object-oriented approach similar to the abstract semantics of Ptolemy II [28]. With the Mobility Framework extension, application-specific models can be defined by sub-classing classes in the simulation framework and customizing their behaviors. The NesCT tool of the EYES WSN project allows users to run TinyOS applications directly in OMNeT++ simulations.

J-Sim [97] is an open-source, component-based, compositional network simulation environment developed entirely in Java. A new wireless sensor framework [90] builds upon the autonomous component architecture (ACA) and the extensible internetworking framework (INET) of J-Sim, and provides an object-oriented definition of (1) target, sensor, and sink nodes, (2) sensor and wireless communication channels, and (3) physical media such as seismic channels, mobility models, and power models (both energy-producing and energy-consuming components). It uses a discrete-event simulator to execute the entire model. It also includes a set of classes and mechanisms to realize network emulation. This new framework extends the notion of network emulation to Berkeley Mica
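As a rough idea of the kind of computation such a pipeline stage performs, the following sketch estimates received power with a textbook log-distance path-loss model and compares it against a receiver sensitivity threshold. This is a generic illustration, not OPNET's actual 13-stage pipeline, and all numbers are made up for the example.

    #include <cmath>
    #include <cstdio>

    // Pr(dBm) = Pt(dBm) - PL(d0) - 10 * n * log10(d / d0), with d0 = 1 m.
    double received_power_dbm(double tx_dbm, double dist_m,
                              double ref_loss_db, double path_exponent) {
        return tx_dbm - ref_loss_db - 10.0 * path_exponent * std::log10(dist_m);
    }

    int main() {
        const double sensitivity_dbm = -94.0;   // assumed receiver threshold
        double pr = received_power_dbm(0.0 /*dBm*/, 40.0 /*m*/,
                                       40.0 /*dB at 1 m*/, 3.0 /*exponent*/);
        std::printf("Pr = %.1f dBm -> %s\n", pr,
                    pr >= sensitivity_dbm ? "receivable" : "below sensitivity");
        return 0;
    }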

mote-based wireless sensor networks. Physical environment data from the network is extracted with SerialForwarder, a utility distributed with TinyOS that collects TinyOS packets sent to a mote base station attached to a PC and forwards them through the serial port.

Em* [34] is a toolsuite for developing sensor network applications on Linux-based hardware platforms called microservers. It supports deployment, simulation, emulation, and visualization of live systems, both real and simulated. Em* modules are implemented as user-space processes that communicate through message passing via device files. EmTOS [35] is an extension to Em* that enables an entire nesC/TinyOS application to run as a single module in an Em* system. The EmTOS wrapper library is similar to the TOSSIM simulated device library. EmTOS modules are restricted to using the Linux scheduler as the main programming model. This means that the minimum granularity of a timer is 10 milliseconds, corresponding to the Linux jiffy clock that is part of the scheduler in the Linux 2.4 kernel.

Prowler [86] is a probabilistic wireless network simulator running under MATLAB and can simulate wireless distributed systems, from the application to the physical communication layer. It can incorporate an arbitrary number of motes, on arbitrary (possibly dynamic) topology, and it was designed to be easily embedded into optimization algorithms. Although Prowler provides a generic simulation environment, its current target platform is the Berkeley Mica mote running TinyOS. Prowler is an event-driven simulator that can be set to operate in either deterministic mode (to produce replicable results while testing the application) or in probabilistic mode (to simulate the nondeterministic nature of the communication channel and the low-level communication protocol of the motes).

GloMoSim (Global Mobile system Simulator), from UCLA, is a scalable environment for parallel simulation of wireless systems [106]. It supports both wired and wireless networks and relies on Parsec, a C-based simulation language for sequential and parallel execution of discrete-event simulation models. GloMoSim is designed to be extensible and composable: the communication protocol stack for wireless networks is divided into a set of layers, each with its own API, similar to the OSI (Open Systems Interconnection) seven-layer network architecture. Bagrodia founded Scalable Network Technologies, Inc., which expanded and further developed GloMoSim into a commercial tool called QualNet.

TinyViz [65] is a Java-based graphical user interface for TOSSIM. TinyViz supports software plugins that watch for events coming from the simulation—such as debug messages and radio messages—and react by drawing information on the display, setting simulation parameters, or actuating the simulation itself, for example, by setting the sensor values that simulated motes read.

TinyViz includes a radio model plugin with two built-in models: "Empirical" (based on an outdoor trace of packet connectivity with the RFM1000 radios) and "Fixed radius" (all motes within a given fixed distance of each other have perfect connectivity, and no connectivity to other motes).

Other simulators used in the TinyOS community for cycle-accurate simulation/emulation of the Atmel AVR (the processor used in the Mica mote series) instruction set include ATEMU [80] and Avrora [96]. ATEMU simulates a byte-oriented interface to the radio and its transmissions at the bit level with precise timing. Avrora works at the byte level with precise timing, and its simulation speed scales much better than ATEMU for large numbers of nodes.

DyMND-EE [110] is a wireless sensor network simulator based on Ptolemy II that uses Em* to run nesC code in a Linux environment, using the FUSD kernel module to provide connections between simulated nodes and the DyMND-EE simulation manager. DyMND-EE is similar to Viptos, except that it requires modification to the nesC source code in order to use simulated sensor and other devices. One interesting part of this project is the DyMND Execution Sequencer (DES) user interface, which allows users to graphically specify the deployment topology of a sensor network, including target positions and trajectories, and to generate an XML configuration file. DES generates a model encompassing the full sensor network from this XML configuration and existing Ptolemy II descriptions of the required actors, along with any properties files required by external runtime environments like Em*. This generative technique is similar to the metaprogramming techniques presented in Chapter 5.

All of these systems provide extension points where model builders can define functionality by adding code. All except Em* provide some form of discrete-event simulation. Some are also open-source software. However, none provide the ability to transition from high-level modeling to real code simulation and deployment, and none provide the ability that Viptos inherits from Ptolemy II to integrate diverse models of computation, such as continuous-time, dataflow, synchronous/reactive, and time-triggered. Viptos and Ptolemy II support hierarchical nesting of heterogeneous models of computation [28]. This capability can be used, for example, to model the physical environment, energy consumption and production, signal processing, or real-time software behavior, as well as the physical dynamics of mobility of sensor nodes and their digital circuits; in the other frameworks, such models would have to be built with low-level code. Viptos and Ptolemy II also appear to be the only frameworks to provide a modern type system at the actor level (vs. the code level) [105], and they appear to be unique among these modeling environments in that FSM models can be arbitrarily nested with other models, i.e., they are not restricted to be leaf nodes [33]. Both support simulation of heterogeneous networks.
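The "Fixed radius" model translates directly into code; the sketch below mirrors its definition (perfect connectivity within the radius, none outside) but is not TinyViz plugin code.

    #include <cmath>
    #include <vector>

    struct Mote { double x, y; };

    bool connected(const Mote& a, const Mote& b, double radius_m) {
        double dx = a.x - b.x, dy = a.y - b.y;
        return std::sqrt(dx * dx + dy * dy) <= radius_m;
    }

    // Symmetric connectivity matrix for a deployment of motes.
    std::vector<std::vector<bool>> connectivity(const std::vector<Mote>& motes,
                                                double radius_m) {
        size_t n = motes.size();
        std::vector<std::vector<bool>> adj(n, std::vector<bool>(n, false));
        for (size_t i = 0; i < n; ++i)
            for (size_t j = i + 1; j < n; ++j)
                adj[i][j] = adj[j][i] = connected(motes[i], motes[j], radius_m);
        return adj;
    }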

6.2.2 TinyOS development and editing environments

GRATIS II (Graphical Development Environment for TinyOS) is built on top of GME 3 (Generic Modeling Environment). The TinyOS component library is available as graphical blocks within GRATIS II. Given a valid model, the GRATIS II code generator can transform all the interface and wiring information into a set of nesC target files, which nc2moml can then import into Viptos for simulation. However, GRATIS II was developed mainly for static analysis of TinyOS component graphs and does not support simulation.

TinyDT is a TinyOS 1.x plugin for the Eclipse platform that implements an IDE (integrated development environment) for TinyOS/nesC development. This open source project features syntax highlighting of nesC code, code navigation, code completion for interface members, automatic build support, support for multiple target platforms and sensor boards, team development support (through Eclipse-CVS integration), and support for multiple TinyOS source trees. TinyDT uses a Java-based nesC parser implemented using ANTLR to build an in-memory representation of the actual nesC application, which includes the component hierarchy, wirings, interfaces, and the JavaDoc-style nesC documentation. TinyOS IDE is another Eclipse plugin that supports TinyOS project development and provides nesC syntax highlighting. Both TinyDT and TinyOS IDE complement Viptos in that they can be used to create and edit the source code for new TinyOS library components.

6.2.3 Programming and deployment environments

Sun Microsystems Laboratories has created a Java-based wireless sensor network platform called Sun SPOT (Small Programmable Object Technology) [88]. The Sun SPOT is based on a 32-bit 180 MHz ARM920T core with 512 KB of RAM and 4 MB of flash memory, and includes a CC2420 802.15.4 radio with an effective range of about 80 meters. It runs the Squawk VM [85], a small J2ME-compliant Java virtual machine. The Sun SPOT supports multiple concurrently running applications.

SPOTWorld is "an integrated management, debugging and programming tool" for Sun SPOTs [87, 89]. This graphical tool can run stand-alone or be integrated with NetBeans. SPOTWorld depicts each automatically discovered Sun SPOT, and users can manage each device, e.g., to get device status information, set a persistent name property, reset the device, deploy code, or start any of the available applications. Users can graphically address individual running applications in order to pause, resume, or exit each one. SPOTWorld also has an experimental feature that allows users to drag an application from one SPOT to the next, even as the application runs. SPOTWorld

enables the user to compile a collection of applications and deploy the resulting file over the air to selected Sun SPOTs. The developers of the Sun SPOT and SPOTWorld have expressed interest in integrating features of Viptos with SPOTWorld.

6.3 Summary

This chapter presented a number of frameworks related to TinyGALS/galsC, Viptos, and the metaprogramming techniques presented in earlier chapters. Future versions of these tools can benefit from cross-fertilization of the techniques presented in this dissertation.

Chapter 7

Conclusion

Developing software for wireless sensor networks today is an error-prone and tedious process that involves patching together many different tools and techniques, usually using very low-level code. This dissertation discussed raising the conceptual level of designing, simulating, and deploying wireless sensor network applications by using actor-oriented programming tools and techniques. Actor-oriented programming provides a way to unify the layers and stages of application development—between the operating system, node-centric, middleware, and macroprogramming layers, and between the deployment, simulation, and design stages.

TinyGALS provides a globally asynchronous, locally synchronous programming model that combines an actor-oriented (message-oriented) model with an object-oriented (procedure-oriented) model, which allows application developers to use high-level actors as a first-order programming concept but still allows them to use a low-level programming model when needed. This combination balances fast response with an easy-to-understand programming model that puts application tasks first. galsC is a language that implements the TinyGALS programming model, and this dissertation described its syntax and the high-level type checking, concurrency error detection, and scheduling and communication code generation facilities provided by the galsC compiler.

Viptos provides an integrated, actor-oriented design, simulation, and deployment environment for wireless sensor network applications. Application developers can use Viptos to create abstract models of their intended systems and refine them down to low-level code that can be transferred to target hardware. Various metaprogramming and generative programming techniques described in this dissertation, using higher-order actors in Ptalon, VisualSense, and/or Viptos, enable wireless sensor network application developers to create high-level descriptions or models and automatically

generate sensor network simulation scenarios. All of the tools I developed and described in this dissertation are open-source and freely available on the web. The networked embedded computing community can use these tools and the knowledge shared in this dissertation to improve the way we program wireless sensor networks.

Bibliography

[1] Gul Agha, Svend Frølund, WooYoung Kim, Rajendra Panwar, Anna Patterson, and Daniel Sturman. Abstraction and modularity mechanisms for concurrent computing. IEEE Parallel and Distributed Technology: Systems and Applications, 1(2):3–14, 1993.
[2] Gul A. Agha. ACTORS: A Model of Concurrent Computation in Distributed Systems. The MIT Press Series in Artificial Intelligence. MIT Press, Cambridge, 1986.
[3] Gul A. Agha, Ian A. Mason, Scott F. Smith, and Carolyn L. Talcott. A foundation for actor computation. Journal of Functional Programming, 7(1):1–72, 1997.
[4] Todd R. Andel and Alec Yasinsac. On the credibility of MANET simulations. Computer, 39(7):48–54, July 2006.
[5] Amol Bakshi and Viktor K. Prasanna. Algorithm design and synthesis for wireless sensor networks. In ICPP '04: Proceedings of the 2004 International Conference on Parallel Processing, pages 423–430, Washington, DC, USA, 2004. IEEE Computer Society.
[6] Amol Bakshi and Viktor K. Prasanna. Structured communication in single hop sensor networks. In Proceedings of the First European Workshop on Wireless Sensor Networks (EWSN 2004), pages 138–153, January 2004.
[7] Felice Balarin, Massimiliano Chiodo, Paolo Giusto, Harry Hsieh, Attila Jurecska, Luciano Lavagno, Alberto Sangiovanni-Vincentelli, Ellen M. Sentovich, and Kei Suzuki. Synthesis of software programs for embedded control applications. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 18(6):834–849, June 1999.
[8] Philip Baldwin, Sanjeev Kohli, Edward A. Lee, Xiaojun Liu, and Yang Zhao. Modeling of sensor nets in Ptolemy II. In IPSN '04: Proceedings of the Third International Symposium on Information Processing in Sensor Networks, pages 359–368, New York, NY, USA, 2004. ACM Press.
[9] Kathleen Baynes, Chris Collins, Eric Fiterman, Brinda Ganesh, Paul Kohout, Christine Smit, Tiebing Zhang, and Bruce Jacob. The performance and energy consumption of embedded real-time operating systems. IEEE Transactions on Computers, 52(11):1454–1469, 2003.
[10] Albert Benveniste, Benoît Caillaud, and Paul Le Guernic. From synchrony to asynchrony. In CONCUR '99: Proceedings of the 10th International Conference on Concurrency Theory, pages 162–177, London, UK, 1999. Springer-Verlag.
[11] Jan Beutel. Fast-prototyping using the BTnode platform. In DATE '06: Proceedings of the Conference on Design, Automation and Test in Europe, pages 977–982, 3001 Leuven, Belgium, 2006. European Design and Automation Association.
[12] J. Bhasker. A SystemC Primer, Second Edition. Star Galaxy Publishing, 2004.
[13] Shah Bhatti, James Carlson, Hui Dai, Jing Deng, Jeff Rose, Anmol Sheth, Brian Shucker, Charles Gruenwald, Adam Torgerson, and Richard Han. MANTIS OS: an embedded multithreaded operating system for wireless micro sensor platforms. Mobile Networks and Applications, 10(4):563–579, 2005.
[14] Josh Broch, David A. Maltz, David B. Johnson, Yih-Chun Hu, and Jorjeta Jetcheva. A performance comparison of multi-hop wireless ad hoc network routing protocols. In MobiCom '98: Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking, pages 85–97, New York, NY, USA, 1998. ACM Press.
[15] Christopher Brooks, Edward A. Lee, Xiaojun Liu, Stephen Neuendorffer, Yang Zhao, and Haiyang Zheng (eds.). Heterogeneous concurrent modeling and design in Java (Volume 1: Introduction to Ptolemy II). Technical Report UCB/ERL M05/23, EECS Department, University of California, Berkeley, July 2005.
[16] Christopher Brooks, Edward A. Lee, Xiaojun Liu, Stephen Neuendorffer, Yang Zhao, and Haiyang Zheng (eds.). Heterogeneous concurrent modeling and design in Java (Volume 3: Ptolemy II domains). Technical Report UCB/EECS-2007-7, EECS Department, University of California, Berkeley, 11 January 2007.
[17] Frederick P. Brooks, Jr. The Mythical Man-Month: Essays on Software Engineering, 20th Anniversary Edition. Addison Wesley Longman, Inc., 1995.
[18] Adam Cataldo, Elaine Cheong, Thomas Huining Feng, Edward A. Lee, and Andrew Christopher Mihal. A formalism for higher-order composition languages that satisfies the Church-Rosser property. Technical Report UCB/EECS-2006-48, EECS Department, University of California, Berkeley, 9 May 2006.
[19] James Adam Cataldo. The Power of Higher-Order Composition Languages in System Design. PhD thesis, EECS Department, University of California, Berkeley, 18 December 2006.
[20] Elaine Cheong. Design and implementation of TinyGALS: A programming model for event-driven embedded systems. Master's thesis, University of California, Berkeley, May 2003. Published as Technical Memorandum UCB/ERL M03/14.
[21] Elaine Cheong, Edward A. Lee, and Yang Zhao. Viptos: A graphical development and simulation environment for TinyOS-based wireless sensor networks. Technical Report UCB/EECS-2006-15, EECS Department, University of California, Berkeley, February 2006.
[22] Elaine Cheong, Edward A. Lee, and Yang Zhao. Joint modeling and design of wireless networks and sensor node software. Technical Report UCB/EECS-2006-150, EECS Department, University of California, Berkeley, November 2006.
[23] Elaine Cheong, Judy Liebman, Jie Liu, and Feng Zhao. TinyGALS: A programming model for event-driven embedded systems. In Proceedings of the Eighteenth Annual ACM Symposium on Applied Computing, pages 698–704, March 2003.
[24] Elaine Cheong and Jie Liu. galsC: A language for event-driven embedded systems. In Proceedings of Design, Automation and Test in Europe (DATE05), 7–11 March 2005.
[25] Elaine Cheong and Jie Liu. galsC: A language for event-driven embedded systems. Memorandum UCB/ERL M04/7, University of California, Berkeley, CA, USA 94720, April 2004.
[26] Krzysztof Czarnecki. Overview of generative software development. In Unconventional Programming Paradigms (UPP) 2004, volume 3566/2005 of Lecture Notes in Computer Science, pages 326–341. Springer Berlin / Heidelberg, 2005.
[27] Adam Dunkels, Björn Grönvall, and Thiemo Voigt. Contiki - a lightweight and flexible operating system for tiny networked sensors. In Proceedings of the First IEEE Workshop on Embedded Networked Sensors (EmNetS-I), Tampa, Florida, USA, November 2004.
[28] Johan Eker, Jörn W. Janneck, Edward A. Lee, Jie Liu, Xiaojun Liu, Jozsef Ludvig, Stephen Neuendorffer, Sonia Sachs, and Yuhong Xiong. Taming heterogeneity—the Ptolemy approach. Proceedings of the IEEE, 91(1):127–144, January 2003.
[29] D. J. Ferreira, M. A. R. Dantas, C. Montez, A. Pinto, and Martius Rodriguez. A middleware for OSCAR and wireless sensor network environments. In HPCS '07: Proceedings of the 21st International Symposium on High Performance Computing Systems and Applications, 2007.
[30] Chien-Liang Fok, Gruia-Catalin Roman, and Chenyang Lu. Mobile agent middleware for sensor networks: An application case study. In Proceedings of the 4th International Conference on Information Processing in Sensor Networks (IPSN'05), pages 382–387, April 2005. IEEE.
[31] Massimo Franceschetti and Ronald Meester. Navigation in small world networks: a scale-free continuum model. Journal of Applied Probability, 43(4):1173–1180, 2006.
[32] David Gay, Phil Levis, Rob von Behren, Matt Welsh, Eric Brewer, and David Culler. The nesC language: A holistic approach to networked embedded systems. In Proceedings of Programming Language Design and Implementation (PLDI) 2003, June 2003.
[33] Alain Girault, Bilung Lee, and Edward A. Lee. Hierarchical finite state machines with multiple concurrency models. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 18(6):742–760, June 1999.
[34] Lewis Girod, Jeremy Elson, Alberto Cerpa, Thanos Stathopoulos, Nithya Ramanathan, and Deborah Estrin. EmStar: A software environment for developing and deploying wireless sensor networks. In ATEC'04: Proceedings of the USENIX Annual Technical Conference 2004, pages 283–296, Berkeley, CA, USA, 2004. USENIX Association.
[35] Lewis Girod, Thanos Stathopoulos, Nithya Ramanathan, Jeremy Elson, Deborah Estrin, Eric Osterweil, and Tom Schoellhammer. A system for simulation, emulation, and deployment of heterogeneous sensor networks. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 201–213, New York, NY, USA, 2004. ACM Press.
[36] Omprakash Gnawali, Ki-Young Jang, Jeongyeup Paek, Marcos Vieira, Ramesh Govindan, Ben Greenstein, August Joki, Deborah Estrin, and Eddie Kohler. The Tenet architecture for tiered sensor networks. In SenSys '06: Proceedings of the 4th International Conference on Embedded Networked Sensor Systems, pages 153–166, New York, NY, USA, 2006. ACM Press.
[37] Ben Greenstein, Eddie Kohler, and Deborah Estrin. A sensor network application construction kit (SNACK). In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 69–80, New York, NY, USA, 2004. ACM Press.
[38] Ramakrishna Gummadi, Omprakash Gnawali, and Ramesh Govindan. Macro-programming wireless sensor networks using Kairos. In Proceedings of the International Conference on Distributed Computing in Sensor Systems (DCOSS), volume 3560/2005 of Lecture Notes in Computer Science, pages 126–140. Springer Berlin / Heidelberg, 2005.
[39] Nicolas Halbwachs. Synchronous Programming of Reactive Systems. Kluwer Academic Publishers, 1993.
[40] Chih-Chieh Han, Ram Kumar, Roy Shea, Eddie Kohler, and Mani Srivastava. A dynamic operating system for sensor nodes. In MobiSys '05: Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, pages 163–176, New York, NY, USA, 2005. ACM Press.
[41] Per Brinch Hansen. An evaluation of the message-passing interface. ACM SIGPLAN Notices, 33(3):65–72, 1998.
[42] David Harel, Hagi Lachover, Amnon Naamad, Amir Pnueli, Michal Politi, Rivi Sherman, Aharon Shtull-Trauring, and Mark Trakhtenbrot. STATEMATE: A working environment for the development of complex reactive systems. IEEE Transactions on Software Engineering, 16(4):403–414, April 1990.
[43] Rolf Hempel and David W. Walker. The emergence of the MPI message passing standard for parallel computing. Computer Standards & Interfaces, 21(1):51–62, 1999.
[44] Thomas A. Henzinger, Benjamin Horowitz, and Christoph Meyer Kirsch. Embedded control systems development with Giotto. In Proceedings of the ACM SIGPLAN Workshop on Languages, Compilers and Tools for Embedded Systems (LCTES'01), pages 64–72, 2001.
[45] Maurice Herlihy. A methodology for implementing highly concurrent data objects. ACM Transactions on Programming Languages and Systems, 15(5):745–770, November 1993.
[46] Carl Hewitt. Viewing control structures as patterns of passing messages. Journal of Artificial Intelligence, 8(3):323–364, 1977.
[47] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and Kristofer Pister. System architecture directions for networked sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 93–104, 2000.
[48] Jason Hill. A software architecture supporting networked sensors. Master's thesis, University of California, Berkeley, 2000.
[49] Christopher Hylands, Edward Lee, Jie Liu, Xiaojun Liu, Stephen Neuendorffer, Yuhong Xiong, Yang Zhao, and Haiyang Zheng. Overview of the Ptolemy project. Technical Report UCB/ERL M03/25, University of California, Berkeley, July 2003.
[50] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed diffusion: a scalable and robust communication paradigm for sensor networks. In MobiCom '00: Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, pages 56–67, New York, NY, USA, 2000. ACM Press.
[51] Anoop Iyer and Diana Marculescu. Power and performance evaluation of globally asynchronous locally synchronous processors. In Proceedings of the 29th Annual International Symposium on Computer Architecture, pages 158–168, 2002.
[52] Gilles Kahn. The semantics of a simple language for parallel programming. In Proceedings of the IFIP Congress 74, pages 471–475, Paris, France, 1974. International Federation for Information Processing, North-Holland Publishing Company.
[53] Oliver Kasten and Kay Römer. Beyond event handlers: programming wireless sensors with attributed state machines. In IPSN '05: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, pages 45–52, Piscataway, NJ, USA, 2005. IEEE Press.
[54] Eddie Kohler. The Click Modular Router. PhD thesis, Massachusetts Institute of Technology, November 2000.
[55] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The Click modular router. ACM Transactions on Computer Systems (TOCS), 18(3):263–297, 2000.
[56] YoungMin Kwon, Sameer Sundresh, Kirill Mechitov, and Gul Agha. ActorNet: an actor platform for wireless sensor networks. In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1297–1300, New York, NY, USA, 2006. ACM Press.
[57] William W. LaRue, Sherry Solden, and Bishnupriya Bhattacharya. Functional and performance modeling of concurrency in VCC. In Concurrency and Hardware Design, Advances in Petri Nets, pages 191–227, London, UK, 2002. Springer-Verlag.
[58] Hugh C. Lauer and Roger M. Needham. On the duality of operating system structures. In Proc. Second International Symposium on Operating Systems, IRIA, October 1978. Reprinted in Operating Systems Review, 13(2), April 1979, pp. 3–19.
[59] Edward A. Lee. Embedded software. Advances in Computers, volume 56, 2002.
[60] Edward A. Lee and Thomas M. Parks. Dataflow process networks. Proceedings of the IEEE, 83(5):773–801, May 1995.
[61] Edward A. Lee. Modeling concurrent real-time processes using discrete events. Annals of Software Engineering, 7(1-4):25–45, 1999.
[62] Edward A. Lee and Steve Neuendorffer. MoML – a modeling markup language in XML – version 0.4. Technical Report UCB/ERL M00/12, University of California, Berkeley, March 2000.
[63] Man-Kit Leung. Reviving the value of WSN simulation results through Viptos extensions. Spring 2007 EE290Q (Wireless Sensor Networks) Class Project Report, 9 May 2007.
[64] Philip Levis and David Culler. Maté: a tiny virtual machine for sensor networks. In ASPLOS-X: Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 85–95, New York, NY, USA, 2002. ACM Press.
[65] Philip Levis, Nelson Lee, Matt Welsh, and David Culler. TOSSIM: accurate and scalable simulation of entire TinyOS applications. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (SenSys 2003), pages 126–137, New York, NY, USA, 2003. ACM Press.
[66] Hong Lin, John Rushing, Steve Tanner, Sara J. Graves, and Evans Criswell. Real time target tracking with binary sensor networks and parallel computing. In Proceedings of 2006 IEEE International Conference on Granular Computing, pages 112–117, 10-12 May 2006.
[67] Jie Liu, Maurice Chu, Juan Liu, James Reich, and Feng Zhao. State-centric programming for sensor-actuator network systems. IEEE Pervasive Computing, 2(4):50–62, 2003.
[68] Jie Liu, Elaine Cheong, and Feng Zhao. Semantics-based optimization across uncoordinated tasks in networked embedded systems. In EMSOFT '05: Proceedings of the 5th ACM International Conference on Embedded Software, pages 273–281, New York, NY, USA, 2005. ACM Press.
[69] Jie Liu and Edward A. Lee. Timed multitasking for real-time embedded software. IEEE Control Systems Magazine, pages 65–75, February 2003.
[70] Samuel Madden, Michael J. Franklin, Joseph M. Hellerstein, and Wei Hong. The design of an acquisitional query processor for sensor networks. In SIGMOD '03: Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, pages 491–502, New York, NY, USA, 2003. ACM Press.
[71] Andrew Mihal and Kurt Keutzer. Mapping concurrent applications onto architectural platforms. In Axel Jantsch and Hannu Tenhunen, editors, Networks on Chip, chapter 3, pages 39–59. Kluwer Academic Publishers, 2003.
[72] Thomas J. Mowbray and William A. Ruh. Inside CORBA: Distributed Object Standards and Applications. Addison-Wesley, 1997.
[73] Walid A. Najjar, Edward A. Lee, and Guang R. Gao. Advances in the dataflow computational model. Parallel Computing, 25(1):1907–1929, January 1999.
[74] Stephen A. Neuendorffer. Actor-Oriented Metaprogramming. PhD thesis, University of California, Berkeley, 2005.
[75] Ryan Newton, Arvind, and Matt Welsh. Building up to macroprogramming: an intermediate language for sensor networks. In IPSN '05: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, pages 37–44, Piscataway, NJ, USA, 2005. IEEE Press.
[76] Ryan Newton, Greg Morrisett, and Matt Welsh. The Regiment macroprogramming system. In IPSN '07: Proceedings of the 6th International Conference on Information Processing in Sensor Networks, pages 489–498, New York, NY, USA, 2007. ACM Press.
[77] The network simulator - ns-2. http://www.isi.edu/nsnam/ns.
[78] OPNET Technologies, Inc. Opnet modeler. http://www.opnet.com.
[79] Sung Park, Andreas Savvides, and Mani B. Srivastava. SensorSim: a simulation framework for sensor networks. In MSWIM '00: Proceedings of the 3rd ACM International Workshop on Modeling, Analysis and Simulation of Wireless and Mobile Systems, pages 104–111, New York, NY, USA, 2000. ACM Press.
[80] Jonathan Polley, Dionysys Blazakis, Jonathan McGee, Dan Rusk, John S. Baras, and Manish Karir. ATEMU: A fine-grained sensor network simulator. In Proceedings of the First IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (SECON'04), 2004.
[81] Parmesh Ramanathan, Kuang-Ching Wang, Kewal Saluja, and Thomas Clouqueur. UW-API: A network routing application programmer's interface (draft version 1.2). Technical report, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, 29 October 2001.
[82] Hideki John Reekie. Realtime Signal Processing: Dataflow, Visual, and Functional Programming. PhD thesis, University of Technology at Sydney, 1995.
[83] Thomas J. Santner, Brian J. Williams, and William I. Notz. The Design and Analysis of Computer Experiments. Springer Series in Statistics. Springer-Verlag New York, Inc., New York, NY, USA, 2003.
[84] Y. Ahmet Sekercioglu, András Varga, and Gregory K. Egan. Parallel simulation made easy with OMNeT++. In Proceedings of the 15th European Simulation Symposium (ESS'03), October 2003.
[85] Doug Simon, Cristina Cifuentes, Dave Cleal, John Daniels, and Derek White. Java on the bare metal of wireless sensor devices: The Squawk Java virtual machine. In VEE '06: Proceedings of the 2nd International Conference on Virtual Execution Environments, pages 78–88, New York, NY, USA, 2006. ACM Press.
[86] Gyula Simon, Péter Völgyesi, Miklós Maróti, and Ákos Lédeczi. Simulation-based optimization of communication protocols for large-scale wireless sensor networks. In Proceedings 2003 IEEE Aerospace Conference, volume 3, pages 3-1339–3-1346, 8-15 March 2003.
[87] Randall B. Smith, Bernard Horan, John Daniels, and Dave Cleal. Programming the world with Sun SPOTs. In OOPSLA '06: Companion to the 21st ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications, pages 706–707, New York, NY, USA, 2006. ACM Press.
[88] Randall B. Smith. SPOTWorld and the Sun SPOT. In IPSN '07: Proceedings of the 6th International Conference on Information Processing in Sensor Networks, pages 565–566, New York, NY, USA, 2007. ACM Press.
[89] Randall B. Smith, Cristina Cifuentes, and Doug Simon. Enabling Java for small wireless devices with Squawk and SpotWorld. In 2nd Workshop on Building Software for Pervasive Computing, 16 October 2005.
[90] Ahmed Sobeih, Wei-Peng Chen, Jennifer C. Hou, Lu-Chuan Kung, Ning Li, Hyuk Lim, Hung-Ying Tyan, and Honghai Zhang. J-Sim: A simulation environment for wireless sensor networks. In ANSS '05: Proceedings of the 38th Annual Symposium on Simulation, pages 175–187, Washington, DC, USA, 2005. IEEE Computer Society.
[91] David B. Stewart and Robert A. Brown. Grand challenges in mission-critical systems: Dynamically reconfigurable real-time software for flight control systems. In Workshop on Real-Time Mission-Critical Systems, in conjunction with the 1999 Real-Time Systems Symposium, November 1999.
[92] David B. Stewart, Richard A. Volpe, and Pradeep K. Khosla. Design of dynamically reconfigurable real-time software using port-based objects. IEEE Transactions on Software Engineering, 23(12):759–776, December 1997.
[93] Janos Sztipanovits and Gabor Karsai. Generative programming for embedded systems. In GPCE '02: Proceedings of the 1st ACM SIGPLAN/SIGSOFT Conference on Generative Programming and Component Engineering, pages 32–49, London, UK, 2002. Springer-Verlag.
[94] Arsalan Tavakoli, David Chu, Joseph Hellerstein, Philip Levis, and Scott Shenker. Declarative sensornet architecture. In International Workshop on Wireless Sensor Network Architecture (WWSNA 2007), 25-27 April 2007.
[95] TinyOS community forum: http://www.tinyos.net. An open-source OS for the networked sensor regime.
[96] Ben Titzer, Daniel Lee, and Jens Palsberg. Avrora: Scalable sensor network simulation with precise timing. In Proceedings of IPSN'05, Fourth International Conference on Information Processing in Sensor Networks, pages 477–482, 2005.
[97] Hung-Ying Tyan. Design, realization and evaluation of a component-based compositional software architecture for network simulation. PhD thesis, The Ohio State University, 2002.
[98] András Varga. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM'2001), 6-9 June 2001.
[99] Carl A. Waldspurger and William E. Weihl. Stride scheduling: Deterministic proportional-share resource management. Technical Report MIT/LCS/TM-528, Massachusetts Institute of Technology, Cambridge, MA, USA, June 1995.
[100] Mitchell Wand. Type inference for record concatenation and multiple inheritance. In Proceedings of the Fourth Annual Symposium on Logic in Computer Science, pages 92–97, Piscataway, NJ, USA, 1989. IEEE Press.
[101] Matt Welsh and Geoff Mainland. Programming sensor networks using abstract regions. In NSDI'04: Proceedings of the 1st Conference on Symposium on Networked Systems Design and Implementation, pages 29–42, Berkeley, CA, USA, 2004. USENIX Association.
[102] Kamin Whitehouse, Cory Sharp, Eric Brewer, and David Culler. Hood: a neighborhood abstraction for sensor networks. In MobiSys '04: Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, pages 99–110, New York, NY, USA, 2004. ACM Press.
[103] Kamin Whitehouse, Feng Zhao, and Jie Liu. Semantic Streams: A framework for composable semantic interpretation of sensor data. In K. Römer, H. Karl, and F. Mattern, editors, The Third European Workshop on Wireless Sensor Networks (EWSN), volume 3868 of Lecture Notes in Computer Science, pages 5–20. Springer-Verlag Berlin Heidelberg, 2006.
[104] Wikipedia. http://www.wikipedia.org/.
[105] Yuhong Xiong. An Extensible Type System for Component-Based Design. PhD thesis, University of California, Berkeley, 2002.
[106] Xiang Zeng, Rajive Bagrodia, and Mario Gerla. GloMoSim: A library for parallel simulation of large-scale wireless networks. In Proceedings of the 12th Workshop on Parallel and Distributed Simulation – PADS '98, pages 154–161, 26-29 May 1998.
[107] Feng Zhao and Leonidas Guibas. Wireless Sensor Networks: An Information Processing Approach. Elsevier/Morgan-Kaufmann, 2004.
[108] Feng Zhao, Jie Liu, Juan Liu, Leonidas Guibas, and James Reich. Collaborative signal and information processing: An information directed approach. Proceedings of the IEEE, 91(8):1199–1209, August 2003.
[109] Yang Zhao. A study of Click, TinyGALS and CI. http://ptolemy.eecs.berkeley.edu/~ellen_zh/click_tinygals_ci.pdf.
[110] Andrew L. Zimdars, James Yang, and Prasanta Bose. End-to-end prototyping and validation for health management sensor networks. In Proceedings of 2005 IEEE Aerospace Conference, pages 3820–3830, 5-12 March 2005.
