
COMMON-Sense Net low-level system design (Phase 1)

Jacques Panchard (EPFL)

October 31st, 2004

Ref.: CS-HLD-042907


1 Introduction
This document defines the low-level design of the COMMON-Sense Net prototype to be deployed in the field by mid-2005 (first deployment in the lab in January 2005). It is organized along the following lines:
- Requirements summary
- Server subsystems with APIs
- Embedded modules functional requirements
- Agenda for all parties involved from November 2004 to January 2005, with action items and milestones

2 System Requirements
2.1 Information needs
Based on [Rao 2004], the issues most critically at stake in our context (see [Panchard 2004]) are listed below, in each case with the relevant physical parameters:
- Control of diseases and yield of crop: soil moisture, temperature, solar radiation, rainfall, air humidity
- Water availability in bore wells: water pressure (pressure probes mounted as piezometers in the wells)

2.2 System dimensions

For deploying sensors to measure these parameters, it is necessary to define details such as the patch size selected for deployment, the number of sensors per patch, and the frequency of reporting. The table below lists the uses of the data collected, together with the sensor density and reporting frequency needed for the different uses of the four parameters. Data collected for potential uses only is marked as such; the group should decide whether this data collection should be taken up.

* At present, soil particles or material above 2 mm is not considered as soil, since these particles do not conform to the soil physical properties of the fraction below 2 mm. To determine soil characteristics such as field capacity and bulk density, this fraction is ignored (sieved out). In rainfed farms of the Pavagada region, the soils usually consist of about 20-80 percent (by weight) of soil particles and pebbles above 2 mm. Farmers consider this fraction highly useful for conserving soil moisture. Similar soil types exist in large tracts of semi-arid regions the world over (Dr Takur, ICRISAT). The behavior of soil moisture in these soils is not well understood (this should be confirmed by a quick literature survey), as most theories and models cannot be applied to these conditions. Sensor measurements may provide a very valuable understanding.

** In coconut orchards of this region, large quantities of dew form on the tree canopy from October to February. Experts feel it is necessary to measure temperature and humidity at tree-canopy and ground level to understand this phenomenon.

Ref.: CS-HLD-042907


Parameter: Soil moisture
- Current uses: measure soil parameters for use as inputs to the models, and validate the soil moisture components of the HEURISTIC, APSIM and CROPGRO models (refer to Gadgil et al. 2002 for CROPGRO and P.R.S. Rao et al. 2004 for APSIM); validate the values used in the HEURISTIC model for farming operations and pests and diseases (refer to Gadgil et al. 1999); assess and correlate farmers' estimates of soil moisture with actual measurements, and develop benchmarks for use; work with farmers and schools to evolve correlations between rainfall events and soil moisture in different types of soil, crop growth, runoff, and storage of runoff in tanks.
- Potential uses: measure soil moisture in pebble-dominant soils and compare with predictions by the soil moisture model*; evolve suitable modifications to the soil moisture models for simulating moisture in such soils; data as input to the APSIM, CROPGRO and HEURISTIC models, and validation.
- Density of deployment: 5-8 per 10 ha
- Measurements at site: 0-15, 15-30 and 30-45 cm below the surface
- Frequency of reporting: once every 15 minutes during rain, otherwise hourly; near the 15-bar level of soil moisture (PWP), reporting can be stopped.

Parameter: Temperature
- Current uses: data used to establish the relationship between increases in the population of pests and pathogens, as well as the occurrence of physiological disorders (refer to Table 4 of the user needs report).
- Potential uses: measurements at coconut tree tops will help correlate with dew formation**.
- Density of deployment: 1 per 10 ha in open fields, 2-10 per 10 ha in areas with trees and in orchards (3-4 per 10 ha at tree tops)
- Measurements at site: ground level; ground level and tree top in orchards
- Frequency of reporting: once every 15 minutes from 6 pm to 9 am, otherwise hourly

Parameter: Air humidity
- Current uses: data used to establish the relationship between increases in the population of pests and pathogens, as well as the occurrence of physiological disorders (refer to Table 4 of the user needs report).
- Density of deployment: 1 per 10 ha in open fields, 2-10 per 10 ha in areas with trees and in orchards
- Measurements at site: ground level for field crops, plus tree tops in orchards
- Frequency of reporting: once every 15 minutes from 6 pm to 9 am, otherwise hourly

Parameter: Rainfall
- Current uses: validate the HEURISTIC model of pests and pathogens affected by rainfall.
- Density of deployment: 1 per 500 ha
- Frequency of reporting: once every 15 minutes during a rain event



2.3 Deployment scenario

The COMMON-Sense system is deployed in the field area. Probes, each attached to a sensor node (possibly more than one per node), take the required measurements, store them locally and send them in a multihop fashion to the base station, connected to a PC, which stores and processes them. For the test phase, the system will be under scrutiny by a number of technicians and scientists: a communication-systems engineer, a hydrologist and an agronomist will make sure that the system works properly and that the data collected are consistent and usable.

First, the field personnel deploy the nodes and the base station. The deployment field is roughly a square of 10 hectares (about 300 m x 300 m), located on the IISc campus. If the area is too large, radio transmission power will be limited in order to force multihopping over a reduced area. The first network to be deployed will be of limited size:
- a total of 15 MICA2 motes (MPR410-CB);
- 1 programming board (MIB510CA);
- 8 soil-moisture nodes with 3 ECH2O-20 probes each, organized in a rough grid with an average dispersion of 100 meters; for those nodes, an additional data acquisition board (MDA300CA) is needed;
- 6 temperature/air-humidity nodes at the center and at the limits of the grid (sensor board MTS400);
- 1 node with a rain gage (CEDT proprietary).

For field deployments, it is unlikely that the base station can be connected directly to a PC over a wire, whereas this solution is feasible for the CEDT deployment. A solution consisting of a Wi-Fi connection or a mobile base station (see [Luo 2005]) will be investigated. Once the system is installed in the field, the network self-organizes into a tree topology reporting back to the sink (base station) located on the CEDT premises. From then on, data collection and processing are automated and require only a computer-literate operator.

The sink is connected to a PC comprising a database, into which data are written as they are collected and from which they are extracted for display or processing. A first data-processing subsystem should be implemented in order to make use of the data collected; it is designed in accordance with one or more of the HEURISTIC, APSIM and CROPGRO models. From the PC, it is possible to draw a view of the network topology and to act on nodes (power level, sampling frequency, sleep/wake-up mode), either individually or network-wide.

2.4 Major challenges

The whole project is based on the assumption that the cost of sensor motes will fall drastically during the project. This is not expected to be the case for some of the sensors we propose to use; we may need to develop low-cost pragmatic sensors, as mentioned above.


Our system is very static. Once the sensors have been put in place, the topology should not change, except for units going down due to accidental destruction, power outage or breakdown. Many of the features required for ad hoc mobile networks (fast automatic route updates, topology discovery, etc.) are not highly relevant to our situation.

As it stands today, motes have a very limited transmission range. This makes it very unlikely that we can transmit directly from the sensor network to a central server (PC), as the latter will have to be housed in a safe and convenient location (private residence). We therefore need to look at the possibility of a mixed network, with a base station that is part of the sensor network, located in the field but with access to a power source. This base station will in turn relay the data to the central server through another means, such as Wi-Fi.

The remaining challenges are:
- Develop internal expertise in low-cost sensor development.
- Adaptive rate at every node: the rate adaptation is centrally driven, but every node can raise an alarm when it notices that the data variation increases significantly (e.g., soil moisture during rain).
- Increased range (up to 200 m): new antenna design.
- Accuracy of readings: needs to be tested for air humidity and the rain gage; the soil moisture data also need to be validated.
- Integration of new sensors on the probes (air humidity, rain gage): is an MDA card necessary in this case? It would significantly increase the price of the system.
- Packaging: rough environment, cable protection, etc.
- Data processing subsystem: the functionalities are still to be defined precisely, and the integration remains to be done on the server side.
- Battery life: the network lifetime can still be optimized.

2.5 Functional requirements

Below are listed the use-cases at the system level.

2.5.1 Set-up phase

- The network creates the routing tables based on a tree topology. The algorithm followed is the traditional LEPS of MultiHopRouter (see 4.4.3).
- Every node detects which probes are active and which are not. For this, the ADCs used are specified for each parameter (see 4.2).
- The duty cycle is initialized for optimal power consumption (use of B-MAC primitives).
- Sensing begins with a default sampling frequency.

2.5.2 Normal operation phase

Nodes take measurements at adaptive rates depending on how fast the parameter values change. This sampling rate is ten times the rate of transmission back to the base station. When the rate of change at a node increases beyond a threshold, the node informs the base station, which automatically broadcasts a command with the new transmission rate to the network (thus implicitly also increasing the sampling rate). For the sake of simplicity, nodes do not adapt their sampling and transmission frequencies independently; this is done in a centralized way. For the same reason, all parameters keep the same sampling rate.
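The node-side half of this mechanism can be sketched as follows. This is an illustrative stand-alone version, not the mote code itself; the names, the hourly default, and the 10% threshold (stated in 4.4.2) are taken from this document, everything else is an assumption.

```c
#include <stdint.h>
#include <stdbool.h>

#define RATE_CHANGE_THRESHOLD_PCT 10u   /* deviation threshold, see 4.4.2 */

static uint32_t tx_interval_ms = 3600000; /* default transmission rate: hourly */

/* Sampling runs ten times faster than transmission back to the base station. */
static uint32_t sampling_interval_ms(void) {
    return tx_interval_ms / 10;
}

/* True when the relative change between two consecutive readings exceeds
 * the threshold, i.e. the node should alert the base station so that it
 * can broadcast a new network-wide transmission rate. */
static bool rate_alarm_needed(uint16_t prev, uint16_t cur) {
    if (prev == 0)
        return false;
    uint32_t delta = (prev > cur) ? (uint32_t)(prev - cur)
                                  : (uint32_t)(cur - prev);
    return (delta * 100u) / prev > RATE_CHANGE_THRESHOLD_PCT;
}
```

With the hourly default, the node samples every 6 minutes; when the base station broadcasts a faster rate, the sampling rate follows implicitly, which is the centralized behavior described above.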


The default transmission rate is one message per hour for every parameter: soil moisture, temperature, precipitation, air humidity and solar radiation (light).
- Optional: nodes store the data with a message counter and the sampling frequency at the time.
- Nodes send the data with their node id in a multihop fashion to the base station. No time stamp is added before emission, since the base station adds the time stamp upon arrival.
- The base station transmits the data to the application sitting on the PC (front-end).
- Results of commands are logged into a database. The results in the database can be retrieved for display by the web server, or for processing by the data processing subsystem.
- Nodes can be added, removed and relocated without further configuration. A route update can be requested from the web interface.
- Parameters such as transmission power and sampling frequency can be modified from the web server and downloaded to the network. Commands are sent to the nodes as a broadcast.
- Nodes periodically inform the front-end of their health status (battery voltage, Received Signal Strength Indicator (RSSI) with parent, and/or bit or packet error rate).

Observer initiated and event driven data models are the best suited to the COMMON-Sense network.

3 Hardware System design

3.1 Central server
Minimum requirement: a Linux PC running Red Hat 9 or Fedora Core 1, with TinyOS 1.1.7 installed. Since the sink is directly connected to the power grid and doesn't suffer from the same power constraints as the other nodes, the possibilities for increasing the range of broadcast transmissions (in order to decrease the need for retransmissions throughout the network) should be investigated. Two possibilities can be explored here: the use of directional antennas, and an increase of transmission power (legal issues may come into the picture here).

3.2 Sensor Nodes

The sensor platform used for the first test deployment will be the MICA2 motes from Crossbow, the version running at 433 MHz. For the first field deployment, 15 motes are needed as sensor nodes, and one as the base station; 4 more motes will be purchased as backup. The base station will be connected to the network's central unit via a Crossbow MIB510 Programming and Serial Interface Board. In order to improve radio range, Linx antennas will be adapted to the motes; the exact part number is ANT-433-PW-QW. An MMCX connector is also needed (Hirose part number MMCX-J-178B/U). Ideally, a square copper ground plane sized to the wavelength (about 33 cm on a side) should be fitted to the base of the antenna. Once assembled, the antenna will look as in the following figure:

Figure 1 mica2 antenna assembly

3.3 Sensing Probes

- Temperature and air humidity probes: Crossbow MTS400 ($288)
- Soil moisture probes: Decagon ECH2O probes ($150)
- Rain gage: developed in-house at CEDT
- Data acquisition boards: necessary for soil moisture readings; MDA300 ($316)

The integration of the probes with the MICA2 wireless sensor still needs to be solved for the rain gage. This is the object of another design document and is beyond the scope of the present one. For the other probes, the hardware and software tools already exist. All the probes to be used will be tested for their ability to record data with a standard deviation of 5% (temperature, humidity, soil moisture). This number is somewhat arbitrary at the moment, but will be used by default until we get more precise requirements from the hydrologists or agronomists involved in the project (HYDRAM, CAOS). However, ways to avoid using the MDA300 data acquisition board should also be investigated, since it is a very expensive component. This too is the object of an upcoming design document.

3.4 Packaging (to be completed with the help of CEDT and/or HYDRAM)
The packaging must be resistant to heavy rain and to high temperature and humidity. It must also be protected against unintentional human damage linked to agricultural activity, and against animal damage (e.g., by using tubes to conceal the probes' cables). Guidelines for the package are:
- a waterproof box with small apertures to ensure ventilation of the sensor node (against temperature rise and condensation), placed so that they prevent rain from entering the box;
- a tube to protect the cables of the sensors deployed underground.



The case of the sensors that are integrated on a sensor board directly attached to a mote still needs to be discussed. In particular, how is it possible to protect a sensor without affecting the physical data it is collecting (e.g., temperature or air humidity)?



4 Software System design

This design is inspired by the TASK application design, and by the sensorscope application developed at EPFL in the context of MICS (in collaboration with COMMON-Sense Net) in August-October 2004 [Schmid 2004].

4.1 Block diagram

Figure 2 Network Scheme

4.2 Message structure

Here are a few types useful to identify the type of message that is being sent:




The sensor flags are used to discriminate between the different types of data messages (CSN_TYPE_SENSORREADING in the previous enum). Since they can overlap, we define them as bit arrays rather than as incrementing integers. Note that each type of data (or sensor) corresponds to one fixed ADC port; this is used during the set-up phase, when the node detects which ports are active and which are not, in order to determine what data to send. The rule is that the SENSOR_BIT defined hereafter is 2 raised to the power ADC_PORT_NUMBER:
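The rule above can be illustrated as follows. The port numbers and flag names here are hypothetical (the project's actual definitions were not reproduced in this copy of the document); only the bit rule itself, SENSOR_BIT = 2^ADC_PORT_NUMBER, comes from the text.

```c
#include <stdint.h>

/* Hypothetical ADC port assignment, one fixed port per sensor type. */
enum {
    ADC_PORT_TEMP  = 0,
    ADC_PORT_HUMID = 1,
    ADC_PORT_MOIST = 2,
    ADC_PORT_RAIN  = 3,
};

/* Each sensor's flag bit is 2 raised to its ADC port number, so flags
 * for several co-resident sensors can simply be OR-ed together. */
#define SENSOR_BIT(port) ((uint16_t)(1u << (port)))

/* Build the sensorflags field from the set of active ADC ports,
 * as the node would do during the set-up phase. */
static uint16_t flags_from_ports(const uint8_t *ports, int n) {
    uint16_t flags = 0;
    for (int i = 0; i < n; i++)
        flags |= SENSOR_BIT(ports[i]);
    return flags;
}
```

A node with only the temperature and soil-moisture ports active would thus report sensorflags = 0x0005.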

Some alarms can be raised by the sensor node and notified to the server via the base station:

The data message structure is as follows:

typedef struct CSNMsg {
  uint8_t type;
  uint8_t sensorflags;
  uint16_t temp;
  uint16_t humid1;
  uint16_t humid2;
  uint16_t moist1;
  uint16_t moist2;
  uint16_t rain;
  uint16_t voltage;
} __attribute__ ((packed)) CSNMsg;
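Because the struct is declared packed, its wire size is exactly the sum of its fields (2 x uint8_t + 7 x uint16_t = 16 bytes), which the receiving side can check before decoding. The struct below is copied from the definition above; the length-check helper is an illustrative assumption, not part of the project's code.

```c
#include <stdint.h>
#include <stddef.h>

/* Data message as defined in this section (GCC packed attribute). */
typedef struct CSNMsg {
    uint8_t  type;
    uint8_t  sensorflags;
    uint16_t temp;
    uint16_t humid1;
    uint16_t humid2;
    uint16_t moist1;
    uint16_t moist2;
    uint16_t rain;
    uint16_t voltage;
} __attribute__ ((packed)) CSNMsg;

/* Hypothetical receiver-side sanity check: a payload is decodable
 * only if its length matches the packed struct size exactly. */
static int payload_len_ok(size_t len) {
    return len == sizeof(CSNMsg);
}
```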

The command messages (from the base station) have the following structure:
typedef struct CSNCmdMsg {
  uint8_t type;
  uint16_t addr;
  uint32_t newvalue;
} __attribute__ ((packed)) CSNCmdMsg;

The alarm message (to the base station) has the following structure:
typedef struct CSNAlarmMsg {
  uint16_t type;
  uint16_t addr;
  uint32_t value;
} __attribute__ ((packed)) CSNAlarmMsg;

Finally, we define a health-status message used to update the server on the motes' condition:
typedef struct CSNHealthStatusMsg {
  uint16_t type;
  uint16_t addr;
  uint16_t parentAddr;
  uint32_t voltage;
  uint32_t parentRSSI;
} __attribute__ ((packed)) CSNHealthStatusMsg;

4.3 Server specification

The main program on the server is GeneralDataLogger, which acts as a central dispatcher and starts the different subsystems. The software components are listed in bold hereafter, and their APIs are described:
1. IO Subsystem: GeneralDataLogger creates a MoteIF (library class) for the connection with the Serial Forwarder, a CommandServer to handle the commands received from GUI clients, and an ICMonNetwork to communicate with the sensor network via the MoteIF.
2. Data Processing Subsystem: to be defined.
3. Query Subsystem for access to the database: creates a DBLogger.
4. NMS: ICMonQuery, which handles the generation and reception of the commands sent to the motes (either individually or network-wide).

4.3.1 IO Subsystem
This subsystem is connected to the Serial Forwarder via a MoteIF. ICMonNetwork handles the reception of messages from the motes and the emission of commands from the NMS. It uses query results to dispatch data to the DBLogger, and sends to the CommandConnection objects the results of commands that need to be displayed on the corresponding terminal. The time stamping of incoming messages is done at this point. CommandConnection handles the commands sent by a client to the sensor network; each CommandConnection is associated with a different communication socket.

CommandServer spawns a new CommandConnection process for each new command to be sent to the motes. The CommandConnection waits until the expected response is received from the sensor network. The CommandServer is typically invoked by the web interface (createSocket command in the PHP code) and returns results to the client it received the command from.

ICMonNetwork

Listens to the Serial Forwarder and dispatches packets.
- messageReceived(int addr, Message m): called when a message is received from the Serial Forwarder. Typically formats a query result and sends it to the DBLogger to be inserted into the database. It may also send the result back to the active CommandConnections for display.
- sendQuery(ICMonQuery q): used to send a query out over the radio.

CommandConnection (Thread)

Is created with an ICMonNetwork as parameter on a particular socket.
- run(): opens a BufferedReader to process an input stream of commands (StringTokenizer), and connects to ICMonNetwork in order to send the commands (ICMonNetwork.sendQuery()).
- messageReceived(int addr, Message m): relays back to the web interface the responses to commands received from the sensor network.

CommandServer (Thread)

Listens on a server socket for client connections and opens a new socket for every client connection.

4.3.2 Data Processing Subsystem

This is the subsystem responsible for the analysis of the incoming data. This subsystem defines and raises alarms, and triggers actions to be taken when the system reaches some threshold conditions. This subsystem still needs to be defined precisely based on input by S. Rao (purpose and description of the model to use: HEURISTIC, CROPGRO etc.).

4.3.3 NMS
ICMonQuery formats the queries to the motes and waits for the query results to send them to the DBLogger.

4.3.4 DataBase subsystem

The database is PostgreSQL. The logged tables, with their parameters, are listed below:

-- table to log sensor data
create table icmon_log_queries (
  id SERIAL PRIMARY KEY,
  table_name VARCHAR(32) NOT NULL,
  time TIMESTAMP NOT NULL,
  query VARCHAR(255)
);

-- table to log command messages
create table icmon_log_cmd_msg (
  id SERIAL PRIMARY KEY,
  seqno INT NOT NULL,
  time TIMESTAMP NOT NULL,
  cmd VARCHAR(255) NOT NULL,
  addr INT,
  value INT
);

-- table to store information about motes
create table icmon_mote_info (
  nodeid INT PRIMARY KEY,
  location_x INT,
  location_y INT,
  type VARCHAR(30),
  location VARCHAR(255),
  comment VARCHAR(1000),
  lastbatchange TIMESTAMP
);

-- table to store parent statistics
create table icmon_routing_parent (
  id SERIAL PRIMARY KEY,
  time TIMESTAMP NOT NULL,
  nodeid INT REFERENCES icmon_mote_info ON DELETE CASCADE ON UPDATE CASCADE,
  parent INT NOT NULL,
  quality INT DEFAULT -1,
  depth INT DEFAULT -1,
  occupancy INT DEFAULT -1
);

4.3.5 Queries Subsystem

The DBLogger logs the results of the queries to the database. It communicates with the IO subsystem and with the NMS through a Listener.
- initDBConn(): initializes the connection to the Database Subsystem.
- logQuery(ICMonQuery queryToLog): does the setup work for starting to log a query.
- createTableStmt(ICMonQuery query, String tableName): returns a query string that can create a table.
- insertStmt(QueryResult qr, String tableName): returns a query string that can insert a new element into the DB.

4.3.6 Web GUI Subsystem

The GUI subsystem interacts with the Data Processing and NMS subsystems to display data and return feedback from the user/administrator. For the sake of simplicity, the first web interface is written in PHP. Commands are sent to the CommandServer through a socket.

4.3.7 Tools
ICMonQuery: formats command messages before sending them through the Serial Forwarder. Its methods are of the form getXXXMsg() and setXXXMsg(), formatting the right type of message. The output is always a message structure of type CSNCmdMsg with the right parameters filled in (be it rate, transmission power, etc.).

4.4 Embedded modules specification

We make two preliminary remarks here. First, in order to facilitate deployment and maintenance of the network, one unique code image is uploaded on every mote: each sensor node is responsible for determining which ports are active, in order to infer what data are to be sampled and sent.

Second, it is important not to use the LEDs for any purpose other than testing and debugging, since they use a significant amount of energy when powered on.

4.4.1 Data Processing Module: contains the code to operate the different sensors connected to the node, retrieve the data and send them in a multihop fashion back to the sink. This module is started from and simply implements the following interface:
interface CSNSense {
  // Change the interval between sensor readings.
  command void setInterval(uint32_t interval);
  command result_t start(); // Idempotent
  command result_t stop(); // Idempotent
  command uint16_t getSensorFlags();
  command void setSensorFlags(uint16_t sf);
}

The getSensorFlags() and setSensorFlags() methods are used to enable or disable the relevant sensor probes (see paragraph 4.2 for more details). The main elements of CSNSense are:
- SensingTimer: the timer that triggers a new set of readings from the probes attached to the node. This timer is configurable.
- processData(): called by SensingTimer.fired(); operates the different probes.
- xxxSensor.dataReady(): called when the data for sensor xxx is ready. The data is assigned to a local variable (array of data), and when all new data are available, they are sent back to the base station.
- dataSendTask(): the task called to effectively send the data back to the base station.
- dataAverage(): computes the average of the last samples before discarding them. This is the only data processing done at the mote level at the moment. In a further release, more sophisticated processing may be included, such as differential coding (where only the difference between successive samples is sent), aggregation, or more complex averaging methods to be defined.
- checkRate(): called in xxxSensor.dataReady() to check the rate of change of the data readings. To do so, it calls the SensorManager, whose interface it also uses. Any deviation of more than 10% in the data readings will be reported to the base station.

Whenever possible, a SensorControl interface will be attached to each probe in order to optimize its power usage.
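The averaging step can be sketched as follows. This is a minimal stand-alone illustration of the arithmetic behind dataAverage(), not the mote code itself; the function name and the sample count are assumptions (the count follows from sampling at ten times the transmission rate).

```c
#include <stdint.h>

#define SAMPLES_PER_REPORT 10  /* sampling runs at 10x the transmission rate */

/* Average the last n raw samples; the node transmits this mean and
 * then discards the samples, as described for dataAverage() above.
 * 32-bit accumulation avoids overflow of the uint16_t readings. */
static uint16_t data_average(const uint16_t *samples, int n) {
    uint32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return (uint16_t)(sum / (uint32_t)n);
}
```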



4.4.2 SensorManager: this component is used to detect which probes are active at power-up, and then to inspect the sensors periodically (typically at every sampling, or before sending a packet) in order to determine when to raise an alarm. It provides the following interface:
interface CSNManager {
  command result_t start(); // Idempotent
  command result_t stop(); // Idempotent
  command uint16_t getSensorActivity();
  command void setSensorActivity(uint16_t sensorBA);
  command result_t checkVariationRate(uint8_t sensor);
  async event void setAlarm(uint8_t type);
  async event void clearAlarm(uint8_t type);
}

- getSensorActivity(): verifies which sensors are active; returns a bit array with the active sensors.
- setSensorActivity(): sets the sensor activity based on a bit array.
- checkVariationRate(): checks whether the last samples varied significantly and returns the standard deviation. If the deviation is above a threshold of 10%, it signals an event back to the module using the interface, in order to trigger the emission of an alarm message.

The SensorManager also maintains an array of active and inactive alarms.
- setAlarm(): signals an alarm.
- clearAlarm(): signals an alarm clearance.
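The variation check can be sketched with integer arithmetic only, which matters on a mote: instead of computing the standard deviation with a square root, one can compare the variance against the square of 10% of the mean. The name and the rearranged inequality are illustrative assumptions; only the 10%-deviation criterion comes from the text.

```c
#include <stdint.h>
#include <stdbool.h>

/* Returns true when the standard deviation of the last n samples
 * exceeds 10% of their mean, i.e. an alarm should be signalled.
 *
 * With sum = sum of samples and sumsq = sum of squares:
 *   variance * n^2 = n*sumsq - sum^2
 *   (0.1*mean)^2 * n^2 = (sum/10)^2
 * so comparing these two sides avoids both sqrt and division by n. */
static bool variation_exceeds_10pct(const uint16_t *s, int n) {
    int64_t sum = 0, sumsq = 0;
    for (int i = 0; i < n; i++) {
        sum += s[i];
        sumsq += (int64_t)s[i] * s[i];
    }
    int64_t var_n2 = (int64_t)n * sumsq - sum * sum;
    int64_t thr_n2 = (sum / 10) * (sum / 10);
    return var_n2 > thr_n2;
}
```

Constant readings (zero deviation) never trigger the alarm; a swing such as 50 then 150 (deviation 50, mean 100) does.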

4.4.3 Message Handler

Handles the commands received from the base station. This module is included in, the main program. The routing algorithm used in the first release is the default LEPS, as used in the default multihop router of TinyOS (see [CEDT 2004] for more details).
- Bcast.receive(TOS_MsgPtr pMsg, void* payload, uint16_t payloadLen): the point of entry of incoming command messages.
- processCommand(ICMonCmdMsg* pCmdMsg): based on the type and contents of the command message received, initiates the correct response.
- SendSpecificDataTask(): handles the transmission of data back to the server.
- SendAlarmTask(): handles the transmission of alarms back to the server.
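A dispatch skeleton for the processCommand() step might look like the following. The command-type constants, handler bodies and global names are assumptions for illustration; only the CSNCmdMsg layout and the set of tunable parameters (rate, RF power, duty cycle, see 4.4.4) come from this document.

```c
#include <stdint.h>

/* Hypothetical command-type values; the project's actual enum was not
 * reproduced in this copy of the document. */
enum {
    CSN_CMD_SET_RATE     = 1,
    CSN_CMD_SET_RF_POWER = 2,
    CSN_CMD_SET_DUTY     = 3,
};

/* Command message as defined in section 4.2. */
typedef struct CSNCmdMsg {
    uint8_t  type;
    uint16_t addr;
    uint32_t newvalue;
} __attribute__ ((packed)) CSNCmdMsg;

static uint32_t tx_rate, rf_power, duty_cycle; /* node-local settings */

/* Dispatch on the command type; returns 0 on success,
 * -1 for an unknown command. */
static int process_command(const CSNCmdMsg *cmd) {
    switch (cmd->type) {
    case CSN_CMD_SET_RATE:     tx_rate    = cmd->newvalue; return 0;
    case CSN_CMD_SET_RF_POWER: rf_power   = cmd->newvalue; return 0;
    case CSN_CMD_SET_DUTY:     duty_cycle = cmd->newvalue; return 0;
    default:                   return -1;
    }
}
```

Since commands are broadcast (section 2.5.2), a real implementation would also check cmd->addr against the node's own id or a broadcast address before acting.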

4.4.4 Element Manager

Contained in It handles:
- duty cycle (sleep/wake-up periods): setDutyCycle(uint8_t cycle)
- RF power: setRFPower(uint16_t newPower)
- sensor activation/deactivation: setSensorFlags(), defined above in
- collection of alarms from the data processing module: handles the alarm and calls SendAlarmTask() to report back to the server.

We do not use any synchronization between the different nodes of the network. The handling of the duty cycles relies on the preamble sampling of the B-MAC protocol (see [CEDT 2004] for more details).

4.5 Data processing subsystem

We still need to define the API. This requires input from S. Rao, to get the specifications of the crop models.

5 November 2004 - January 2005 milestones

All the following points are to be discussed and scheduled between the project partners. The assignment of tasks indicated in parentheses is only the author's suggestion. The staff involved at the moment is the following: CAOS: S. Rao; CEDT: Vinay, Ashwath, Prabhakar; HYDRAM: Colin Schenk; LCA: Jacques Panchard, 3 students working on a semester project.
- November 7: first version of LLD submitted for review (LCA)
- November 15: feedback to the author (CEDT, HYDRAM, CAOS)
- November 20: submission of final version of LLD, with task attributions (LCA)
- November 15: crop model specifications (CAOS)
- November 30: first version of embedded code (EPFL)
- November 30: integration of rain gage (CEDT)
- November 30: tests on ECH2O with MDA300 (HYDRAM)
- December 24: first version of server-side application (LCA)
- December 24: integration of air humidity and temperature probes (LCA, HYDRAM)
- December 24: tests on range and antennas (CEDT): antennas on motes, base station antenna (single-hop broadcast)
- December 24: proposal of an alternative design to the use of the MDA300 data acquisition card
- December 24: packaging (HYDRAM, CEDT)
- December 24: IISc test bed definition (CEDT, CAOS)
- January 15: unit test results (all)
- January 31: integration of the different parts (hardware and software)
- January 31: crop model implementation
- February 11: deployment at IISc

6 Predeployment Tests
- Graceful degradation
- Temperature
- Lifetime


7 Bibliography
[Rao 2004] P.R. Seshagiri Rao, Madhav Gadgil, Ramakrishnappa, M. Gangadhar, Report of User Requirement Survey, CES, CAOS, IISc, Bangalore, 2004.

[Schmid 2004] Thomas Schmid, Sensorscope, MICS Summer Internship, EPFL, 2004.

[Panchard 2004] Jacques Panchard, COMMON-Sense Net System Requirements and High-Level Design, EPFL, 2004.

[CEDT 2004] CEDT, COMMON-Sense Net, Working Draft for MICA2 Motes (Phase 1), Technical Report, November 17, 2004.
