Pennies From Heaven: a retrospective on the use of wireless sensor networks for planetary exploration

Robert Newman, Mohammad Hammoudeh
School of Computing and IT, University of Wolverhampton, Wolverhampton, UK
R.M.Newman@coventry.ac.uk

Abstract—Wireless sensor networks are finding many applications in terrestrial sensing, and it seems natural to propose their use for planetary exploration. A previous study (the Mars daisy) put forward a scenario using thousands of millimetre scale wireless sensor nodes to undertake a complete survey of an area of a planet. This paper revisits that scenario in the light of some of the discussions surrounding its presentation. The practicality of some of the ideas put forward is examined again, and an updated design is sketched out. It is concluded that the updated design could be produced using currently available technology.

Keywords: wireless sensor network, planetary exploration, autonomous systems.

I. INTRODUCTION

This paper is a further visit to a concept which received some attention two years ago. The concept was a case study of the exploration of Mars using a wireless sensor network in place of more conventional probes such as landers and rovers. Each probe was in itself a ‘nano-lander’, scaled somewhat smaller than any probe proposed before, at the scale of millimetres. It was proposed that, acting together, a host of these landers could perform many of the functions traditionally associated with rovers, and cover a larger area in more detail than a rover could. It should be stated that the proposers are computer scientists, not planetary explorers, and many of the assumptions made were naive. Nonetheless, the scenario did create some resonance amongst real planetary scientists. This paper represents an attempt to revisit the design of the scenario and move a step closer to practicality. It is organised as follows. In the next section, the original idea is reprised. Section III discusses the perceived advantages and disadvantages of a mission based on wireless sensor networks. Sections IV to VIII discuss in more detail the various instruments that might be expected to be deployed on such a probe. Section IX discusses some issues of wireless sensor network design in this context. Section X proposes the sketch of a revised design, more a ‘penny’ than a ‘daisy’.

II. THE MARS DAISY

Figure 1: The original Mars daisy
The Mars daisy (Figure 1) was a speculative nanoprobe first presented in 2005 [1,2]. Its origins were as part of the storyline for a short film designed to present the potential of wireless sensor networks in a way that would capture the imagination of teenagers. The film presented a planetary exploration mission to Mars. Rather than a conventional lander, the mission used a number of nano-landers (9000 20g probes would have the same mass as the Spirit and Opportunity rovers), each 40mm long by 7mm in diameter. The nano-landers would be deployed within atmospheric entry vehicles and released at low altitude, from where they scattered and became impaled in the regolith using a spike containing the major mass (the battery). The probe design was envisaged to be based around a stack of chips, forming an opto-electro-mechanical system. In addition to a processor and memory, additional chips provided power regulation, optical sensing, chemical analysis, imaging, seismography and communication, both between probes and with a relay satellite. The precise function of the chips in the stack is described below.

• A 7W semiconductor laser provides communications functions. Optical communication was selected in order to provide a sufficiently narrow beam width to allow communication with the satellite using the very small (40mm) antenna aperture dictated by the tiny size of the probe. The laser chip also provided optical energisation for the instruments on a chip, as well as energy for vapourisation of soil samples.
• A ‘lab on a chip’, containing a number of instruments, mostly operating on the Fabry-Perot principle. These instruments included a chromatograph for chemical analysis and accelerometers for seismic analysis.
• An optical sensor chip, which included image sensors (cameras) to capture the images formed by the lenses in the ‘insect eye’ at the top of the daisy. This chip also provided sensing for the Fabry-Perot instruments and the receivers for the optical communication.
• A power management chip, which would provide power management functions as well as driver circuitry for the actuators, mostly steering mirrors supporting the various optical functions.
• The ‘petals’ of the daisy, the other major component. This was a multipurpose component, serving as a steerable antenna, a solar concentrator and a movable aerodynamic surface during the descent.

After the daisy’s first appearance in the film, certain aspects of its design became detailed, and a mechanism for packing and disseminating the daisies via a tape was devised, which also provided for their charging and programming before delivery into the Martian atmosphere. The scenario also provided a case study for the design of autonomous networked sensor systems, including protocols, information retrieval mechanisms and methods for constructing maps from fields of sensors, such as the daisies. The systems work described above differed markedly from most other nano-probe studies, in that it envisaged a flat, peer-to-peer network, as opposed to a lander which operated as a base station, used in combination with a swarm of nano-landers. The major reason for this emphasis was the direction taken in terrestrial sensor networks, which is heading towards a vision of global, ad hoc sensor webs, perhaps best exemplified by Nokia’s ‘Sensor Planet’ [3]. At the hardware level, this had the disadvantage of requiring every node to communicate with the host satellite. The advantage is a much more robust and adaptable system, which is not vulnerable to loss of a single specialised resource, such as a ground station. Since it was never intended as a serious contribution to the canon of planetary exploration, little further work was done on the daisy itself, although a brief feasibility study was published. The conclusion was that some parts were feasible, at least in operating principle, and some were not. It appeared that such a probe would have sufficient processing and memory to undertake the task required, that the power budget balanced, and that successful implementations of the instruments envisaged had been reported. However, the details of how the actuators for the petals would work, and how regolith samples would find their way from the spike to the chromatograph, were not resolved. In particular, the petal design called for selective application of ‘unobtainium’.

III. SENSOR NETWORKS VERSUS CONVENTIONAL PROBES

Although much of the discussion around the original daisy centred on the design of the individual probe, the core concept is that of a sensor network as a planetary exploration instrument. Naturally, the previous work emphasised the advantages of the approach, the major ones of which were seen to be:

• A mission with much greater robustness than one using a rover, due to the massive redundancy of individual micro probes.
• Coverage of a much larger area than a rover-based mission for the same payload.

Discussion resulting from the publication of the previous papers, some with people with experience of planetary exploration, brought to light a number of ways in which a sensor network would be inferior to traditional instruments, namely:

• Because of the tiny size of the probes, the function of the instruments carried was severely limited.
• The probes were fixed, and so could not investigate any site other than the very immediate vicinity of the landing point. Conventional surface probes can be equipped with tools to allow them to take deep core samples, or drill into rocks which have been selected as being of interest. By contrast, a nano-probe is restricted to surface sampling where it lands.

IV. GEOGRAPHICAL MAPPING

The geographic mapping function of the daisy network was in fact the place where the thought experiment started. At the time, sensor node localisation was one of the main research themes of the group concerned, and the question arose as to whether localisation information from a sensor network could be used to make a detailed 3-D map of the terrain on which the sensors were placed. Since the original aim of the scenario was popular science dissemination, a highly visual application was desired, the one selected being the use of 3-D maps derived from the nodes’ localisation to support a virtual presence application, along with use of the sensed data to provide environmental feedback (in the real world, the virtual presence scenario would be precluded by the 4-20 minute transmission delay between Earth and Mars). The importance of sensor localisation lies in the mapping of sensed data: it is obviously imperative to know the location within the map of an individual sensor’s data point. In terrestrial applications, it is not always possible to determine location easily. For most existing large sensor systems, sensor locations are determined as a result of a site survey. This is expensive,

and often problematic, since the area under investigation is not always accessible. While GPS is a possible solution to localising individual sensors, it has its problems. Firstly, it is sensitive to many environmental problems, operation under tree canopies being one. Secondly, the addition of a complete GPS receiver to each sensor node, simply for localisation, is likely to add substantially to the cost of the mission, since each extra item of node cost is multiplied several thousand times. For this reason, a strong trend in sensor network localisation research has been the use of the existing radio communication resources for localisation, using a variety of techniques including time of flight, relative phase and signal strength. For extraterrestrial exploration, it is likely that systems will be provided within the overall mission for surface mapping, such as the use of synthetic aperture radar (SAR) by an orbiter. This can provide surface mapping to a resolution of a few metres. Against this, trilateration localisation techniques operating in the microwave transmission region can achieve resolutions of a few centimetres, but operate only where the sensor nodes are positioned. The end result is a rather sparse map of very precisely located points. One opportunity is the use of multi-modal systems, in which the location of the sensor nodes is used to add precision to a SAR map. Precisely localised nodes, equipped with cameras, could also precisely locate features on the surface of a planet, using optical triangulation. When used in combination, these techniques do provide an opportunity for very precise surface mapping, if that is required.

VI. REGOLITH CHEMISTRY

Regolith chemistry, in some form, is one of the primary investigations carried out by most planetary exploration vehicles.
Since publication of the original work, the authors have gained practical experience of designing sensor networks for the investigation of soil chemistry, but unfortunately this does not provide a good guide as to what might be done with an extraterrestrial probe. Firstly, for reasons of timeliness and cost, such devices are constrained to use off-the-shelf instrumentation. The most commonly available low cost devices depend on electrochemistry, and sense chemicals in aqueous solution. Typically, these networks use ion sensitive electrodes to sense particular analytes. In the absence of water, the mechanical arrangements for conveying the sample to the sensor become considerably more complex. The scenario for the Mars daisy envisaged the use of the landing impact to force a small sample into a tube in the spike of the probe, where it would be vapourised and ionised using the laser and analysed using a MEMS scanning Fabry-Perot spectrometer integrated onto the instrumentation chip. All the parts of this scenario have been reported, but a complete instrument has not been designed, still less produced and evaluated, so it is difficult to say with any confidence that such an approach is feasible. Mass spectrometry is commonly used for chemical analysis in planetary landers. The smallest such instruments, which use MEMS technology, are still not at a scale that could be accommodated within a probe of the

size of the daisy. Blain, Cruz and Flemming [4] have produced a useful survey of micro-miniaturised mass spectrometers, and conclude that the ion-trap principle of operation shows the most promise for miniaturisation. Such an instrument could be at a scale of a few centimetres, physically constructed from a stack of chips or wafers. There remains the problem of how to introduce a sample into the instrument. If a probe is static once landed, there is one sample opportunity, and the use of impact energy to excavate and move the sample seems an attractive option. Another element of the field sensor network scenario that was proposed as an advantage is the ability to make detailed maps of such parameters as soil chemistry. This is indeed an advantage in terrestrial applications, in which the regolith will often be obscured from overhead observation by foliage or artificial structures. When this is not the case, satellite imaging spectroscopy produces excellent soil chemistry maps. As an example of the quality obtainable, and of what might be expected from a sensor network, the following maps, derived from a series presented by Clark and Swayze [5], are presented. The first is derived directly from these images and shows the distribution of iron minerals around Cuprite, Nevada.

Figure 2: Fe distribution around Cuprite, NV.
The image represents an area 2 km on a side, and provides a very detailed account of the mineral distribution in that area. This image has been used as the basis for a simulation of the results that might be expected from a sensor network sensing for evidence of the same chemicals. Using the Dingo simulator [6], a network of sensors was randomly distributed over the 2 km square, and the value of the image in Figure 2 at each sensor's position was used as the output of the sensing device at that point.
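This sampling-and-reconstruction experiment can be reproduced in miniature. The sketch below is a minimal stand-in, not the Dingo simulator itself: a synthetic smooth field replaces the Cuprite image (whose data is not reproduced here), the grid size and node counts are scaled down for brevity, and the map is rebuilt with a simple Shepard (inverse-distance-weighted) interpolation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Synthetic stand-in for the Figure 2 image: a smooth scalar field
# on a 60 x 60 pixel grid (scaled down from the 2 km survey area).
n = 60
yy, xx = np.mgrid[0:n, 0:n]
field = np.sin(xx / 10.0) * np.cos(yy / 14.0)

def simulate(num_nodes):
    """Scatter nodes at random, read the field at each node's pixel,
    rebuild the whole map by Shepard inverse-distance weighting, and
    return the mean absolute reconstruction error."""
    nodes = rng.integers(0, n, size=(num_nodes, 2)).astype(float)
    readings = field[nodes[:, 1].astype(int), nodes[:, 0].astype(int)]
    gy, gx = np.mgrid[0:n, 0:n]
    q = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    d = np.linalg.norm(q[:, None, :] - nodes[None, :, :], axis=2)
    w = 1.0 / (d ** 2 + 1e-9)          # eps guard: a node on the pixel dominates
    est = (w @ readings) / w.sum(axis=1)
    return np.abs(est.reshape(n, n) - field).mean()

# More nodes give a better map, but convergence is slow -- the
# effect seen when moving from 1000 to 3000 nodes in the paper.
for num in (30, 100, 300):
    print(num, round(simulate(num), 3))
```

As in the figures that follow, the reconstruction improves with node count but only gradually, which is the point of the comparison with imaging spectroscopy.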

The simulated sensor network was programmed to produce a map using the Shepard interpolation method, as has been reported previously [7]. Figure 3 shows the map produced by a 1000 node network. While this is recognisably the same area, the map is clearly of much poorer quality than the original, which could have been obtained using an orbiting imaging spectrometer. Figure 4 shows the same image, but now produced using a 3000 node network. Trebling the number of sensors has clearly made a considerable improvement to the image quality, but it is still considerably poorer than the original. Finally, Figure 5 shows the image from the 1000 node network, with an overlaid contour map. Although this is a difference of presentation only, it does make the map more readable.

Figure 3: A map produced using 1000 sensor nodes.

Figure 4: A map produced using 3000 sensor nodes.

Figure 5: Isopleths added to the 1000 sensor map.

These maps use a relatively primitive method of interpolation. Improved interpolation methods, together with multivariate and model-based techniques, might improve the interpolated maps somewhat, but the satellite spectroscopy is a hard target to hit. In conclusion to this section, it can be seen that a sensor network, unless very densely populated, cannot compete in terms of detail with imaging spectroscopy. However, it may well be a useful adjunct to it, allowing calibration of the satellite images, and determination of some species with more accuracy than is otherwise possible. Sensor networks would also come into their own in situations where spectroscopy was impracticable, for instance under impenetrable atmospheres.

VII. IMAGING

The ‘insect eye’ on the daisy was not, in truth, a practical solution to a requirement for visual imaging. The objective of the design was to gain a 360º field of vision and, given the arbitrarily determined weight and size limitations, to do it without moving parts. In practice, it would be difficult to make lenses with the required characteristics, and distribution of the imaging arrays on the chip would be difficult. In a more realistic scenario, there would be a number of requirements for visual imaging. One was mentioned in Section IV: precision mapping by stereoscopic imaging. Widely spaced sensor nodes could provide very accurate localisation of objects using triangulation, provided that the nodes could be accurately localised and angles of incidence could be accurately determined. The ‘insect eye’ would not perform this function well. A better solution would be a camera which could be panned. A larger chip would allow a very high resolution image

sensor, which could allow detailed inspection of objects, even at some distance.

VII. SEISMOLOGY

The original daisy included an accelerometer. The motivation was simply that seismology is of potential importance in planetary investigations, and accelerometers are a well understood component of wireless sensor networks. Subsequently, it has become evident that field sensing of seismic events (that is, monitoring the same event over a large area) provides an important additional capability: the ability to undertake seismic tomography [8]. This means that, provided seismic activity was present, it would be possible to use the sensor network to make an image of the subterranean structure. Even in cases in which there was little seismic activity, releasing an object to impact the planet at speed could provide an event which could be used to probe underneath the surface.

VIII. METEOROLOGY

Another capability of sensor networks, foreseen in the original scenario, is the ability to make meteorological measurements and map them precisely over an area. In agricultural work, the ability to understand microclimates is very important. Whether or not microclimates are as important in extraterrestrial applications is a question for meteorologists. Certainly, a sensor network can make accurate maps, of a comparable quality to those in Figures 4 and 5, of temperature, atmospheric pressure and wind speed and direction. In terrestrial applications, other factors such as CO2 and H2O content are often sensed also. Since electronic gas sensing is a relatively well developed art, it would be possible to detect a number of gases using well understood and mature technology. If the microscale mass spectrometer discussed in Section VI were available, it could also be turned to atmospheric sensing, and would provide a flexible, general purpose instrument.
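As a concrete illustration of the field-seismology capability discussed above, the sketch below locates an event (or deliberate impactor) epicentre by a grid search over first-arrival times recorded across a node field. The node layout, coordinates and wave velocity are all invented for illustration; a real implementation would need a proper velocity model rather than the uniform speed assumed here.

```python
import numpy as np

def locate_event(node_xy, arrival_times, velocity, extent=100.0, step=1.0):
    """Grid-search an event epicentre from first-arrival times,
    assuming a uniform propagation velocity. Each candidate point is
    scored by how consistent the implied origin times are across all
    nodes: at the true epicentre they agree, so the variance is zero."""
    xs = np.arange(0.0, extent + step, step)
    best, best_score = None, np.inf
    for x in xs:
        for y in xs:
            d = np.hypot(node_xy[:, 0] - x, node_xy[:, 1] - y)
            origin = arrival_times - d / velocity  # implied origin time per node
            score = origin.var()
            if score < best_score:
                best, best_score = (float(x), float(y)), score
    return best

# Hypothetical network: five nodes on a 100 m square, an event at
# (40, 70), and an assumed wave speed of 300 m/s (illustrative only).
nodes = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
event = np.array([40.0, 70.0])
t = np.hypot(*(nodes - event).T) / 300.0
print(locate_event(nodes, t, 300.0))  # recovers (40.0, 70.0)
```

Full travel-time tomography inverts many such events against an unknown velocity field, but the arrival-time geometry above is the common starting point.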
IX. DISTRIBUTED SYSTEM DESIGN

Most of the sensing modalities proposed above come into the category of field sensing: that is, the use of an array of sensing devices to determine the value of some measurand over a surface or volume. There have been a number of terrestrial examples of such networks. For example, in the Microclimate Sensor and Image Acquisition Networks system reported by the Center for Embedded Network Sensors (CENS) at UCLA [9], multi-function sensor nodes have been deployed to allow, amongst other functions, maps to be made of climate data. The promise of field sensing is to produce a detailed map directly from the sensed data, using a massive plurality of sensors providing thousands of data points, as depicted in the simulations in Section VI. The scale of networks illustrated there is around the current state of the art. A major attempt to demonstrate a system of similar size is the ExScal (short for 'Extreme Scale Networking') project [10]. This project

started with the goal of deploying a 10,000 node sensor network, and built the world's largest deployed WSN, some 1,200 nodes, installed over an area of 1.3km by 300m. The network was designed to detect and track intruders using acoustic methods. ExScal remains the most thoroughly researched and documented massively plural network, and thus is a reference point for future applications. Bapat et al., in their summary of the results of the ExScal project [10], cite the problems expected to be encountered in the design of a very large network. They are:

• Failure of sensor network protocols to scale: the main reason given for this is the inability of protocols which work at a small scale to deal with node failure at the large scale.
• Complexity of integration: here the issue is the interaction of the multiple protocols which deal with issues such as medium access, reliable communication, sensing and time synchronisation.
• Lack of sufficient fault data: given the susceptibility of networks to faults, it was argued that there was a need for more real fault data in a working context.
• Unpredictability of network behaviour: essentially the consequence of the above. Little is known about the behaviour of networks on this scale, and it is not clear that scalable solutions have been validated in real-life use.

The design approach used to address these concerns was that of a 'planned architecture': the imposition of various design constraints to simplify the operational complexity of the system. Nonetheless, it would seem, when the results are studied, that ExScal was at the limit of the network size feasible with such an approach. To simplify the localisation of nodes and the interpretation of the data from them, the system was laid out on a rectangular grid, with nodes being located on installation using a hand-held GPS device. Even so, 11.4% of the nodes were incorrectly located.
To ensure reliable data transmission, a three-level hierarchical architecture was adopted, with different specialised hardware at each level. The end-to-end reliability achieved using this architecture was 85.61% for the best traffic type (low bandwidth) and 55.14% for the worst (high bandwidth). A conservative attitude to hardware specification was adopted; despite this, 6% of the nodes were non-functional after the 15 day trial. Loss of a second-tier node caused loss of a complete section of the network. A significant amount of node malfunction (7%) was associated with reprogramming of the nodes. Without doubt, completion of the ExScal network was a considerable achievement, but it has not established 'planned architecture' as a suitable basis for networks an order of magnitude larger than was achieved there.
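One way to see why flat protocols struggle at this scale is to look at how per-hop losses compound over the long multi-hop routes a very large network implies. A minimal calculation (the per-hop delivery probability here is invented for illustration; ExScal's own reported figures are those quoted above):

```python
# End-to-end delivery probability over h hops, assuming each hop
# succeeds independently with probability p. Even a good link
# compounds badly over long routes, which is one reason protocols
# that work at small scale degrade in very large networks.
def end_to_end(p: float, hops: int) -> float:
    return p ** hops

for hops in (1, 5, 10, 20, 40):
    print(f"{hops:>2} hops: {end_to_end(0.98, hops):.3f}")
```

A 98% link, respectable in isolation, delivers less than half its traffic over forty hops, which is why hierarchical architectures or in-network aggregation become attractive at scale.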

Furthermore, the careful planning and installation that ExScal required, and the attrition it suffered over a short period, would not be feasible within the planetary exploration scenario put forward. The network must be self-configuring and self-maintaining. This was the major reason for the adoption of a flat, peer-to-peer architecture: the problem of self-configuration of a network becomes somewhat simpler when all nodes are the same. Various protocols for self-configuration have been published, usually designed to preserve power as much as possible. Examples are LEACH [11] and MuHMR [12]. It should be noted that these protocols have been verified in simulation only; to date, no-one has produced a network of autonomous sensors sufficiently large or long-lived to test them in real life. Another point to be noted is that the vast majority of protocols and simulated results assume omnidirectional radio data transmission. The directional, optical transmission envisaged in the daisy is completely outside the parameters of these protocols. A safer choice would be to revert to conventional radio transmission, which has been well characterised. For nodes on a very small scale, this probably rules out direct communication with the satellite, due to the lack of directivity at these longer wavelengths. Thus, such a system would probably use specialised ground stations to communicate back home. The vulnerability of these could be eliminated by replicating them sufficiently: the sensor network would still operate on an ad hoc, peer-to-peer basis, routing messages back to one of the ground station sinks. So long as one sink remained operational, and a route to it could be found from all nodes, the network would remain operational. Energy saving is also the reason for localising processing to the nodes as much as is possible. Typically, in sensor networks, power consumption is dominated by data transmission.
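The multi-sink, peer-to-peer routing just described can be sketched as a multi-source shortest-hop computation: a breadth-first search started from every surviving ground-station sink gives each node a hop count and a next hop toward its nearest sink. The topology and node names below are invented; the point illustrated is that the network stays operational so long as any one sink remains reachable.

```python
from collections import deque

def route_to_sinks(neighbours, sinks):
    """Multi-source BFS from all sinks. Returns each node's hop count
    to its nearest reachable sink and its next hop toward it; nodes
    absent from the result are cut off from every sink."""
    dist, next_hop = {}, {}
    queue = deque()
    for s in sinks:
        dist[s], next_hop[s] = 0, s
        queue.append(s)
    while queue:
        u = queue.popleft()
        for v in neighbours.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                next_hop[v] = u        # forward data toward the sink via u
                queue.append(v)
    return dist, next_hop

# Toy chain topology with a ground-station sink at each end.
neighbours = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "F"], "F": ["E"],
}
dist, _ = route_to_sinks(neighbours, sinks=["A", "F"])
print(max(dist.values()))   # 2: every node is within two hops of a sink

# Lose sink A: the network reroutes everything to F and stays operational.
dist, _ = route_to_sinks(neighbours, sinks=["F"])
print(max(dist.values()))   # 5: longer routes, but no node is cut off
```

Real protocols must also handle link asymmetry, duty cycling and energy-aware path selection, but the reachability argument is the same.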
A previous study has shown the potential energy savings attainable by selection of an appropriate processor, and by maximising processing so as to minimise data transmission [13]. The interpolation algorithms which produced the maps in Figures 3, 4 and 5 have been designed to be readily distributable and to minimise the communication used in their construction. This is important in an application such as planetary exploration networks, where the aim must be to minimise the necessity to transmit data to the host satellite. Rather than every node being required to communicate raw values, such maps can be produced within the network and transmitted by a single node. That node, in possession of the complete output, may use effective data compression techniques to minimise the actual data transmitted.

X. ALTERNATIVE NANOPROBE DESIGN

While visually attractive, the daisy no longer seems to be a very practical design. While it was based on millimetre scale chips, some of the essential components, such as the mass spectrometer, require centimetre scale chips. In designing a node for use in large quantity, it is necessary to make it as light as possible, and to pack it in the entry vehicle as tightly as possible, since it will be necessary to deploy several thousand of them. An attractive form factor, given that many of the components are based on silicon wafers, is a disc. These can be packed within the entry vehicle in the same way

as coins are stacked. The stacks would be retained by wires, which would be used to charge and program the nodes while within the entry vehicle. A possible arrangement is shown below: (camera ready version will contain diagram)

An advantage of the disc form factor is that power cells are conveniently manufactured in disc form, so a power cell forms the lowest layer, putting the mass at the bottom to help ensure that the node lands the right way up. The next component is the stack of wafers that forms the mass spectrometer. These would be topped by several multi-chip hybrids, integrating the circuitry and embedded sensors (such as accelerometers). Finally, another hybrid would contain the external sensors (image, pressure, temperature, gas sampling valves) and the deployment drives. During descent, the top surface would be covered with a number of layers, which would be unfolded by the deployment drives. These could include one or more solar cells for power, a loop antenna for radio transmission and a steerable mirror for the image sensor. Although larger and heavier than the original daisy, this coin-shaped probe is still small enough to allow deployment of very many for the weight of an existing orbiter. Undoubtedly, if a mission were planned in earnest, the real design would be very different from this, but as a concept sketch it suggests that a multi-modal planetary exploration node is feasible with current technology.

ACKNOWLEDGMENT

The original Mars daisy scenario was a team effort. Contributors included Sarah Mount, Andree Woodcock, John Burns, Jim Tabor, James Shuttleworth and Elena Gaura.

REFERENCES
[1] Newman, R.M., Gaura, E., Tabor, J., Mount, S. (2005), Hardware Architectural Assessments for Cogent Sensors - Requirements derived from a Planetary Exploration Scenario, Proc. of Second International Workshop on Networked Sensing Systems, INSS 2005 (IEEE & SICE), June 2005, San Diego, pp. 113-118.
[2] Woodcock, A., Burns, J., Gaura, E., Newman, R.M., Mount, S. (2006), Daisies on Mars: disseminating scientific information to unmotivated audiences, Proceedings of IEA 2006, 16th World Congress on Ergonomics, July 2006, Maastricht.
[3] Chen Canfeng, Ma Jian, Yu Ke, Designing Energy-Efficient Wireless Sensor Networks with Mobile Sinks, Workshop on World-Sensor-Web (WSW'2006), in Proc. SenSys 2006.
[4] Matthew G. Blain, Dolores Cruz, James G. Flemming, Micro Mass Spectrometer on a Chip, SAND2005-6838, Sandia National Laboratories, Albuquerque, New Mexico, Nov 2005.
[5] Clark and Swayze, Summaries of the 6th Annual JPL Airborne Earth Science Workshop, March 4-8, 1996.
[6] Mount, S.N.I., Newman, R.M., Gaura, E. (2005), A simulation tool for system services in ad-hoc wireless sensor networks, Proceedings of 2005 NSTI Nanotechnology Conference and Trade Show (Nanotech '05), May 2005, Anaheim, California, USA, Vol. 3, Ch. 7, pp. 423-426.
[7] Hammoudeh, Interpolation.
[8] Stewart, R. R., Exploration Seismic Tomography: Fundamentals, Society of Exploration Geophysicists, 1991.
[9] http://research.cens.ucla.edu/projects/2006/Terrestrial/Microclimate/default.htm
[10] Sandip Bapat, Vinod Kulathumani, and Anish Arora, Analyzing the Yield of ExScal, a Large-Scale Wireless Sensor Network Experiment, 13th IEEE International Conference on Network Protocols (ICNP), 2005.
[11] Leach.
[12] Hammoudeh, MuHMR.
[13] Shuttleworth, J.K., Gaura, E.I., Newman, R.M. (2006), Surface Reconstruction: Hardware Requirements of a SOM Implementation, Proceedings of the ACM Workshop on Real-World Wireless Sensor Networks (REALWSN'06), June 2006, ACM, ISBN 1-59593-431-6, pp. 95-96.