
Abstract:

Speckled Computing is an adventurous research program, which offers radical new ideas
for information technology in the future. This will be realized by minute, autonomous
semiconductor specks – each speck will encapsulate capabilities for sensing,
programmable computation, and wireless networking. Computing with networks of
specks (known as specknets) will enable linkages between the material and digital world
with a finer degree of spatial resolution than hitherto possible, which will be a
fundamental enabler for truly ubiquitous computing. The Speckled Computing Research
Consortium slices across traditional academic disciplines and organizational boundaries
to bring together physicists, electronic engineers and computer scientists from five
Scottish universities to provide an integrated technological push towards the realization
of specks and specknets. This paper outlines the vision underlying the project, and gives
an overview of some of the considerable research challenges.

Speckled Computing and Education


Liveblog from Technology Coffee Morning, Jennie Lee Labs, 9 September 2009, given
by Eileen Scanlon (IET) and D K Arvind, Director of the Speckled Computing research
consortium at Edinburgh University.

Context

For Eileen, it’s the Personal Inquiry project – large collaboration with Nottingham.
Inquiry learning in science; a little over halfway through a three-year project.

Hardware is all reasonably off-the-shelf equipment for scientific data capture. Literature-
grounded method of supporting the inquiry process by involving young people in
empirical work; technology is a way of enabling them to work through an actual cycle of
focused investigation, rather than a simulation. Exemplar topics: microclimates, urban
heat islands.
A lot of previous work to support inquiry learning is about modelling phenomena and
processes, often using simulations.

Student feedback says they appreciate real data collection. Project is not tackling issues
of modelling and immediacy of feedback.

SensVest, developed at Birmingham as part of Lab of Tomorrow project – vest with
accelerometers. Results from pilot trials not very positive. Hypothesis was that this
would be better than looking at readymade or simulated graphs; but not clear that it was.
Thought could be because of delay in feedback.

So conversation here is about comparing predictions of a model with data collected in a
real-time sense.

Speckled Computing

D.K. Arvind – a high-level overview. Funded by EPSRC. Not going into the technological
detail. Work by concentrating on underlying science and technology to realise the specks,
and networks of them – specknets – working very closely with domain experts to see how
the specks can be used in applications.

Internet has 1 billion hosts today. IPv6 will support >35 trillion separate subnets, and
each one in turn can connect millions of devices. Potential capacity to name/connect
every grain of sand. Smart objects – smart meaning objects know something about their
environment, and location-aware – not necessarily absolute, but relative: who are my
neighbours.
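The subnet arithmetic behind the ">35 trillion subnets" figure can be sanity-checked. A sketch; the 45-bit global routing prefix and 64-bit interface identifier below are the conventional IPv6 allocation assumptions, not numbers from the talk:

```python
# IPv6 addresses are 128 bits wide. Under the conventional allocation,
# a 45-bit global routing prefix yields the "trillions of subnets"
# figure, and each /64 subnet keeps 64 bits of interface identifier
# for hosts ("every grain of sand").
ROUTING_PREFIX_BITS = 45   # assumed; gives the ~35 trillion figure
INTERFACE_ID_BITS = 64

subnets = 2 ** ROUTING_PREFIX_BITS         # separately routable sites
hosts_per_subnet = 2 ** INTERFACE_ID_BITS  # interface IDs in each subnet

print(f"subnets: {subnets:,}")             # 35,184,372,088,832
print(f"hosts per subnet: {hosts_per_subnet:,}")
```

So 2^45 is just over 35 trillion, matching the claim, with each of those subnets able to address vastly more than "millions" of devices.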

Vision: endow persons/objects with sensing, processing and wireless networking
capabilities. Aim to bridge the physical and virtual worlds. (Just what I’m interested in!)

Sensor intelligence as a telecom service – plural services, access agnostic.

Specks: minature programmable devices which can sense, compute and network
wirelessly. Autonomous, rechargeable, energy scavenging (e.g. photovoltaic cells tuned
to internal lighting – focus on built environment). Specks non-static and unreliable –
design protocols for expected failure and intermittent connectivity.

Tens/hundreds of specks collaborate as dense programmable network – a Specknet.


Fine-grained distributed computation – the resources (energy, bandwidth, computing) are
scarce here. Thirty years ago (or more!) the integrated microprocessor replaced box of
different electronics with a single unit, led to a revolution. So here, encapsulate sensing,
processing, and networking in a single ‘device’. If these are unobtrusive, lightweight …
This is an enabler technology for Ubiquitous Computing.

Family of devices – a 32-bit (large) microserver first, then an 8-bit (medium) client
which can connect up to four sensors, miniaturising towards an 8-bit 5mm cube client.
Freespace optics as comms – useful when devices are stationary. Would love to put
sensors in e.g. the Jennie Lee Labs – because they’re static, can have line-of-sight. Very
small, low-power lasers. When on people, need radio – but that’s wasteful of energy
because you radiate in all directions rather than directionally.

Next device: ‘Orient’ – 3-axis gyroscopes, accelerometer, temperature – attach to the
limbs, calculate orientation on the devices themselves: leads to real-time capture of 3D
motion – liberated from the studio. Lots of applications.
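On-device orientation of this kind can be illustrated with gyroscope dead reckoning: each sample, the angular-rate reading is turned into a small rotation and composed onto the current orientation quaternion. A minimal gyro-only sketch; the function name is invented for illustration, and a real device like Orient would also fuse accelerometer data to bound drift:

```python
import math

def integrate_gyro(q, omega, dt):
    """One dead-reckoning step: rotate quaternion q = (w, x, y, z) by
    the angular rate omega = (wx, wy, wz) in rad/s over dt seconds."""
    wx, wy, wz = omega
    angle = math.sqrt(wx * wx + wy * wy + wz * wz) * dt
    if angle == 0.0:
        return q
    # Unit rotation axis and half-angle quaternion for this step.
    ax, ay, az = wx * dt / angle, wy * dt / angle, wz * dt / angle
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    dq = (c, ax * s, ay * s, az * s)
    # Hamilton product q * dq applies the incremental rotation.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)
```

Spinning at π/2 rad/s about z for one second turns the identity quaternion into the 90° rotation about z. Gyro-only integration drifts within seconds, which is exactly why sensor fusion on the device matters.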

Also: Energy Neutral (EN) platform – capturing energy from photovoltaic cells.

Current motion capture methods: 1. Studio-based with many cameras and reflective
markers attached to the person; grab info from 6–8 cameras, stitch together to get a 3D
view – computationally/memory-intensive post-processing. Not real time unless very
high-end. Expensive – perhaps £30k per hour. Occlusion is a problem when capturing
multiple subjects – need more cameras, which means more post-processing.

2. Motion-capture suits. Wired suits, lycra, with a bulky base station/backpack which
routes the sensor data to a high-end machine to do the processing (like Gollum).

3. Joint angle sensors. Bulky exoskeleton, cumbersome, hinders movement – not widely
used.

So want: fully wireless, real-time and interactive, easy to use, ‘banalise the technology’,
democratise its usage. Parallel with desktop publishing.

Orient Motion Capture system – currently sensors are about 30mm, need to miniaturise.
(Video using Motion Builder for capture at www.specknet.org)

Can use real avatars: telepresence; bipedal robots operating in a harsh environment – use
entire body as interface. Also in games. Unobtrusive participation in simulations
combining real and virtual players – ‘serious games’.

Applications – lots – Digital media (motion capture, games, sports); Health – with
Lothian – looking at:

• Chronic Obstructive Pulmonary Disease (COPD) – non-invasive monitoring of
breathing (devices on the chest wall) – can do analysis/monitoring remotely, with
patient at home
• Intensive care
• Clinical gait analysis – not just a few minutes in the hospital, but captured over,
say, a week – is there variation over the day, different surfaces, slopes and so on.
Much richer information for diagnosis.
• Physiotherapy. Program them with ideal movement, track improvement over time.
Transfer data. Can see how well they’re doing.
Videos/applications

Showed avatar control to Linden Labs (Second Life). Not keen because would flood their
network.

Edinburgh Science Festival 2006 – learning in informal settings. Put sensors on break
dancers (8–10 year olds), give them ideas about physics e.g. angular momentum,
centripetal forces and so on, based on their breakdances. Competition – who can spin on
their head fastest. Not saying you’re teaching – surreptitiously getting them to do things.

Golf swing analysis – challenging, limited bandwidth, 2-3 hour tour round club. Data
coming in to mobile phone. Modelled as double pendulum – arms are one pendulum,
connected to club which is the other. Equation of motion for double pendulum using
Newton’s Laws. Get visual feedback of swinging club in the plane – angles between parts
of the arm and so on. Applied sports science unit with biomechanics people helping
interpret.
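The double-pendulum picture of the swing can be made concrete with the textbook point-mass equations of motion. This is a generic sketch, not the consortium's biomechanical model; the unit masses and lengths and the simple Euler integrator are illustrative choices:

```python
import math

G = 9.81  # m/s^2

def accelerations(t1, t2, w1, w2, m1=1.0, m2=1.0, l1=1.0, l2=1.0):
    """Angular accelerations of a point-mass double pendulum.
    t1/t2 are the 'arm' and 'club' angles from vertical; w1/w2 their rates."""
    d = t1 - t2
    den = 2 * m1 + m2 - m2 * math.cos(2 * d)
    a1 = (-G * (2 * m1 + m2) * math.sin(t1)
          - m2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * m2 * (w2 * w2 * l2 + w1 * w1 * l1 * math.cos(d))
          ) / (l1 * den)
    a2 = (2 * math.sin(d) * (w1 * w1 * l1 * (m1 + m2)
                             + G * (m1 + m2) * math.cos(t1)
                             + w2 * w2 * l2 * m2 * math.cos(d))
          ) / (l2 * den)
    return a1, a2

def step(state, dt=1e-3):
    """One forward-Euler step of the state (t1, t2, w1, w2)."""
    t1, t2, w1, w2 = state
    a1, a2 = accelerations(t1, t2, w1, w2)
    return (t1 + w1 * dt, t2 + w2 * dt, w1 + a1 * dt, w2 + a2 * dt)
```

Stepping the state forward from measured initial angles and rates gives the predicted swing plane that the phone visualisation could be compared against.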

Interacting with robots – Trying to program behaviour, especially standing on one leg,
walking etc, is done with heuristics, an army of programmers over weeks. Can we capture
human motion, analyse it, run it on a simulator with a physics engine, then select
candidates and run them on a real robot? (Extend life of robot by being selective in which
gaits to use!) Get training data from human, segment into phases. Fantastic videos of arm
swinging,
standing on one leg, sit-ups: and a great walk by a robot, with no human intervention in
the learning algorithm.

“You need to demonstrate before anyone will start adopting these things” – very true.

Health scenarios – Need to validate data. Breathing rate during ventilation – breathing
rate validated all the time. Can capture coughs, overall activity – e.g. go to sleep, turn
right/left etc. Prosthetic limb adjustment – done by eye at the moment; with their capture
data, can make it much closer to the normal/optimal setup. One example – couldn’t do it
for climbing up a slope, but can now.

Speckled Computing Applications CEntre (SPACE)

Exists to evangelise! Encourage people to experiment with the technology. About fifteen
applications projects; very keen – due to funding! – to see the technology applied, and
making a difference.

Example: projectile motion. Take a soft ball with Orient device inside. Instrument
thrower with three devices. Thrower throws ball, can detect instant when ball leaves the
hand, so only acceleration due to gravity thereafter. Expect an arc defined by good old
equation of motion. Study in inquiry learning: try using tangible interface to support
learning the laws of projectile motion. Masters student had a first attempt at this.
(The research question here – for me and people like us – is what can you do if motion
capture is cheap, easy and near-ubiquitous? Exciting!)
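The release-detection trick rests on the fact that an accelerometer measures proper acceleration, which drops to roughly zero once the ball is in free flight; from that instant only gravity acts and the arc follows standard kinematics. A minimal sketch, where the threshold, function names and drag-free model are all illustrative assumptions:

```python
import math

G = 9.81  # m/s^2

def detect_release(accel_mags, threshold=2.0):
    """Index of the first sample where measured acceleration magnitude
    drops near zero: in free flight an accelerometer inside the ball
    reads ~0, because it measures acceleration relative to free fall."""
    for i, a in enumerate(accel_mags):
        if a < threshold:
            return i
    return None

def arc(v0, angle_deg, steps=50):
    """Ideal drag-free trajectory (x, y) after release."""
    th = math.radians(angle_deg)
    t_flight = 2 * v0 * math.sin(th) / G  # time until y returns to 0
    pts = []
    for k in range(steps + 1):
        t = t_flight * (k / steps)
        pts.append((v0 * math.cos(th) * t,
                    v0 * math.sin(th) * t - 0.5 * G * t * t))
    return pts
```

The predicted arc could then be overlaid on the positions the learners actually observed, which is the model-versus-data comparison the inquiry activity is after.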

Questions

Don’t detect physical location, but can infer it. Treat human body as articulated system of
rods. Marker system requires precise placing of markers on parts of the body – here can
be anywhere. Camera-based gives you position information, but have to infer orientation
and acceleration.

Capture at 256 times a second. Doable because done on the devices themselves, so can be
done in real time. Base station 33g, sensors 13g. Sensors can talk to each other, but here
they all talk to the base station.

Feedback not just visual, but audio – a tone – good e.g. in physiotherapy or golf swing.
Give audio feedback on how close it is.

Visual feedback on phone for golf – haven’t done any evaluation. They demonstrate they
can do it, then work with end users to evaluate it. They work on the speck inside,
improving, miniaturising. Applications are collaborations.

Ball-throwing example: very interesting question as to whether the embodied action of
doing it makes a difference versus looking at graphs/models. You find the literature says
more about the confusion in dealing with messy data. The physics ed literature believes
in immediacy and theory-building – but it is not proven that this is better. Could be that
the finding is that you learn better without going near real things! Some research is not
finding much difference – or finds more difficulties with real-world data. Lot of rhetoric
about real, authentic experiences as important for learning … but it needs to be explored,
and can be now. The motivational argument is stronger than the representational one.

iPhone would’ve been a good bet; have worked on WindowsCE and don’t much enjoy it.
Using those in experiments with NHS Lothian. You need a load of software, it’s messy.
Happy to work with people to do stuff on more phones, but that’s not their zone.
Delighted to work with people to port it – can give you the hooks etc.

Possible applications in Formula 1, Nokia open lab network.

Dance also – tango dancers. Can get metrics about e.g. coupling of motion between
leader and follower.

Separate centre for applications, several students, is geared up to use stable versions of
their platforms, very open to collaboration.

Motor-control skills development in pre-school children. Ten-week study in a nursery,
currently analysing data. Longitudinal study, exploring whether you can spot
developmental difficulties. Previously only possible in very expensive, constrained
environment of a lab. Now can do in ecologically-sound environment – where they
normally play.

Wii only gives you acceleration; here get the biomechanics of it.

Speckled Computing: Evolution and Challenges

Speckled computing is an emerging technology in which data will be sensed by minute
(ultimately around one cubic millimetre) semiconductor grains called specks. A wireless
network of thousands of specks is called a SpeckNet, and the distributed processing of
information on this programmable network is termed Speckled Computing. Specks are
not new, but a re-design of sensor nodes at the nano scale. These nodes will operate
without any fixed infrastructure and are intended to be deployed in large quantities to
increase the overall throughput of the system. Furthermore, specks can be deployed in
places that are difficult to reach. These important features impose various requirements
on specks. This paper begins with a brief overview of speckled computing and continues
with a discussion of the evolution of sensor technology along with key research projects.
The paper also reviews key challenges related to the autonomic and ad hoc nature of
specks.

Speckled Computing offers a radically new concept in information technology that has
the potential to revolutionise the way we communicate and exchange information.

Specks will be minute (around 1 mm³) semiconductor grains that can sense and compute
locally and communicate wirelessly. Each speck will be autonomous, with its own
captive, renewable energy source. Thousands of specks, scattered or sprayed on the
person or surfaces, will collaborate as programmable computational networks called
Specknets.
Computing with Specknets will enable linkages between the material and digital worlds
with a finer degree of spatial and temporal resolution than hitherto possible; this will be
both fundamental and enabling to the goal of truly ubiquitous computing.

Speckled Computing is the culmination of a greater trend. As the once-separate worlds of
computing and wireless communications collide, a new class of information appliances
will emerge. Where once they stood proud – the PDA bulging in the pocket, or the
mobile phone nestling in one’s palm – the post-modern equivalent might not be explicit
after all. Rather, data sensing and information processing capabilities will fragment and
disappear into everyday objects and the living environment. At present there are sharp
dislocations in information processing capability – the computer on a desk, the
PDA/laptop, mobile phone, smart cards and smart appliances. In our vision of Speckled
Computing, the sensing and processing of information will be highly diffused – the
person, the artefacts and the surrounding space, become, at the same time, computational
resources and interfaces to those resources. Surfaces, walls, floors, ceilings, articles, and
clothes, when sprayed with specks (or “speckled”), will be invested with a
“computational aura” and sensitised post hoc as props for rich interactions with the
computational resources.

Speckled Computing offers a radically new concept in information technology that has
the potential to revolutionise the way we communicate and exchange information. Specks
-- minute, autonomous, semi-conductor grains that can sense and compute locally and
communicate wirelessly -- can be sprayed into the atmosphere, onto surfaces or onto
people, and will collaborate as programmable computational networks called SpeckNets
which will pave the way to the goal of truly ubiquitous computing. Such is the vision of
the Speckled Computing Project -- however, although the technology to build such
devices is advancing at a rapid rate, the software that will enable such networks to self-
organise and function lags somewhat behind. In this paper, we present a framework for a
self-organising SpeckNet based on Cohen's model of the immune system. We further
suggest that the application of immune inspired technologies to the rapidly growing field
of pervasive computation in general, offers a distinctive niche for immune-inspired
computing which cannot be filled by any other known technology to date.

Speckled Computing Group


Specks can sense, compute and communicate information wirelessly.

Speckled Computing offers a radically new concept in information technology that has
the potential to revolutionise the way we communicate and exchange information.

Specks are small semiconductor devices which can sense and compute locally and
communicate wirelessly. Each speck is autonomous, with its own captive, renewable
energy source. Thousands of specks, scattered or sprayed on the person or surfaces, can
collaborate as programmable computational networks called Specknets.

The Speckled Computing group actively works with organisations to deploy innovative
new applications in areas such as:
Health
o Remote Patient Monitoring
o Gait Analysis

Industrial
o Industrial process control
o Building monitoring

Digital Media
o Motion capture for games and animation
o Avatar and games control

Sports & Leisure
o Body performance monitoring
o Coaching assistance & feedback

For more information please visit the Speckled Computing Group website:
http://www.specknet.org/
Abstract
Speckled computing (Arvind, D.K. and Wong, K.J., Proc. IEEE Int. Symp. on Consumer
Electronics, p.219-23, 2004) is an emerging technology in which data is sensed in minute
(eventually one cubic millimetre) semiconductor grains called specks. Information is
extracted in situ from each speck and is exchanged and processed in a collaborative
fashion in a wireless network of thousands of specks, called a specknet. Specks are not
assumed to be static, and therefore estimating and maintaining the logical location of the
mobile specks in a network is essential for a number of speckled computing and sensor
network applications. A novel lightweight distributed algorithm is introduced for this
purpose and simulation results are presented to determine the goodness of the algorithm
for different parameters. The algorithm was also successfully ported to a hardware
prototype of the speck called the ProSpeckz. The problems and issues of porting the
algorithm onto such a resource-constrained hardware platform are discussed. Finally, the
paper concludes with plans to improve the algorithm.
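The flavour of such a localisation algorithm can be illustrated with a generic spring-relaxation localiser, in which a speck iteratively nudges its position estimate to agree with ranges measured to reference nodes. This is a textbook sketch of the general idea, not the paper's actual lightweight algorithm; the fixed anchors, step size and noise-free ranges are all simplifying assumptions:

```python
import math

def localise(est, anchors, ranges, step=0.1, iters=2000):
    """Spring relaxation / gradient descent: move the estimate until its
    distance to each anchor matches the corresponding measured range."""
    x, y = est
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, ranges):
            dx, dy = x - ax, y - ay
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                continue
            err = dist - d         # positive if the estimate is too far out
            gx += err * dx / dist  # pull (or push) along the anchor line
            gy += err * dy / dist
        x -= step * gx
        y -= step * gy
    return x, y
```

With three non-collinear anchors and exact ranges the estimate converges to the true position; in a specknet each mobile node would run the same update against its one-hop neighbours' current estimates instead of fixed anchors, which is what makes maintaining logical location a distributed problem.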

"Speckled" Computing

We talk about nanofactories, but first the world will be changed by Fab Labs. One day
we'll have nanobots, but on our way to that goal we'll have "speckled computing."

SCOTTISH scientists have developed a computer the size of a matchstick head,
thousands of which can be sprayed onto patients to give a comprehensive analysis of their
condition.

Speckled computing - some of the most advanced computing technology in the world -
is currently being researched and developed by a group of Scottish experts.
The individual appliances, or 'specks', will form networks that can be programmed like
ordinary computers.
Spraying them directly onto a person creates the ability to carry out different tests at the
same time, for example muscle movement and pulse rate. This allows a complete picture
of the patient's condition to be built up quickly.

The computing innovation, being developed by scientists at Edinburgh, Glasgow, St
Andrews and Strathclyde universities, will be displayed at the Edinburgh International
Science Festival next Friday as part of a talk by Damal Arvind, leading speckled
computing professor and director of the Scottish consortium.

Arvind said: "This is the new class of computing: devices which can sense and process
the data they receive. They also have a radio so they can network and there's a battery in
there as well, so they are entirely self-powered."

Spray-on computing. Wow.


Wife: "It's a hair spray."

Husband: "No, it's a web-enabled neural interface!"

Spokesman: "Calm down you two, it's both!"

This is one way that computing could begin to disappear into…everything.

UPDATE: In the comments Phil asks, "What happens when you're done with them...do
they just wash off?"

I would guess that now, at the dawn of "speckled computing", researchers will
attempt to gather these little matchhead-sized computers back up for use later.

But as these devices enter mass production and get smaller and cheaper... yeah, you'd
wear them for the day and lose them the next time you shower.

You'd get a new batch in your hair spray or deodorant.

The cool thing is the promise of what these devices (even though they are less than
nanobots or utility fog) could do. Instant communications via cell and Internet access.
Virtual reality or enhanced reality. Health monitoring. The list goes on.
Speckled Computing offers a radically new concept in information technology that has
the potential to revolutionise the way we communicate and exchange information. This
will be realised by minute autonomous specks, each of which encapsulates sensing,
programmable computation and wireless networking. Computing with minute specks will
enable linkages between the material and digital worlds with a finer degree of spatial and
temporal resolution than hitherto possible, which will be fundamental to truly ubiquitous
computing. The research consortium brings together world-class expertise in computer
science, electronic engineering and optoelectronics, which will conduct collaborative
research in the core areas and, importantly, at the interfaces between them. While this
provides an integrated technology push, industrial designers and educationalists
will collaborate in exemplar projects to demonstrate the impact of this research.

Speckled computing: is it infectious?


By bdra

On September 9 a ‘coffee morning’ in the Institute of Educational Technology at the
Open University will discuss speckled computing. That’s a new term to me and possibly
to you, but this kind of computing may have educational applications, as the speakers will
explain. Bloggers may find the abstract interesting, so I attach it here.

Abstract: D.K. Arvind from the School of Informatics, University of Edinburgh and E.
Scanlon from IET, Open University will lead a discussion on speckled computing and
potential applications in education. Eileen Scanlon will discuss the experiences of the
Personal Inquiry project, the Inquiry Learning cycle and potential applications in the area
of modelling. D.K. Arvind will discuss the concepts of speckled computing and the range
of work being carried out under the banner of the Research Consortium in Speckled
Computing, based at Edinburgh.

A specknet is a collection of autonomous specks which provides distributed services:
each speck is capable of sensing the physical world and processing the sensed data under
program control; the specks themselves are connected as a wireless network and
collaborate to process information in a distributed manner.

Specknets link the digital world of computers to the physical world of sensory data. A
network of Orient specks on the person, for example, is capable of tracking the
orientation of the limbs, and this information can be stored, manipulated and accessed
remotely over the internet. Computing with specknets, or Speckled Computing, affords
new modes of unencumbered interaction with the digital world, in which the physical
environment is the primary site of interaction.

The Orient specks have been used for fully wireless, full-body 3-D motion capture of
human motion in a variety of applications:
• real-time manipulation of characters in digital animation and virtual worlds
• interaction with physical robots and teaching them behaviours such as walking
• biomechanical analysis of the golf swing, cricket strokes, Tango and break dancing
• health-care (physiotherapy, gait analysis, non-invasive wireless respiratory rate
monitoring)
• analysis of throwing and balancing skills in pre-school children
• exploratory learning of concepts in physics such as projectile motion.
Abstract:

A specknet is a collection of autonomous specks which provides distributed services:
each speck is capable of sensing and processing the data under program control; the
specks themselves are connected as an ad-hoc wireless network which collaborates to
process information in a distributed manner. Specknets link the digital world of
computers to the physical world of sensory data. A specknet on the person, for example,
will be capable of tracking movements of the limbs, or the position of the person in the
environment, and this information can be stored, manipulated and accessed remotely
over the internet. Computing with specknets, or Speckled Computing, affords new modes
of unencumbered interaction with the digital world, in which the physical environment is
the primary site of interaction.

The future Internet Protocol Version 6 (IPv6) will support more than 35 trillion separate
subnetworks, each of which could connect millions of devices. We are moving inevitably
towards a future of connectedness of people and objects, i.e. objects which are sensitive
to their environment by being context- and location-aware. Specks endow objects with
the necessary sensing and processing capabilities, and the specknet bridges wirelessly
the physical world of objects and people, i.e. the world of sensory data, to the digital
world of the internet.

The talk will give a broad overview of the research undertaken in the Consortium – a
multidisciplinary collaboration of computer scientists, electronic engineers, physicists
and electrochemists drawn from five universities to realise 5Cube specks (5×5×5 mm).
Some results in leaderless, distributed algorithms for localisation and zone formation in
specknets will be presented. Video clips will be shown of applications using larger speck
prototypes, ranging from monitoring break dancers to posture tracking and distributed
fire alarms.

About the Speaker:

DK Arvind is a Reader in the School of Informatics at the University of Edinburgh,
Scotland, United Kingdom, and Visiting Professor in the EECS department, University
of California at Berkeley. He was previously for four years a Research Scientist in the
School of Computer Science, Carnegie-Mellon University, Pittsburgh, USA. He is the
founder Director and Principal Investigator of the Research Consortium in Speckled
Computing (www.specknet.org), a multidisciplinary grouping of computer scientists,
electronic engineers, electrochemists and physicists drawn from five universities, to
research the next generation of miniature 5mm cube specks. The Consortium has
attracted research funding in excess of US$9 million (2004-10) from the Scottish
Funding Council and the UK Engineering and Physical Sciences Research Council
(equivalent of the National Science Foundation in the US). In the past his research has
been funded by EPSRC, the US Office of Naval Research, Scottish Enterprise/Cadence
Design Systems, Sharp, Hitachi, Panasonic/Matsushita, Agilent, ARM and SUN
Microsystems. His research interests include the design, analysis and integration of
miniature networked embedded systems which combine sensing, processing and wireless
networking capabilities.

• The Speckled Computing Consortium brings together world-class expertise in
computer science, electronic engineering and physics to advance theory and applications
in speckled computing – a radical new concept in information technology which has the
potential to revolutionize the way we communicate and exchange information

Abstract: A specknet is a collection of autonomous specks which provides distributed
services: each speck is capable of sensing and processing the data under program control;
the specks themselves are connected as a mobile wireless network which processes
information in a distributed manner.

Specknets link the digital world of computers to the physical world of sensory data. A
network of wearable Orient specks, for example, is capable of tracking the orientation of
the body parts, or the position of the person in the environment, and this information can
be stored, manipulated and accessed remotely over the internet.
The tutorial will cover three aspects of Speckled Computing:
• Speck platforms, including the hardware and firmware
• Application of the on-body Orient wireless motion capture system in the healthcare,
animation and sports sectors
• The science underpinning the interpretation of sensor data in these applications.
The attendees will have access to the Orient motion capture system and will get hands-on
experience in interactive animation of virtual characters and in the real-time control of a
bipedal robot performing tasks such as walking, standing on one leg and waving the arms.

Target Audience: The tutorial will benefit the following audiences:

• Decision makers in strategic technology developments in the healthcare, digital media
and sports sectors
• Researchers in academia and industry with interests in wearable computing
• Application developers in wireless sensor networks, with particular interest in on-body
sensor-based wireless motion capture systems
• Doctoral students who wish to gain hands-on experience of programming wearable
wireless motion capture systems and exposure to algorithms for analysing and
interpreting the data

Agenda:
• Scanning of the application spaces in healthcare, digital media and sports sectors for
wearable wireless motion capture systems
• Detailed case studies of deployment in clinical gait analysis, 3-D animation,
biomechanics of golf swings
• Unsupervised learning algorithms for the classification of motion data
• Hands-on experience in 3-D animation and in programming behaviour of a bipedal
robot

Brief Bio of the Presenter:


DK Arvind is a Reader in the School of Informatics at the University of Edinburgh,
Scotland, United Kingdom, and CITRIS Visiting Professor at the University of California
at Berkeley (2007-11). He was previously for four years a Research Scientist in the
School of Computer Science, Carnegie-Mellon University, Pittsburgh, USA. He is the
founder Director and Principal Investigator of the Research Consortium in Speckled
Computing (www.specknet.org) – a multidisciplinary grouping of computer scientists,
electronic engineers, electrochemists and physicists drawn from five universities, to
research the next generation of miniature wireless sensor networks. The Consortium has
attracted research funding in excess of £5.2 million since 2004 from the Scottish
Funding Council and the UK Engineering and Physical Sciences Research Council
(equivalent of the National Science Foundation in the US). In the past his research has
been funded by EPSRC, the US Office of Naval Research, Scottish Enterprise/Cadence
Design Systems, Sharp, Hitachi, Panasonic/Matsushita, Agilent, ARM and SUN
Microsystems. His research interests include the design, analysis and integration of
miniature networked embedded systems which combine sensing, processing and wireless
networking capabilities.
