
ETICA Project

(GA number 230318)

Deliverable D.1.2

Emerging Technologies Report


Authors

Ikonen, Veikko
VTT

Kanerva, Minni
VTT

Kouri, Panu
VTT

Stahl, Bernd
DMU

Wakunuma, Kutoma
DMU

Quality Assurance:
                Name                  Date Submitted / Accepted
Review Chair    Jeroen Van den Hoven
Reviewer 1      Michael Rader
Reviewer 2      Philippe Goujon

Review history
Version of Document           Date Submitted / Accepted   Review Chair Comment
0.5 (outline)                 22.9.2009
0.6                           13.1.2010
0.7                           20.1.2010
0.8 (outline revisited)       03.03.2010
0.9 (incomplete appendices)   07.04.2010
1.0 (final)                   28.07.2010

Executive Summary
This deliverable describes the process of identifying emerging ICTs for the ETICA
project. It starts by explaining the approach, which is based on an analysis of
discourses on emerging ICTs. The structure and method of the discourse analysis,
which used a shared online database, are explained, and the findings are outlined.
The discourse analysis led to the identification of approximately 70 technologies, 100
application examples and 40 artefacts. On the basis of this analysis, 11 candidate
technologies of high relevance were identified. These technologies were then
described in more depth by constructing detailed, encyclopaedia-entry-like documents
called "meta-vignettes". These meta-vignettes describe a number of application areas
and examples for each technology and outline the core features of the technologies.
The following technologies were deemed to be of core importance and each described
in a meta-vignette:

Affective Computing

Ambient Intelligence

Artificial Intelligence

Bioelectronics

Cloud Computing

Future Internet

Human-machine symbiosis

Neuroelectronics

Quantum Computing

Robotics

Virtual / Augmented Reality


The meta-vignettes, which are available in full text in "Appendix F: Meta-Vignettes"
of this deliverable, cover the history of each technology and give several examples of
its intended use or application. On this basis the defining features of each technology
are extracted. These are the features that influence the way in which the technology
affects how humans interact with the world. The defining features, as well as the
examples, are required for the subsequent ethical analysis. In addition, the meta-
vignettes describe, where identified, the envisaged timeline for the development of the
technologies and the ethical, social, and legal issues these are expected to raise. They
also list some of the main sources and references.
In order to ensure robustness of the list of technologies, it was cross-checked and
validated using focus groups and independent foresight research. The finished
versions were also sent out to external experts for review.

The overall outcome of WP1 as captured in this deliverable is a robust and well-
supported summary of current discourses on high-level emerging ICTs. Due to the
nature of the area under investigation, it can claim neither completeness nor
predictive accuracy, but it provides a sound basis for the subsequent steps of the
ETICA project as well as for the resulting policy recommendations.

Table of Contents

EXECUTIVE SUMMARY ................................................................................ III

TABLE OF CONTENTS ................................................................................. V

1 INTRODUCTION ...................................................................................... 1

2 BACKGROUND ....................................................................................... 2

2.1 Aim of the Technology Identification .......................................................................................... 2

2.2 Principles of Data Analysis........................................................................................................... 2


2.2.1 Concept of Technology ......................................................................................................... 3

2.3 Emerging Technologies Workshop .............................................................................................. 4

3 DATA ANALYSIS AND FINDINGS .......................................................... 5

3.1 Steps of Data Analysis .................................................................................................................. 5

3.2 Documents Analysed ..................................................................................................................... 6

3.3 Validation of Data Analysis .......................................................................................................... 6

3.4 Limitations of the Approach ........................................................................................................ 6

4 EMERGING INFORMATION AND COMMUNICATION TECHNOLOGIES ....................... 8

4.1 Meta-vignettes ............................................................................................................................... 8

4.2 Construction of the List of Emerging Technologies ................................................................... 9

4.3 Validation of the List of Emerging Technologies ..................................................................... 11


4.3.1 Validation by Forecasting Research Review ....................................................................... 11
4.3.2 Validation by Focus Group ................................................................................................. 13
4.3.3 Consequences of Validation ................................................................................................ 14

4.4 Quality Control ........................................................................................................................... 14

4.5 Technical Implementation .......................................................................................................... 14

5 CONTEXT OF THE EMERGING TECHNOLOGIES WORK PACKAGE WITHIN ETICA ........... 15

6 APPENDICES ........................................................................................ 16

6.1 Appendix A: Sources Analysed .................................................................................................. 17

6.2 Appendix B: Critical Issues ........................................................................................................ 22


6.2.1 Social Impact ....................................................................................................................... 22
6.2.2 Issues of Potential Ethical Relevance .................................................................................. 22
6.2.3 Capabilities .......................................................................................................................... 24
6.2.4 Constraints ........................................................................................................................... 24

6.3 Appendix C: Findings of Data Analysis .................................................................................... 25


6.3.1 Emerging Technologies ....................................................................................................... 25
6.3.2 Application Examples ......................................................................................................... 27
6.3.3 Artefacts .............................................................................................................................. 29

6.4 Appendix D: List of Technologies from a Review of Forecasting Studies ............................. 31

6.5 Appendix E: List of Main Technologies - Meta-Vignettes ...................................................... 37

6.6 Appendix F: Meta-Vignettes ...................................................................................................... 42


6.6.1 Affective Computing ........................................................................................................... 42
6.6.2 Ambient Intelligence ........................................................................................................... 50
6.6.3 Artificial Intelligence .......................................................................................................... 57
6.6.4 Bioelectronics ...................................................................................................................... 64
6.6.5 Cloud Computing ................................................................................................................ 69
6.6.6 Future Internet ..................................................................................................................... 74
6.6.7 Human-Machine Symbiosis ................................................................................................ 85
6.6.8 Neuroelectronics .................................................................................................................. 91
6.6.9 Quantum Computing ........................................................................................................... 95
6.6.10 Robotics ........................................................................................................................ 105
6.6.11 Virtual/Augmented Reality ........................................................................................... 111
1 Introduction
Work Package (WP) 1 identifies emerging ICTs and likely areas of application of
these technologies.
This deliverable describes how the emerging ICTs were identified and how the
findings were validated, and it outlines the structure and content of the findings.
The deliverable builds on the heuristics and methodology report (D.1.1) as well as the
discussion and decisions by the consortium as noted in the reports on "Theoretical,
Methodological, and Collaborative Issues of Stage 1 of the ETICA Project" (D.5.4
and D.5.4 addendum)1.
The principles described in the earlier deliverables are briefly summarised in the
background section (section 2) below. Following the outline of the background, the
current document then discusses the data analysis and presents findings in section 3.
Section 4 then discusses how and why the data was used to develop a list of emerging
ICTs and the structure in which these are described. Section 5 briefly summarises the
context of the ETICA project in which the present deliverable needs to be understood.
Section 6 is dedicated to the appendices to the deliverable, which contain all results of
data analysis, syntheses, and validation of the list of technologies, as well as the
individual descriptions of the technologies (meta-vignettes).

1 All ETICA deliverables are available from the project website at: www.etica-project.eu.
2 Background
The overall aim of the ETICA project is to identify emerging ICTs and the ethical
issues these are likely to raise with a view to providing policy recommendations to the
EU as well as other policy makers.
The project is split into three main stages: identification, evaluation, and governance.
WP1 and the current deliverable are part of the first, the identification stage.

2.1 Aim of the Technology Identification


The aim of the identification stage is to identify emerging ICTs and their ethical
consequences. The present deliverable concentrates on the first half of this task,
namely the identification of emerging ICTs. According to the Description of Work,
the outcome of the identification stage was to be the "normative issues matrix", which
would summarise the findings of WP1 and WP2.

               technology 1   technology 2   technology 3   ...   technology m
application 1  issue 1,1      issue 1,2      ...            ...   issue 1,m
application 2  issue 2,1      ...
application 3  ...            ...
...            ...            ...
application n  issue n,1      ...            ...            ...   issue n,m

Table 1: Matrix identifying emerging normative issues in ICT

The main contribution of WP1 was to identify the technologies and application areas.
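For illustration only, the normative issues matrix of Table 1 can be sketched as a mapping from (application, technology) pairs to lists of issues. All names below are placeholders invented for this sketch, not actual ETICA findings:

```python
# Sketch of the normative issues matrix (Table 1): each cell is addressed
# by an (application, technology) pair and holds the normative issues
# identified for that combination. Names are illustrative placeholders.
technologies = ["technology 1", "technology 2", "technology 3"]
applications = ["application 1", "application 2"]

# Build an empty matrix: one issue list per cell.
matrix = {
    (app, tech): []
    for app in applications
    for tech in technologies
}

# Filling in one cell, as WP1/WP2 would during analysis.
matrix[("application 1", "technology 1")].append("issue 1,1")

# The number of cells grows multiplicatively with the two lists,
# which is what later made a full matrix impractical (section 3.4).
print(len(matrix))  # 6
```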

2.2 Principles of Data Analysis


Some of the conceptual and practical problems arising during the identification stage
have been discussed in previous deliverables (notably D.5.4 and D.5.4 addendum).
The consortium decided to answer the question of how to identify emerging ICTs by
undertaking a structured discourse analysis of a range of different sources on
emerging ICTs.
The analytical grid (see figure 1, below) used to analyse the sources was described
and justified in D.5.4 addendum.

Technologies (examples: future internet, quantum informatics, embedded
systems, autonomous ICT)
    - "is implemented in" -> Artefacts
    - "is demonstrated by" -> Application Examples
    - Artefacts and Application Examples are, in turn, implementations /
      applications of Technologies.

All entries (technology, application example, artefact) are defined by the
same items: id, name, description, technical system, field of application,
target audience, budget, time scale, source.

Examples of Artefacts: concurrent tera-device computing, wireless sensor
networks, intelligent vehicles, open systems reference architectures,
self-aware systems, self-powered autonomous nano-scale devices.

Examples of Application Examples: internet of things, trustworthy ICT,
adaptive systems for learning, service robots for elderly, quantum
cryptography.

All entries can additionally be characterised by Social Impacts, Critical
Issues, Capabilities, and Constraints, each defined by: id, name,
description, source.

Figure 1: ETICA analytical grid

The present deliverable describes how this analytical grid and the methodology linked
to it were put into practice and how it led to the identification of emerging ICTs.
It is important at this stage to reiterate one of the core conceptual problems that
ETICA had to contend with: the distinction between technology and application. In
the analytical grid, a third category is added, namely that of the
artefact. The problems of the definition of technologies, in particular technologies that
are not completely visible yet, have been discussed in D.5.4. The pragmatic definition
used by the consortium for the data analysis was that "technology" was used for high-
level socio-technical systems, "application examples" for clearly recognisable uses of
such technologies in delimited areas, and "artefacts" for the physical objects that
embody the technology. While this distinction was intuitively plausible, it proved to
be difficult to consistently apply throughout the data analysis.

2.2.1 Concept of Technology


During the analysis of forecasting texts, as well as during the discussion about which
technologies to include in the final list to be analysed and evaluated in more detail,
much time was spent determining what exactly was to be considered a technology. As
indicated earlier (see also D.5.4 and D.5.4 addendum), a technology in the sense used
here is a high-level system that affects the way humans interact with the world. This
means that one technology can in most cases comprise
numerous artefacts and be applied in many different situations. It needs to be
associated with a vision that embodies specific views of humans and their role in the
world.
An example may help explain this. One core technology that has been widely
described and researched is that of Ambient Intelligence (AmI). AmI is characterised
by a number of defining features such as embeddedness, interconnectedness,
invisibility, adaptivity, personalisation, and others (see AmI description in Appendix
F: Meta-Vignettes). This technology implies a view of the world, in which humans
require and desire support, are seamlessly connected and have an adaptive
environment. To realise this vision, numerous artefacts such as sensors, networks,
algorithms, etc., can be used. The technology does not depend on any one of these to
become reality.

2.3 Emerging Technologies Workshop


The structure of data analysis, identification of technologies and their description is
the result of a long collaborative process involving all members of the consortium. A
crucial step in this discussion was the first ETICA workshop, held in conjunction with
a General Assembly (GA) meeting in Tampere, Finland from 21st to 23rd September
2009. During this workshop the main issues of the identification of emerging ICTs
were discussed and the principal approach of the ETICA project was agreed upon.
Some results of these discussions can be found in D.5.4 and D.5.4 addendum. These
decisions were refined and developed during a number of subsequent meetings of the
Steering Committee (SC), notably a meeting in Leicester in October 2009 and one in
Namur in January 2010 where the analytical grid and detailed approach were defined.
This led to the overall approach and implementation as outlined in this deliverable.
The list of technologies as detailed in this deliverable was discussed, modified and
agreed during the ETICA General Assembly in April 2010 in Tarragona, Spain and
the Steering Committee meeting in May 2010 in Tampere, Finland.

3 Data Analysis and Findings
This section describes how the principles of data analysis as described above were
implemented and what results this led to.
In order to facilitate the collaborative analysis of the data, an online database was
built that allowed the collection of the relevant aspects outlined in the analytical
grid. This database was installed on the same server as the ETICA website and project
management system, although it is technically independent of these.
Access was given to all ETICA members involved in the analysis. The system can be
found here:
http://moriarty.tech.dmu.ac.uk/webapps/etica/login.php
Access can be made available for verification purposes via the Project Coordinator.

3.1 Steps of Data Analysis


The data analysis was done in four steps:
1. Upon the identification of a technology / application example / artefact, the
analyser first checked whether this particular item was already in the database.
If so, the extant entry could be modified or an additional entry could be made.
If not, a new item was added.
2. For the newly created item, a general description was to be provided. Features
of the item that could be provided were:
a. Technical System (e.g. ICT, biotech, nanotech)
b. Field of Application (e.g. ageing, automotive industry, environment)
c. Target Audience (e.g. children, consumers, scientists)
d. Budget (numerical, if available)
e. Time Scale (numerical, if available)
f. Source (see Appendix A: Sources Analysed).
3. Critical issues could be attributed to the item. These were distinguished into
four main types:
a. Social Impact (e.g. access, security, economic consequences)
b. Critical Issues (i.e. ethical issues, such as data protection, freedom,
employment)
c. Capabilities (e.g. connectivity, interoperability, miniaturisation)
d. Constraints (e.g. complexity, reliability, safety).
For each of these, there were a number of options that could be chosen.
In addition there was a possibility to enter free text to explain the exact
meaning. (See Appendix B: Critical Issues for detailed lists of
identified critical issues).
4. In the final step, each entry could be linked to other items in the database by
defining relationships. Three relationships could be chosen:
a. Is application of
b. Is condition of
c. Is similar to
Step 4 concluded the data analysis of the particular item allowing the researcher to
move to the next one to be found in the source text.
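The database structure implied by the analytical grid and the four steps above might be sketched as follows. This is an illustrative reconstruction only: all table and column names are assumptions made for this sketch, not the actual ETICA database schema.

```python
import sqlite3

# Illustrative sketch of the shared analysis database (assumed schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (
    id          INTEGER PRIMARY KEY,
    kind        TEXT CHECK (kind IN ('technology', 'application', 'artefact')),
    name        TEXT NOT NULL,
    description TEXT,
    technical_system     TEXT,  -- e.g. ICT, biotech, nanotech
    field_of_application TEXT,  -- e.g. ageing, automotive industry
    target_audience      TEXT,  -- e.g. children, consumers, scientists
    budget      REAL,           -- numerical, if available
    time_scale  TEXT,           -- numerical, if available
    source      TEXT            -- see Appendix A: Sources Analysed
);
CREATE TABLE critical_issue (
    id      INTEGER PRIMARY KEY,
    item_id INTEGER REFERENCES item(id),
    kind    TEXT CHECK (kind IN ('social_impact', 'critical_issue',
                                 'capability', 'constraint')),
    name    TEXT NOT NULL,
    note    TEXT                -- free text explaining the exact meaning
);
CREATE TABLE relationship (
    from_id INTEGER REFERENCES item(id),
    to_id   INTEGER REFERENCES item(id),
    kind    TEXT CHECK (kind IN ('is_application_of', 'is_condition_of',
                                 'is_similar_to'))
);
""")

# Steps 1-2: add a technology and an application example.
conn.execute("INSERT INTO item (id, kind, name) "
             "VALUES (1, 'technology', 'Ambient Intelligence')")
conn.execute("INSERT INTO item (id, kind, name) "
             "VALUES (2, 'application', 'service robots for elderly')")
# Step 3: attribute a critical (ethical) issue to the application.
conn.execute("INSERT INTO critical_issue (item_id, kind, name) "
             "VALUES (2, 'critical_issue', 'data protection')")
# Step 4: link the application to the technology it applies.
conn.execute("INSERT INTO relationship (from_id, to_id, kind) "
             "VALUES (2, 1, 'is_application_of')")

print(conn.execute("SELECT COUNT(*) FROM item").fetchone()[0])  # 2
```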

3.2 Documents Analysed


As outlined in D.5.4 addendum, the consortium decided to analyse views of emerging
ICTs originating from two main types of sources: governmental / funding sources and
scientific / implementation sources. These two types of sources were chosen on the
basis that they included or implied visions of technologies and their impact and
expected consequences on society. These aspects of the technology are crucial for the
ethical analysis undertaken by ETICA.
The list of sources was compiled collaboratively using a wiki in the project
management section of the project:
http://moriarty.tech.dmu.ac.uk:8080/pnet/wiki/ETICA/sources
A full list of sources analysed can be found in "Appendix A: Sources Analysed".

3.3 Validation of Data Analysis


The analysed documents varied greatly in style, purpose, length, detail and other
aspects. An important problem was therefore to ensure the consistency and
reproducibility of the findings. In order to avoid injecting bias into the data
analysis, it was decided to cross-validate the analysis. For this purpose
each work package analysed three different texts. This analysis was then reviewed by
and discussed with one of the other work packages. The results of these discussions
were again captured in a wiki:
http://moriarty.tech.dmu.ac.uk:8080/pnet/wiki/ETICA/data_analysis
It turned out that some of the foreseen problems did occur: for instance, there was a
lack of agreement on what exactly should count as a technology, application example,
or artefact. In addition, different individuals analysed the texts with differing
levels of granularity and detail.
Despite these differences in detail, the consortium found that the main items and
issues were identified in similar ways by different analysers. Given that it was
expected that the data analysis would include a large amount of redundancy (identical
technologies being discussed by a number of documents) and that further steps in the
project would evaluate and contextualise findings, it was decided that the approach
was sufficient to provide the results expected, namely an identification of
technologies, applications, and ethical issues as viewed by discourses on emerging
ICTs.
The findings of the data analysis can be found in "Appendix C: Findings of Data
Analysis". The appendix lists the technologies, application examples and artefacts that
were identified during data analysis.

3.4 Limitations of the Approach


The lists of findings arising from the data analysis demonstrate a significant problem
with the original idea of the identification stage. Given that at this stage 107
technologies and 70 applications had been identified, a matrix constructed from them
would contain 7,490 (107 × 70) fields. It would be practically impossible to analyse
all of these. More importantly, there is no guarantee that such a matrix would contain
all relevant combinations.
Furthermore, a more detailed look at the lists reveals that there is some overlap. In
addition, there are items on the artefacts list which are not covered elsewhere and
would be ignored if the project concentrated on technologies and applications only.
One possible way out of this would have been to decrease the level of detail of
technologies and applications. However, this would have lost the richness and detail
of the data analysis while not guaranteeing completeness of results either.
For these reasons it was impossible to follow the original idea as laid down in the
Description of Work. An alternative way had to be found that would still fulfil the
aim of the project, namely to identify emerging ICTs and their ethical issues with
a view to providing policy advice. This solution is discussed in the following section.

4 Emerging Information and Communication
Technologies
The core question for WP1 was how to translate the different activities that had taken
place so far with regard to the identification of emerging ICTs into an output that
would be conducive to the overall aims of the project while allowing a seamless
connection with subsequent work. The criteria for evaluating the different options were:
- Scholarly contribution
- Academic robustness
- Link to produced results
- Practical feasibility
During a series of discussions of the Steering Committee and the General Assembly
of ETICA, it was decided to condense the findings from the data analysis into a list of
central emerging technologies, which would be described in a way that would allow a
relevant ethical analysis.

4.1 Meta-vignettes
The technologies selected would be those emerging ICTs that had a high level of
generality and therefore a significant potential to affect the way
humans interact with their world. In other words, they should be the technologies that
are likely to be of practical and ethical relevance.
For each of these technologies it was decided to construct what was termed a "meta-
vignette". This term stands for an encyclopaedia-like entry that provides the reader
with sufficient information for their particular purpose, in our case for ethical
analysis. The term was chosen because it developed from the earlier idea to provide
"vignettes" or illustrative examples of the technologies. While these vignettes might
have covered a particular application example, the idea now is to move to a higher
level of abstraction and illustrate a high level technology. This move from the
concrete level of application or artefact to the higher level of socio-technical system is
represented by the use of the term "meta".
The initial source of the meta-vignettes is the data analysed from the source texts
and stored in the database. Given that the database cannot claim to be comprehensive,
the construction of the meta-vignettes relies on additional sources. After the decision
to include a particular technology in the list of technologies to be analysed, the
consortium member responsible for it therefore had to conduct an additional review of
the pertinent literature.
The meta-vignettes, in order to fulfil the requirement of giving a general introduction
to the technology at hand, follow this general structure:
- History of the technology
- Around five "cases". These are examples of important applications of the
technology from different fields of application (they are taken from the
database and literature review)
- Defining features as induced from the cases and applications. These
concentrate on emerging features
- Related technologies (from database and possibly other sources)
- Critical issues (social, ethical, legal, capacities, constraints) as raised in the
literature on the technology.
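For illustration, the general structure above could be captured as a simple record type. The class and field names below are assumptions made for this sketch, not part of the deliverable itself:

```python
from dataclasses import dataclass, field

# Illustrative record mirroring the meta-vignette structure listed above.
# Class and field names are assumptions, not ETICA's actual data model.
@dataclass
class MetaVignette:
    technology: str
    history: str = ""
    cases: list[str] = field(default_factory=list)          # around five cases
    defining_features: list[str] = field(default_factory=list)
    related_technologies: list[str] = field(default_factory=list)
    critical_issues: list[str] = field(default_factory=list)  # social, ethical, legal, ...

# Example entry using features named in section 2.2.1 for Ambient Intelligence.
mv = MetaVignette(
    technology="Ambient Intelligence",
    defining_features=["embeddedness", "interconnectedness", "invisibility",
                       "adaptivity", "personalisation"],
)
print(len(mv.defining_features))  # 5
```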
The meta-vignettes provide a good starting point for the subsequent work packages
(WPs). They go beyond the descriptions of technology that can be found in existing
publications on future research. The particular attention paid to defining features
provides the starting point for the subsequent ethical analysis. The term "defining
features" in the ETICA project stands for those aspects of a technology that affect the
relationship between technology and humans. These defining features are of crucial
importance because they represent the aspects of the respective technology that
determine its ethical relevance. In addition, the structure of the meta-vignettes allows
for their evaluation in WP 3 and a critical consideration of governance issues as
provided by WP 4. The actual meta-vignettes developed by the consortium under the
leadership of WP 1 can be found in "Appendix F: Meta-Vignettes" of this deliverable.

4.2 Construction of the List of Emerging Technologies


The meta-vignettes represent the core of the research and analysis activities of WP 1
and provide the basis of all further activities of the project. It is therefore of high
importance to ensure that appropriate choices are made with regards to the
technologies that are developed into meta-vignettes. Therefore, as stated earlier, one of
the activities undertaken by the project was to identify emerging technologies from
forecasting studies. This is highlighted in Appendix D: List of Technologies from a
Review of Forecasting Studies, which contains a list of emerging ICTs compiled
independently of the data analysis undertaken in the ETICA project and lists the
technologies extracted from forecasting publications.

Appendix D: List of Technologies from a Review of Forecasting Studies


This appendix contains a list of emerging ICTs that was compiled independently of
the data analysis undertaken in the ETICA project. The table below lists the
technologies extracted from forecasting publications. The sources from which these
technologies were extracted are provided in the subsequent table.

3-D Flat-Panel Displays 5


3D media 3
3-D Printing 5
Advanced Displays 2
Affective Computing 2
Ambient computing and communication Environments: 8
Ambient intelligence (13 / 626) 9
Artificial intelligence 1
Atomic-Scale Information Technologies 7
Augmented Reality 5
Automated and remote shopping systems; 10

Awareness and Feedback Technologies 8
Behavioural Economics 5
Biofuels and Bio-Based Chemicals 4
Biogerontechnology 4
Bio-ICT Synergies 7
Calm computing (0 / 3) 9
Calm technology (0 / 13) 9
Cash-free payment systems; 10
c-Etiquette 6
Clean Coal Technologies 4
Cloud Computing 5
Cognitive architectures 8
Cognitive systems 3
Complex Systems 7
Context awareness (8126 / 1251) 9
Context Delivery Architecture 5
Cooperativity 6
Corporate Blogging 5
C-Pod (collaboration-supporting-device) 6
Crowd and Swarm Based Interaction 8
Device interoperability (215 / 57) 9
Distributed computing and wireless (243 / 378) 9
E-Book Readers 5
Electronic Paper 5
Embedded systems 3
Embedded Systems 8
Emotion Processing 8
Energy Storage Materials 4
Evolveable Systems 7
Exploitation of complexity 1
Future Internet 3
Globalisation 8
Green IT 5
Grid Computing 2
GRID computing 1
High-Temperature Superconductors 2
Holographic Storage 2
Home Health Monitoring 5
Human Augmentation 5
Human Behavioural Modelling 2
Human Computer Confluence 7
Human tissue engineering. 1
Human-Machine Interaction 8
Human-Machine Interfaces 2
Humanoid Robots 8
Hybrid Symmetric Interaction 8

ICT-bio-nano convergence 3
Idea Management 5
Integration in components and systems 3
Intelligent and Cognitive Systems 7
Intelligent and Cognitive Systems: 8
Intelligent transport systems 10
Interconnects 7
Interference solution 1
Internet II 2
Internet of Things 2
Internet of things 3
Internet of things (907 / 53) 9
Internet TV 5
Lab-on-a-Chip/Microreactors 2
Language and Communication 8
Location based services (1941 / 1585) 9
Location-Aware Applications 5
Location-based services using ambient intelligence 10
Mechatronics 1
Mesh Networks: Sensor 5
Micro Energy Sources 2
Microactuators 2
Microblogging 5
Miniaturisation in components 3
Miniaturization 2
Mixed Reality 2
Mobile Robots 5
Molecular Electronics 2
Molecular engineering understanding thought 1
Molecular Manufacturing 2
Monitoring and Control of Large Systems 8
Multi-modal interfaces 3
Nano-Bio Interfacing 7
Nano-electromechanical Systems 7
Nano-Electronics and Nanotechnologies 7
Nanofabrication 1
Nanomachining 2
Nanorobots 2
Networked Societies of Artefacts 7
Neuroscience/technology 2
NFC (4994 / 136) 9
Online Video 5
Optical Data Processing 2
Organic electronics 3
Other 3
Over-the-Air Mobile Phone Payment Systems, Developed Markets 5

Pay-as-you-go systems for transport, insurance etc; 10
Perception-action coupling 8
Perpetual Energy (Space Electricity) 2
Personal Factory 2
Personal/Service Robotics 2
Pervasive Computing and Communications: 7
Pharma 1
Photonics 3
Plug-in electric Vehicles 2
Post-CMOS devices and storage 7
Printed Electronics 2
Public Virtual Worlds 5
Quantum Computers 2
Quantum computing 1
Quantum Computing 5
Quantum Energy 2
Quantum Information Processing and Communication. 7
Rationale for the Humane City 8
Realization and User Experience of Privacy and Trust 8
Recycling Technologies 2
Remote tele-services; and 10
RFID (Case/Pallet) 5
Robot Ensembles 8
Robotics 3
Security 8
Security of ICT 3
Security, Dependability and Trust 7
Self Organization in Socially Aware Systems 8
Self-X Systems 8
Semantic representations 8
Semantic-based systems 3
Sensor fusion (1127 / 1656) 9
Serious games 6
Service Robotics 4
Small devices for energy 1
SOA 5
Social Network Analysis 5
Social Networks and Collective Intelligence 8
Social Software Suites 5
Software as a service 3
Software intensive systems and new computing paradigms: 8
Software-Intensive Systems 7
Software-Intensive Systems. 8
Space-Time Dispersed Interfaces 8
Spatial and Embodied Smartness 8
Speech Recognition 5

Spintronics 2
storage and production 1
Surface Computers 5
Systemability 7
Tablet PC 5
Tangible Interaction and Implicit vs. Explicit Interaction 8
Technologies for independent living including embedded sensors; 10
Teleportation 2
Tera-Device Computing 7
The growing use of biometrics for health and security purposes; 10
The Internet of Things 4
The management of distribution and freight systems 10
The Scaling Issue 8
The virtualisation of work. 10
Thin-film Solar Cell Technology 2
Transparency and security of supplies, for example, food security and traceability 10
Ubiquitous (or pervasive) computing (1172 / 8454) 9
Universal Translator 2
Unlimited Connectivity 8
Video Search 5
Video Telepresence 5
Virtual Reality 2
Web 2.0 5
Web 2.0 6
Wikis 5
Wireless Power 5
Wireless sensor network (841 / 17142) 9
Wireless sensors networks 1
Table 4: Technologies Identified from Forecasting Studies

Sources

1. IST-2001-37627 FISTERA - Thematic Network on Foresight on Information Society Technologies in the European Research Area: WP 1 - Review and Analysis of National Foresight. D1.3 - Conclusions from the Synthesis of National Foresight Studies: A Summary (2005). http://fistera.jrc.ec.europa.eu/docs/D13final.pdf

2. SRI Consulting Business Intelligence: A Reality Check: Technology to Business - Understanding the Implications of Change. UBI Summit 2009, Helsinki. SRIC-BI: Current Disruptive and Next-Generation Technologies (2009). http://oske-net-bin.directo.fi/@Bin/acfed95d3d937ee66ce38c3d484cc9d9/1270283598/application/pdf/132013/SRIC-BI.pdf

3. European Commission: Report on the European Commission's Public On-line Consultation "Shaping the ICT research and innovation agenda for the next decade" (2009). http://ec.europa.eu/information_society/tl/research/key_docs/documents/report_public_consultation.pdf

4. The National Intelligence Council: Disruptive Civil Technologies: Six Technologies with Potential Impacts on US Interests out to 2025 (2008). http://www.dni.gov/nic/confreports_disruptive_tech.html

5. Gartner: Hype Cycle of Emerging Technologies (2009). http://www.gartner.com/it/page.jsp?id=1124212

6. Collaboration@Work Expert Group: "Future and Emerging Technologies and Paradigms for Collaborative Working Environments", 5th Expert Group Report (July 2006). http://www.ami-communities.eu/www.amiatwork.com/ec.europa.eu/information_society/activities/atwork/hot_news/publications/documents/experts_group_report_5.pdf

7. Coordination Action BEYOND-THE-HORIZON project: Beyond the Horizon: Anticipating Future and Emerging Information Society Technologies (2006). http://beyond-the-horizon.ics.forth.gr/central.aspx?sId=29I88I204I323I118338

8. INTERLINK - International Cooperation activities in future and emerging ICTs: Deliverable D4.2, Road-Mapping Research in Ambient Computing and Communication Environments (2009). http://interlink.ics.forth.gr/central.aspx?ReturnUrl=%2fDefault.aspx and http://interlink.ics.forth.gr/staticPages/Files/InterLink_D4.1_V2.pdf

9. Tekes - the Finnish Funding Agency for Technology and Innovation: Tekes - Innovaatiomaisemat UBICOM (2009). http://www.innovaatiomaisemat.fi/wiki/rdi/index.php/Toiminnot:FileGuardPage/UbicomInnovaatiomaisemat.pdf

10. Forfás: Sharing Our Future: Ireland 2025 - Strategic Policy Requirements for Enterprise Development (2009). http://www.forfas.ie/publication/search.jsp?ft=/publications/2009/title,4403,en.php

Table 5: Sources of List of Technologies from Forecasting Research

Appendix E: List of Main Technologies - Meta-Vignettes") which was derived from
the list of technologies that emerged from the data analysis. These technologies were
categorised and compared, which led to an initial list of high-level technologies. In
this first step it was decided to include a large number of technologies and to err on
the side of caution, in order to avoid overlooking important technologies. This led to
an initial list that contained a certain amount of overlap and redundancy. The list was
then revised and each of the technologies it contained was described in a meta-
vignette.
The development of the list of meta-vignettes whose current version is available in
"Appendix E: List of Main Technologies - Meta-Vignettes" was an iterative process.
The initial list was reviewed by the consortium on several occasions. Following an
initial validation (see next section), the list was discussed in detail during the GA
meeting on 14 April 2010 in Tarragona and the SC meeting on 27th and 28th May
2010. As a result of these meetings, several earlier candidates were eliminated
because they were either constituent parts of other technologies or because they were
tools or application examples, not technologies in the sense defined earlier. These
included the following:

Original technology and reason for removing it from the list:

Nano-ICT: Despite numerous references to nano-ICT, there was no unifying feature
(apart from the nano-scale) that covered all nano-ICTs. Nano-ICTs can be seen at
work in numerous other technologies but were not covered by a unifying vision that
would render them suitable as a technology in their own right.

Surveillance technologies: Surveillance is a widely referenced application of many
ICTs. Many of the technologies found in ETICA have significant potential to serve as
means of surveillance. However, surveillance is an application area rather than a
technology in the ETICA meaning of the term.

Personal health information systems: Similar to the previous entry on surveillance
technologies, personal health information systems are a field of application which can
employ numerous other technologies but does not imply a view of the way humans
interact with the world.

Verified software: Verified software would enable or support many ICT applications
and technologies but is only a supporting tool of these.

Table 2: Technologies rejected because they did not fit the ETICA definition of the term

Where relevant, the application examples and artefacts identified during the data
analysis were also compared with the technologies to ensure that no relevant high-
level technologies had been overlooked in the list of technologies. The meta-vignettes
contain a section that offers space to explain a technology's relationship with other
technologies, which also allows for the determination of relationships between
artefacts and technologies.
In a number of instances technologies were removed from the original list because it
became clear during further analysis that they were either sub-technologies of others
or could be used synonymously.

Original technology and reason for removing it from the list:

Embedded intelligence: Covered by Artificial Intelligence and Ambient Intelligence

Smart Environments: Covered by Ambient Intelligence

Grid Computing: Synonymous with Cloud Computing

Wearable Computing: An aspect of Ambient Intelligence

Cognitive Systems: Covered by Artificial Intelligence and Robotics

Table 3: Technologies rejected because they overlapped with other technologies

Practitioners and experts in any of the fields / technologies / areas that were
eliminated from the list might object that they cover unique aspects that are not
covered in any of the remaining technologies. The ETICA consortium concedes this
possibility but would reply that in the spirit of the project, this is not necessarily a
serious problem. ETICA set out to explore plausible views of ICTs in the medium
term future. It aims to contribute to societal discourses around these technologies and
provide policy makers with a better foundation for decision making. As a
consequence, it will be important for ETICA to correctly represent what is currently
seen as possible futures. The analysis of the remaining technologies shows that they
overlap to some degree and that they often raise similar predictable ethical problems.
Leaving out individual technologies, even if they may become socially relevant, is
therefore no particular problem as long as the overall project can raise awareness of
the ethical issues they are likely to raise, albeit in different contexts.
While this approach is based on the general underpinnings and methodological
choices of the project, it has to contend with the fallibility of a data analysis that
depends on the choice of sources. To put it differently, there is no guarantee that this
approach will not overlook a core technology already being discussed. It was
therefore necessary to validate the list of technologies using independent sources.

4.3 Validation of the List of Emerging Technologies


In order to ensure the validity of the list of meta-vignettes, two different methods of
validation were employed: a review of forecasting research and focus groups.

4.3.1 Validation by Forecasting Research Review


The ETICA project did not rely heavily on forecasting and other types of future-
oriented research. As outlined above, it was felt that governmental and research
sources would provide a more reliable picture of emerging ICTs. Such sources are
tied to research and development activities and are therefore less likely to be
speculative than other future-oriented research, which can be motivated by aims to
raise attention or create markets.
However, the project did review forecasting research in the area of ICT, as explained
in deliverable D.1.1. The results of this review were synthesised into a list of
emerging ICTs explored by forecasting studies. This list (see "Appendix D: List of
Technologies from a Review of Forecasting Studies") was compared with the list of
technologies to ensure that no major and relevant technologies had been overlooked.
Figure 2 represents an attempt to map some of the discussion in forecasting research.
It shows the complexity of the discourse and the overlapping nature of technologies,
applications, and consequences. This visual analysis was eventually abandoned in
favour of a more manageable list of emerging ICTs.
This initial type of validation ensured that the results of the ETICA project were
compatible with the wider discourse of future-oriented ICT research.

Figure 2: Map of Forecasting Research on ICT
4.3.2 Validation by Focus Group
The second type of validation of the list of meta-vignettes was conducted using focus
groups. A set of focus groups is run by the consortium alongside the entire project.
The idea of the focus groups was to include ordinary members of the public in the
research process in order to understand their experiences of technologies and how
they are affected by them. The first set of focus groups concentrated on the question
of what end users perceive to be emerging ICTs.
A detailed report along with transcripts of the first focus group can be found in D.5.7
"Focus Group Report", to be submitted in month 22 (February 2011). As becomes
clear from that report, the focus group had only limited value as a tool for validating
the list of emerging technologies. The reasons for this are twofold:
i) at the time of the discussion, the list of technologies was still under
consideration, and
ii) the discussants had limited knowledge of emerging technologies.
Because of this limited knowledge, participants felt that their main role as far as
future and emerging technologies were concerned was that of passive recipients. This
is demonstrated in the following quotes from the discussions:
There are loads of emerging technologies that will come in the next 10 to 20
years. Obviously, at this point we don’t even know about them so we can’t
even think about what might be down the road. But what you buy today as in
say a TV becomes obsolete tomorrow because something better comes out. I
mean technology moves so fast that your work environment would be
technology based to give you more leisure time, that’s what I think. (TT)
In much the same line of thought another participant stated the following:
Obviously you have got to have a great understanding of that technology that
is being passed to you so then you can relate it to laymen or in simple terms to
make sure that other people understand that. (OP)
Despite taking such a position, some participants did identify some emerging
technologies that they felt would be available to every home in the not too distant
future. These were 3D and virtual reality technologies.
The focus group discussions provided further value through insights into the
participants' understanding of future and emerging technologies. In the participants'
view, technologies were about:
i. growth
ii. efficiency
iii. knowledge development
Discussants expanded on the above by outlining areas where they thought future
technologies would have a positive impact. These included:

i. Health care
ii. Climate change
iii. Entertainment and Leisure
In considering the insight provided by the focus groups, it can be concluded that
although the groups provided only an indirect and imprecise validation of the list of
meta-vignettes, their contribution is nonetheless valuable as an account of how lay
people view emerging technologies.

4.3.3 Consequences of Validation


As a result of the different validation steps, ETICA arrived at a list of technologies for
which meta-vignettes were to be developed, which would then be analysed from an
ethical perspective. At the time of submission of this deliverable, this list was
regarded as sufficiently stable by the consortium.
It is, however, in the nature of research on emerging ICTs that new developments will
continue to arise and a list such as the present one is of a temporary nature. As the
ETICA project aims to give timely policy advice, further data analysis will continue
to be undertaken, which may lead to a modification of the list of technologies.

4.4 Quality Control


The majority of the work was done by VTT, the sole member of WP1. However,
significant aspects of the data analysis, the synthesis of technologies and the
construction of the meta-vignettes were done by other consortium members. This had
the advantage that all steps of the work were cross-validated and peer reviewed.
The final quality check of the meta-vignettes, the central outcome of the work
package, was done by WP2, TUD. This was agreed because TUD needed to undertake
the ethical analysis of the meta-vignettes and therefore had to be satisfied with their
quality. All steering committee members constructed at least one meta-vignette,
which was reviewed and accepted by WP2.
In addition to this internal quality control by the consortium, all meta-vignettes were
sent to experts in the respective technical area to ensure correctness. Given that most
of the consortium members are not technologists, it was important to confirm that the
meta-vignettes did indeed reflect the state of the art of the discussion.

4.5 Technical Implementation


In order to ensure consistency of the entries and transparency of their development,
and to facilitate collaboration and review, the list of technologies to be developed into
meta-vignettes was captured in a wiki accessible on the internal project management
website at:
http://moriarty.tech.dmu.ac.uk:8080/pnet/wiki/ETICA/technologies
This wiki reflected the dynamic of the decision process with regards to the
technologies to be included in further analysis and was linked to the individual
documents that contained the meta-vignettes. These were also centrally collected on
the project management website and accessible to all project members. The direct link
to the folder is:
http://moriarty.tech.dmu.ac.uk:8080/pnet/document/TraverseFolderProcessing.jsp?id=
452981&module=10

5 Context of the Emerging Technologies Work
Package within ETICA
The identification of emerging ICTs is the first step in the sequence of the ETICA
project research. As described above, it is specifically geared to lend itself to an
ethical analysis in the subsequent step by WP 2. It will also provide a sound basis for
the later evaluation and ranking of issues and a starting point for considerations of
governance.
Due to the rapid changes in the field, the findings of WP1 are likely to become
outdated quickly; the methodology for identifying the technologies, however, remains
current. For this reason the actual main findings, namely the list of meta-vignettes,
were placed in an appendix of this deliverable. The consortium had to make a choice
and agreed on the list of technologies in order to allow the further research steps to be
undertaken. At the same time, environmental scanning and updating of possible data
sources will continue. As a result, there may be developments and
further technologies that are not reflected here. In order to address this problem it is
planned to keep an updated version of the findings including the full text of meta-
vignettes on the project website.

6 Appendices

6.1 Appendix A: Sources Analysed
Date: 19.03.2010

1. RAND Corporation: The Global Technology Revolution 2020, In-Depth Analyses: Bio/Nano/Materials/Information Trends, Drivers, Barriers, and Social Implications. http://www.rand.org/pubs/technical_reports/2006/RAND_TR303.pdf

2. NICTA: NICTA 2008 Research Report: From imagination to impact. http://www.nicta.com.au/__data/assets/pdf_file/0003/19362/2008NICTA_ResearchReport.pdf

3. Campolargo, M. & da Silva, J.: Future Internet Research in Europe: Drivers and Expectation. In: ETSI (ed.): ICT Shaping the World: A Scientific View

4. Philip Ball: Champing at the bits. Nature Vol 440, 23 March 2006

5. Declan Butler: Everything, Everywhere. Nature Vol 440, 23 March 2006

6. Roco, Mihail C., Bainbridge, William Sims (eds): Converging Technologies for Improving Human Performance. http://www.wtec.org/ConvergingTechnologies/1/NBIC_report.pdf

7. Richard Harper et al: Being Human: Human-Computer Interaction in the year 2020. Microsoft Research: http://research.microsoft.com/en-us/um/cambridge/projects/hci2020/downloads/BeingHuman_A4.pdf

8. Roco, Mihail C., Bainbridge, William Sims (eds): Converging Technologies for Improving Human Performance. http://www.wtec.org/ConvergingTechnologies/1/NBIC_report.pdf

9. MFG Baden-Wuerttemberg mbH: Creative Regions. http://www.lets-create.eu/fileadmin/_create/downloads/CReATE_global_synthesis_report_final.pdf

10. VISION 2025 Taskforce, Korean Government: VISION 2025: Korea's Long-term Plan for Science and Technology Development. http://www.inovasyon.org/pdf/Korea.Vision2025.pdf

11. Cerf, V.: An Internet for Everyone and Everything. In: ETSI (eds.): ICT Shaping the World

12. Siemens: Picture of the Future. http://w1.siemens.com/innovation/en/strategie/results_future_study/information_communications.htm

13. Tokyo Institute of Technology: Challenges in the Future that is waiting for a solution. http://www.iri.titech.ac.jp/english/iri/pdf/pamph_02.pdf

14. European Commission: The Future of the Internet: A Compendium of European Projects on ICT Research Supported by the EU 7th Framework Programme for RTD. http://www.future-internet.eu/fileadmin/documents/reports/FI_Rep_final__281108_.pdf and ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/ch1-g848-280-future-internet_en.pdf

15. Alexander Huw Arnall: Future Technologies, Today's Choices. http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf

16. EPoSS community: Strategic Research Agenda of The European Technology Platform on Smart Systems Integration. http://www.smart-systems-integration.org/public/documents/publications

17. FIDIS consortium (Mark Gasson and Kevin Warwick, eds.): FIDIS - Future of Identity in the Information Society; D12.1: Study On Emerging AmI Technologies. http://www.fidis.net/resources/deliverables/hightechid/#c1869

18. Masao Takeuchi: Science and Technology Trends. http://www.nistep.go.jp/achiev/ftx/eng/stfc/stt032e/qr32pdf/STTqr3202.pdf

19. John Gill: Ambient intelligence - paving the way. http://www.tiresias.org/cost219ter/ambient_intelligence/index.htm

20. Jan Gerrit Schuurman, Ferial Moelaert El-Hadidy, André Krom, Bart Walhout: Ambient Intelligence: Viable future or dangerous illusion? Rathenau Institute: http://www.rathenau.nl/files/Ambient%20Intelligence_ENG.pdf

21. Holtmannspötter, D., Rijkers-Defrasne, S., Glauner, C., Korte, S., Zweck, A.: Aktuelle Technologieprognosen im internationalen Vergleich: Übersichtsstudie. http://www.vditz.de/uploads/media/Aktuelle_Technologieprognosen_im_internationalen_Vergleich.pdf

22. Beckert, B., Bluemel, C., Friedewald, M., Thielmann, A.: R&D Trends in Converging Technologies. Appendix A of CONTECS Consortium: Converging Technologies and their impact on the Social Sciences and Humanities. Karlsruhe 2008

23. Ofcom: Tomorrow's wireless world. http://stakeholders.ofcom.org.uk/binaries/research/technology-research/randd0708.pdf

24. John Kavanagh & Wendy Hall, UK Computing Research Committee: Grand Challenges in Computing Research Conference 2008. http://www.ukcrc.org.uk/press/news/challenge08/gccr08final.cfm?type=pdf

25. European Technology Assessment Group: Technology Assessment on Converging Technologies. http://www.europarl.europa.eu/stoa/publications/studies/stoa183_en.pdf

26. Federal Ministry of Education and Research (BMBF): ICT 2020 - Research for Innovations. http://www.bmbf.de/pot/download.php/M:0+ICT+2020/~DOM;/pub/ict_2020.pdf

27. European Commission: The Future of the Internet: Report from the National ICT Research Directors Working Group on Future Internet (FI). http://www.future-internet.eu/fileadmin/documents/reports/FI_Rep_final__281108_.pdf
6.2 Appendix B: Critical Issues
This appendix contains the four lists of critical issues identified during the data analysis:

6.2.1 Social Impact


ID Title
1060 Access
541 Automation
640 Autonomy
770 Cheaper and fast drug development
660 Cognition
540 Comfort
923 Connectivity
105 Democracy
619 Dependability
52 Dignity
286 Economic consequences
588 Education
106 Equality
117 Freedom
108 Freedom of Expression
337 Health
53 Human Improvement
54 Identity
183 Information society
109 Knowledge
620 Liberty
955 Loneliness
111 Other
840 Productivity
1148 Quality of life
466 Safety
110 Security
618 Trust

6.2.2 Issues of Potential Ethical Relevance


ID Title
625 Authentication
59 Autonomy

60 Confidentiality
621 Data handling/management
79 Data Protection
80 Discrimination
84 Education
85 Employment
545 Environment
61 Equality
62 Freedom
1509 Freedom of speech
86 Gender Issues
338 Health
81 Health & Safety
63 Human Dignity
64 Identity
290 Inclusion
184 Information society
82 Intellectual Property
65 Justice
66 N/A
78 Other
67 Precautionary Principle
68 Privacy
529 Quality of Life
69 Reliability
70 Respect
71 Responsibility
467 Safety
72 Security
87 Skills / Knowledge
73 Surveillance
74 Sustainability
748 Training
75 Transparency
76 Trust
77 Well-being

6.2.3 Capabilities
ID Title
539 Automation
121 Connectivity
458 Democracy
339 Health
185 Information society
1161 Interoperability
112 Invisibility
533 Marks Test
113 Miniaturisation
125 Other
118 Persuasiveness
114 Proactivity
176 Quantum simulations
561 Real-time feedback
119 Remote Access
468 Safety
123 Scale
115 Seamlessness
850 Sensory Perception
116 Ubiquity
120 User Friendliness
645 Wearability

6.2.4 Constraints
ID Title
563 Available technology
178 Complexity
459 Democracy
192 Financial
340 Health
186 Information society
1514 Other
1423 Reliability
469 Safety

6.3 Appendix C: Findings of Data Analysis
The following subsections present the findings from the data analysis. They rely on the
analysis tools provided in the shared database. While only the core findings are listed
here, it should be noted that the database allowed searching for any combination of
features and aspects.
The following lists contain the technologies, application examples and artefacts found during
the data analysis, presented in alphabetical order:
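As an illustration of the kind of combined search the database supported, the sketch below filters technologies by the critical issues recorded against them. The records, field names and issue labels are invented for illustration only and do not reproduce the actual schema or content of the ETICA database.

```python
# Illustrative sketch only: the records and field names are assumptions,
# not the actual ETICA database schema.

# Each record links one analysed technology to the critical issues
# identified for it during the data analysis.
records = [
    {"technology": "Ambient Intelligence", "issues": {"Privacy", "Autonomy"}},
    {"technology": "Cloud Computing", "issues": {"Privacy", "Security"}},
    {"technology": "Robotics", "issues": {"Safety", "Autonomy"}},
]

def search(records, required_issues):
    """Return technologies whose recorded issues include all required ones."""
    required = set(required_issues)
    return [r["technology"] for r in records if required <= r["issues"]]

print(search(records, {"Privacy"}))              # technologies raising Privacy
print(search(records, {"Privacy", "Security"}))  # raising both issues at once
```

Searching for several issues at once simply requires all of them to be present in a record, which is the set-inclusion test `required <= r["issues"]`.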

6.3.1 Emerging Technologies


ID Name
7 3D Internet
61 Adaptive systems
69 Affective computing
27 Ambient
28 Ambient Intelligence
12 artificial intelligence
8 Automation
68 Autonomous sensor networks
46 Biochips
45 Bioelectronics
52 Bioinformatics and computational biology
56 biological computer
37 Biological Language Modelling
53 Biosensors
40 Context-Aware Wireless Connections
4 Converging Technologies for Improving Human Performance
21 Cyber Life Technology
35 Data Mining
9 digital distribution
10 Digital distribution
51 Digitisation of History
67 Embedded systems
65 Extended reality
19 Future computer technologies
2 Future Internet
13 Future Internet
72 Future Internet
70 Grid computing
30 high-frequency wireless technology
17 human sensitivity gene interfaces

23 Human-friendly information processing technology
15 Improved public transport
25 Info-protection and security
22 Intelligent multimedia contents technology
26 Intelligent Robots
29 Internet of Things
59 Invasive interfaces
11 Mobility and interoperability
44 Moore's Law
54 Nano-electronics
55 Nanophotonics
57 Nanorobots
14 Nanotechnology
47 Neuro/Brain Enhancement
63 Neuroinformatrics and Computational Neuroscience
20 Next-generation telecommunications network
58 Non-invasive interfaces
38 Optimising and Modelling in Constraints
66 Organic electronics
64 Pattern recognition
16 Personal Health Information system
48 Physical enhancement and biomedicine
1 Quantum computing
32 Recycling
33 Recycling
71 Robotics
3 Smart Dust
39 Special equipment for production of integrated circuits (<45nm)
36 Spectral Imaging
34 Surveillance
49 Ubiquitous Computing
50 Verified Software
18 Very Large Scale Integrated semi-conductor technology
62 Virtual and augmented reality
24 Virtual Reality
31 Wearable Computing
60 Wireless sensor networks

6.3.2 Application Examples
ID Name
9 3D Internet
45 Aircraft of the 21st Century
52 Ambient Intelligence and Health Care
35 ambient intelligence for disabled persons
61 Application of biologically inspired complex adaptive systems
54 Application of ICT Systems to Increase Airport Capacity
42 Artificial Brains
81 Assistive technologies for the disabled
90 Aviation
91 Aviation
58 Basic right to free access to the information infrastructure
102 BCIs for identification
103 BCIs for profiling
96 biological computer
59 Biometrical recognition systems
79 Body Area Network
48 Brain-Machine Interface
41 Brain-Machine Interface Via A Neurovascular Approach
87 Buses
49 Data Linkage and Threat Anticipation Tools
51 Data Linkage and Threat Anticipation Tools
98 Data transfer
67 Decision support systems for healthcare workers
63 Development of CPU Chips
60 Devices for video conferencing
68 Distant diagnosis and distant therapy
88 E-ticketing
22 Electronic Government
27 Emotional kitty
31 Full bodied smart garments
56 Fully automatic factories with no human workers
36 GPS
15 Health monitoring device
46 High Performance War fighter
24 High-frequency wireless technology
70 Highly specific molecular imagery
25 HPI

21 Human-friendly information processing technology
57 Hybrid optical data networks
30 ICT and education
104 ICT implants
53 Improvements in transport and traffic through ICT
43 Improving Quality of Life of Disabled People Using Converging Technologies
99 Initiative Automotive Electronics
69 Injection of nanorobots into blood circulation
65 Integrated photononic lab on a chip
55 Intelligent buildings and dwellings
23 Intelligent Integrated Manufacturing Systems
105 Intelligent simulation system
93 Intermodal e-ticketing
19 mach 3 speed plane
62 Magnetic data storage
92 Maritime
3 Medical monitoring devices
17 Mobile conference
4 Mobile devices to interact with objects in the real world
73 Mobile wireless technologies in tourism
75 My Daily Assistant
78 My Life Manager
64 Nanoscale Bioelectronics and electronic implants
11 Neuro-computers
80 Neuro/Brain Enhancement
95 Neuroimaging
32 Next generation Mobile WiMax network
47 Non-Drug Treatments for Enhancement of Human Performance
94 Pattern-recognition technologies for language and speech recognition
5 Personal Area Sensory and Social Communication
29 Pet robots
50 Plant disease and pest infestation detection
86 Railways
101 RFID
89 Roads
106 Robot teams
28 Robot vacuum cleaner
12 Robotic webcam
39 Scenario on ambient intelligence

84 Scenario on assisted living
38 Screens
13 SenseCam
72 Sensor Network
97 Sensors for biomedical purposes
76 Sensors monitoring body variables
40 Sensory Replacement
34 Sensory Replacement and Sensory Substitution
6 Service robots
10 Service-oriented cyber-businesses
1 Smart dust to monitor glacier dynamics
37 Smart tags
2 Socio-Tech
71 Space based earth and environment monitoring
20 super-computers with sensory perception
33 Surveillance, facial recognition
44 The Communicator
7 The Consequences of Fully Understanding the Brain
77 The Little Acrobat
14 Ubi-learning
100 ubiquitous
18 Unwanted Commercials
66 Use of ubiquitous computing in healthcare
8 User-Interface Olympics
16 Vehicles of the future
82 Wearable brain imaging tool.
85 Wearable computing for healthcare
26 Wearable computing for sport

6.3.3 Artefacts
ID Name
7 3D video processing
18 Aircraft of the 21st Century (Vision)
2 Artificial Intelligence chips
39 Artificial neural networks
33 Biodevices and artificial organs
38 Brain-computer interfaces
32 Brain-Machine Interface

31 Deep Brain Stimulation
26 Deep brain stimulators.
12 Displays
27 Exoskeleton
25 External memory extensions
24 Gene Chips
16 Gesture recognition
23 Graphics technology for embedded displays
19 High Performance War fighter
8 High-speed wireless data communication
20 Human-systems integration for optimal decision making
30 Humanoid Robot
10 Intelligence Microsensors for Artificial Sensory Systems
13 Interactive tabletop
3 IPv6
11 Low-cost traffic light sensors
6 Optics/bio-computer
4 Portable multi-media terminals
22 Processor Development according to ITRS
35 RFID
28 Robotic swarms
34 Robots with sense of touch
17 Screens
36 Sensor systems
37 Software agents
15 Surveillance Data Mining Techniques
21 Swarming autonomous multi-agent systems
1 Tangible interfaces
5 terabytes (10^12) flops computers
29 Virtual Plant
14 Wearable video processing
9 Wireless Network Sensors

6.4 Appendix D: List of Technologies from a Review of Forecasting
Studies
This appendix contains a list of emerging ICTs that was compiled independently of the data
analysis undertaken in the ETICA project. The table below lists the technologies extracted
from forecasting publications. The sources from which these technologies were extracted are
provided in the subsequent table.

3-D Flat-Panel Displays 5


3D media 3
3-D Printing 5
Advanced Displays 2
Affective Computing 2
Ambient computing and communication Environments: 8
Ambient intelligence (13 / 626) 9
Artificial intelligence 1
Atomic-Scale Information Technologies 7
Augmented Reality 5
Automated and remote shopping systems; 10
Awareness and Feedback Technologies 8
Behavioural Economics 5
Biofuels and Bio-Based Chemicals 4
Biogerontechnology 4
Bio-ICT Synergies 7
Calm computing (0 / 3) 9
Calm technology (0 / 13) 9
Cash-free payment systems; 10
c-Etiquette 6
Clean Coal Technologies 4
Cloud Computing 5
Cognitive architectures 8
Cognitive systems 3
Complex Systems 7
Context awareness (8126 / 1251) 9
Context Delivery Architecture 5
Cooperativity 6
Corporate Blogging 5
C-Pod (collaboration-supporting-device) 6
Crowd and Swarm Based Interaction 8
Device interoperability (215 / 57) 9
Distributed computing and wireless (243 / 378) 9
E-Book Readers 5
Electronic Paper 5
Embedded systems 3
Embedded Systems 8
Emotion Processing 8
Energy Storage Materials 4
Evolveable Systems 7
Exploitation of complexity 1
Future Internet 3
Globalisation 8
Green IT 5
Grid Computing 2
GRID computing 1
High-Temperature Superconductors 2
Holographic Storage 2
Home Health Monitoring 5
Human Augmentation 5
Human Behavioural Modelling 2
Human Computer Confluence 7
Human tissue engineering. 1
Human-Machine Interaction 8
Human-Machine Interfaces 2
Humanoid Robots 8
Hybrid Symmetric Interaction 8
ICT-bio-nano convergence 3
Idea Management 5
Integration in components and systems 3
Intelligent and Cognitive Systems 7
Intelligent and Cognitive Systems: 8
Intelligent transport systems 10
Interconnects 7
Interference solution 1
Internet II 2
Internet of Things 2
Internet of things 3
Internet of things (907 / 53) 9
Internet TV 5
Lab-on-a-Chip/Microreactors 2
Language and Communication 8
Location based services (1941 / 1585) 9
Location-Aware Applications 5
Location-based services using ambient intelligence 10
Mechatronics 1
Mesh Networks: Sensor 5
Micro Energy Sources 2
Microactuators 2
Microblogging 5
Miniaturisation in components 3
Miniaturization 2
Mixed Reality 2
Mobile Robots 5
Molecular Electronics 2
Molecular engineering understanding thought 1
Molecular Manufacturing 2
Monitoring and Control of Large Systems 8
Multi-modal interfaces 3
Nano-Bio Interfacing 7
Nano-electromechanical Systems 7
Nano-Electronics and Nanotechnologies 7
Nanofabrication 1
Nanomachining 2
Nanorobots 2
Networked Societies of Artefacts 7
Neuroscience/technology 2
NFC (4994 / 136) 9
Online Video 5
Optical Data Processing 2
Organic electronics 3
Other 3
Over-the-Air Mobile Phone Payment Systems, Developed Markets 5
Pay-as-you-go systems for transport, insurance etc; 10
Perception-action coupling 8
Perpetual Energy (Space Electricity) 2
Personal Factory 2
Personal/Service Robotics 2
Pervasive Computing and Communications: 7
Pharma 1
Photonics 3
Plug-in electric Vehicles 2
Post-CMOS devices and storage 7
Printed Electronics 2
Public Virtual Worlds 5
Quantum Computers 2
Quantum computing 1
Quantum Computing 5
Quantum Energy 2
Quantum Information Processing and Communication. 7
Rationale for the Humane City 8
Realization and User Experience of Privacy and Trust 8
Recycling Technologies 2
Remote tele-services; and 10
RFID (Case/Pallet) 5
Robot Ensembles 8
Robotics 3
Security 8
Security of ICT 3
Security, Dependability and Trust 7
Self Organization in Socially Aware Systems 8
Self-X Systems 8
Semantic representations 8
Semantic-based systems 3
Sensor fusion (1127 / 1656) 9
Serious games 6
Service Robotics 4
Small devices for energy 1
SOA 5
Social Network Analysis 5
Social Networks and Collective Intelligence 8
Social Software Suites 5
Software as a service 3
Software intensive systems and new computing paradigms: 8
Software-Intensive Systems 7
Software-Intensive Systems. 8
Space-Time Dispersed Interfaces 8
Spatial and Embodied Smartness 8
Speech Recognition 5
Spintronics 2
storage and production 1
Surface Computers 5
Systemability 7
Tablet PC 5
Tangible Interaction and Implicit vs. Explicit Interaction 8
Technologies for independent living including embedded sensors; 10
Teleportation 2
Tera-Device Computing 7
The growing use of biometrics for health and security purposes; 10
The Internet of Things 4
The management of distribution and freight systems 10
The Scaling Issue 8
The virtualisation of work. 10
Thin-film Solar Cell Technology 2
Transparency and security of supplies, for example, food security and traceability 10
(Ubiquitous or pervasive) computing (1172 / 8454) 9
Universal Translator 2
Unlimited Connectivity 8
Video Search 5
Video Telepresence 5
Virtual Reality 2
Web 2.0 5
Web 2.0 6
Wikis 5
Wireless Power 5
Wireless sensor network (841 / 17142) 9
Wireless sensors networks 1
Table 4: Technologies Identified from Forecasting Studies
Sources

1. FISTERA – Thematic Network on Foresight on Information Society Technologies in the European Research Area (IST-2001-37627): WP 1 – Review and Analysis of National Foresight. D1.3 – Conclusions from the Synthesis of National Foresight Studies: A Summary (2005). http://fistera.jrc.ec.europa.eu/docs/D13final.pdf

2. SRI Consulting Business Intelligence (SRIC-BI): A Reality Check: Technology to Business. Understanding the Implications of Change. UBI Summit 2009, Helsinki: Current Disruptive and Next-Generation Technologies (2009). http://oske-net-bin.directo.fi/@Bin/acfed95d3d937ee66ce38c3d484cc9d9/1270283598/application/pdf/132013/SRIC-BI.pdf

3. European Commission: Report on the European Commission's Public On-line Consultation "Shaping the ICT research and innovation agenda for the next decade" (2009). http://ec.europa.eu/information_society/tl/research/key_docs/documents/report_public_consultation.pdf

4. The National Intelligence Council: Disruptive Civil Technologies: Six Technologies with Potential Impacts on US Interests out to 2025 (2008). http://www.dni.gov/nic/confreports_disruptive_tech.html

5. Gartner: Hype Cycle of Emerging Technologies (2009). http://www.gartner.com/it/page.jsp?id=1124212

6. Collaboration@Work Expert Group: 5th Expert Group Report: Future and Emerging Technologies and Paradigms for Collaborative Working Environments (July 2006). http://www.ami-communities.eu/www.amiatwork.com/ec.europa.eu/information_society/activities/atwork/hot_news/publications/documents/experts_group_report_5.pdf

7. Coordination Action BEYOND-THE-HORIZON project: Beyond the Horizon: Anticipating Future and Emerging Information Society Technologies (2006). http://beyond-the-horizon.ics.forth.gr/central.aspx?sId=29I88I204I323I118338

8. INTERLINK – International Cooperation Activities in Future and Emerging ICTs: Deliverable D4.2: Road-Mapping Research in Ambient Computing and Communication Environments (2009). http://interlink.ics.forth.gr/central.aspx?ReturnUrl=%2fDefault.aspx and http://interlink.ics.forth.gr/staticPages/Files/InterLink_D4.1_V2.pdf

9. Tekes – the Finnish Funding Agency for Technology and Innovation: Tekes – Innovaatiomaisemat UBICOM (2009). http://www.innovaatiomaisemat.fi/wiki/rdi/index.php/Toiminnot:FileGuardPage/UbicomInnovaatiomaisemat.pdf

10. Forfás: Sharing Our Future: Ireland 2025. Strategic Policy Requirements for Enterprise Development (2009). http://www.forfas.ie/publication/search.jsp?ft=/publications/2009/title,4403,en.php
Table 5: Sources of List of Technologies from Forecasting Research
6.5 Appendix E: List of Main Technologies - Meta-Vignettes
This list represents those technologies that arose from the analysis of texts and that were
deemed to be central to our understanding of emerging technologies. They represent high-
level technologies that have the potential of affecting the way humans interact with the world.
The current version of this list is as follows:
1. Affective computing
2. Ambient intelligence
3. Artificial intelligence
4. Bioelectronics
5. Cloud computing
6. Future internet
7. Human/machine symbiosis
8. Neuroelectronics
9. Quantum computing
10. Robotics
11. Virtual / Augmented Reality

In this appendix, each of these technologies is contrasted with the findings of the data analysis
as listed above as well as the list of technologies derived from the overview of forecasting
studies. The purpose of this is to demonstrate that the list is a reasonable representation of
high level technologies that are currently being discussed in a range of sources.
The structure of this comparison will be as follows: Each technology will be listed in its own
table. The two columns below the name of the technology map the results of the data analysis
and the review of forecasting research against this technology to ensure that it is discussed in
the literature and that all technologies identified are represented in the list of meta-vignettes.

Name of Technology
Related technologies from data analysis | Related technologies from list of technologies from forecasting studies

Affective computing

Related technologies from data analysis:
No related technologies in database.

Related technologies from forecasting studies:
Affective Computing
Emotion Processing
Human Behavioural Modelling
Ambient Intelligence

Related technologies from data analysis:
Adaptive systems (Technology)
Autonomous sensor networks (Artefact/Technique)
Context-Aware Wireless Connections (Technology)
Embedded systems (Technology)
Wireless sensor networks (Technology)
Wearable Computing (Technology)
Ubiquitous Computing (Technology)
Mobility and interoperability
Smart environments
Smart Dust (Artefact/Technique)
High-Frequency Wireless Technology (Artefact/Technique) (see remark 19)
Human-friendly information processing technology

Related technologies from forecasting studies:
Ambient computing and communication Environments
Ambient intelligence
Embedded systems
Location-based services using ambient intelligence
Pervasive Computing and Communications
Wireless sensors networks
Artificial intelligence

Related technologies from data analysis:
Pattern recognition
Cyber Life Technology
Biological Language Modelling
Embedded intelligence

Related technologies from forecasting studies:
Artificial intelligence
Context awareness
Cooperativity
Crowd and Swarm Based Interaction
Intelligent transport systems
Semantic representations
Semantic-based systems
Social Networks and Collective Intelligence
Bioelectronics

Related technologies from data analysis:
Biochips
Bioinformatics and computational biology (this is a discipline or field for interdisciplinary cooperation)
Biological computer (a basic technology, on a par with quantum computing)
Biological Language Modelling (probably a sub-field of computer programming, possibly AI)
Biosensors (see remark 10 on smart dust)
Organic Electronics (maybe an output of bioinformatics)
Neuro/Brain Enhancement (this is perhaps a vision or should be subsumed under a vision)
Neuroinformatics and Computational Neuroscience (again a discipline or field for interdisciplinary cooperation)
Neuroelectronics
Invasive interfaces (put under heading of human-machine interfaces)
Non-invasive interfaces (put under heading of human-machine interfaces)
Physical enhancement and biomedicine (biomedicine is a discipline, physical enhancement a goal of applications resulting from biomedicine)

Related technologies from forecasting studies:
Biogerontechnology
Bio-ICT Synergies
ICT-bio-nano convergence
Neuroscience/technology
Organic electronics
Cloud computing

Related technologies from data analysis:
Grid computing

Related technologies from forecasting studies:
Cloud Computing
Grid Computing
Future internet

Related technologies from data analysis:
3D Internet
Next-generation telecommunications network
Internet of things

Related technologies from forecasting studies:
Device interoperability
Distributed computing and wireless
Future Internet
Integration in components and systems
Internet II
Internet of Things
Human/machine symbiosis

Related technologies from data analysis:
Cyber Life Technology
Converging Technologies for Improving Human Performance
Physical enhancement and biomedicine
Non-invasive interfaces
Human sensitivity gene interfaces

Related technologies from forecasting studies:
Human Augmentation
Human Computer Confluence
Human-Machine Interaction
Human-Machine Interfaces
Quantum computing

Related technologies from data analysis:
(none listed)

Related technologies from forecasting studies:
Quantum Computers
Quantum computing
Quantum Information Processing and Communication
Robotics

Related technologies from data analysis:
Intelligent Robots

Related technologies from forecasting studies:
Humanoid Robots
Mobile Robots
Personal/Service Robotics
Robot Ensembles
Robotics
Service Robotics
Virtual / Augmented Reality

Related technologies from data analysis:
Virtual Reality
Extended reality
Cyber Life Technology
Digitisation of History

Related technologies from forecasting studies:
Augmented Reality
Mixed Reality
The virtualisation of work
Video Telepresence
Virtual Reality
The purpose of the lists in this appendix is to establish which technologies are being discussed in the different discourses on emerging ICTs. The lists do not claim to be comprehensive or exhaustive. A number of the technologies listed here under the titles of the meta-vignettes could have been placed under more than one heading, and in many cases a more detailed analysis of the different texts may show that the relevant aspects of the technologies are not compatible. For the purposes of this appendix such problems are secondary: the aim is to identify the most widely discussed and most relevant technologies for further analysis and development into meta-vignettes.

Concerning the technologies identified during the data analysis, this list of technologies
covers most of them. The following technologies do not fit easily under any of the headings
but do not appear to be candidates for a high level technology to be developed into a meta-
vignette, either because they are too general (e.g. Moore's law, automation) or because they
do not provide an insight into how they would affect humans' interaction with their
environment:
Automation
Digital distribution
Future computer technologies
High-frequency wireless technology
Human-friendly information processing technology
Improved public transport
Info-protection and security
Intelligent multimedia contents technology
Mobility and interoperability
Moore's Law
Optimising and Modelling in Constraints
Spectral Imaging

The analysis of the list derived from forecasting studies showed that there were a number of
additional technologies which did not obviously fit under the heading of any of the main
technologies identified during the data analysis. This can partly be explained by the fact that
these studies did not focus on ICT and therefore contained a number of technologies that are
more closely aligned to different base technologies (e.g. energy). In addition, the list contained a number of items that, in the terminology of the ETICA data analysis, could better be described as application examples or artefacts.
Overall, the list from forecasting studies led to the refinement of the original list and
supported the contention that ETICA successfully identified core emerging ICTs.
6.6 Appendix F: Meta-Vignettes
This section contains all meta-vignettes that were constructed by the end of April 2010. They
offer a detailed description of the emerging technologies identified by the ETICA project. The
latest version of all meta-vignettes, or technology descriptions, will be published on the project website (www.etica-project.eu).

6.6.1 Affective Computing

Also known as

Emotional computing
Emotion-oriented computing

1. History

Research into affect can be traced back to the 19th century, but it was long confined to living organisms. Interest in the computational aspects of affect (or emotion; the terms seem to be used interchangeably) developed only in the last decade of the 20th century, largely due to the increased capacities of computers, which can now be used to model emotions. "Picard's 'Affective Computing' in 1997 established the field, and several European Commission (EC) projects such as Net Environments for Embodied Emotional Conversational Agents (NECA), Emotionally Rich Man-Machine Intelligent System (ERMIS), Supporting Affective Interactions In Real-Time Applications (SAFIRA) among others made substantial contributions to the field. The EC Human-Machine Interaction Network on Emotion (HUMAINE), a Network of Excellence, now brings together many of the most active teams" (Cowie, 2005).

2. Application Areas/Examples

Affective computing can play a role in any application area where human and computers
interact, i.e. in almost any application area imaginable. Interestingly, there seems to be little
agreement in the affective computing community on what the eventual applications of this
technology will be or should be. There seems to be no "killer app" that will force a general
breakthrough of the technology. There are a number of examples that can be found in the
literature such as:

Emotional kitty (emotions in decision making)
BCI for profiling (data collection for affective computing)
Wearable brain imaging tool (detection of emotions)
Sensor systems (detection of emotions)

Robinson & el Kaliouby (2009) discuss a number of application areas. These include:
Communication with humans with difficulties in their affective system (e.g. autism
spectrum disorder, anxiety and others)
Increase social-emotional intelligence in agents and robots
Facilitate real-world testing of models and theories of social cognition

Tao & Tan (2005) give a description of the main research groups in the area and their work,
which covers a number of application examples. These include:
Incorporation of emotions into decision making
Creation and presentation of interactive drama
Emotion, Stress and Coping in Families with Adolescents: Assessing Personality
Factors and Situational Aspects in an Experimental Computer Game
Research on architectures accounting for mental states
Social interaction with service robots

More detailed examples that lend themselves to an ethical analysis are:

Kitchen Robot 2

"Imagine your robot entering the kitchen as you prepare breakfast for guests. The robot looks
happy to see you and greets you with a cheery "Good morning." You mumble something it
does not understand. It notices your face, vocal tone, smoke above the stove, and your
slamming of a pot into the sink, and infers that you do not appear to be having a good
morning. Immediately, it adjusts its internal state to "subdued," which has the effect of
lowering its vocal pitch and amplitude settings, eliminating cheery behavioural displays, and
suppressing unnecessary conversation. Suppose you exclaim unprintable curses that are out of
character for you, yank your hand from the hot stove, rush to run your fingers under cold
water, and mutter "... ruined the sauce." While the robot's speech recognition may not have
high confidence that it accurately recognized all of your words, its assessment of your affect
and actions indicates a high probability that you are upset and possibly hurt. At this moment it
might turn its head with a look of concern, search its empathetic phrases and select, "Burn-
Ouch ... Are you OK?" and wait to see if you are, before selecting the semantically closest
helpful response, "Shall I list sauce recipes?" As it goes about its work helping you, it watches
for signs of your affective state changing - positive or negative. It may modify an internal
model of you, representing what is likely to elicit displays such as pleasure or displeasure
from you, and it may later try to generalize this model to other people, and to development of
common sense about people's affective states. It looks for things it does that are associated
with improvements in the positive nature of your state, as well as things that might frustrate or
annoy you. As it finds these, it also updates its own internal learning system, indicating which
of its behaviours you prefer, so it becomes more effective in how it works with you."

Affective Gaming

Games could take into account the players' emotional state. Such technology could cover the following (Sykes & Brown, 2003):

The ability to generate game content dynamically with respect to the affective state of the player. Knowledge of the player's affective state allows the game to deliver content at the most appropriate moment. For example, in 'horror'-based games, the optimum effect of a loud noise will only occur if it is produced when the player is incredibly tense.

The ability to communicate the affective state of the game player to third parties. With the advent of on-line gaming it is more frequently the case that the player's opponent is not physically present. However, it is often the emotional involvement of other players that shapes our enjoyment of a game. Affective gaming technology can address this issue by having the on-screen persona reflect the player's emotional state.

The adoption of new game mechanics based on the affective state of the player. An example of affective game mechanics can be found in Zen Warriors, a fighting game where, to perform their finishing move, the player has to switch from fast-paced aggression to a Zen-like state of inner calm.

2 Picard's 2007 Scholarpedia entry at http://www.scholarpedia.org/article/Affective_computing

Special controllers could allow the expression of emotions, e.g. controllers in the form of a
toy. Haptic controllers could give emotion-related feedback. The character in the game could
represent the emotions of the human player3. For more on Affective gaming, see Hudlicka
(2009).
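The first capability above, delivering content at the most appropriate moment, can be sketched in a few lines. The following Python fragment is purely illustrative: the arousal signal, the threshold and the event names are assumptions made for this example, not features of any system cited here.

```python
# Illustrative sketch only: trigger the "scare" event when a measured
# arousal signal (e.g. derived from heart rate or skin conductance)
# indicates that the player is already tense. Names and the threshold
# are hypothetical.

def should_trigger_scare(arousal, threshold=0.8):
    """Deliver the loud-noise event only when the player is tense enough."""
    return arousal >= threshold

def next_event(arousal):
    if should_trigger_scare(arousal):
        return "play_scare"    # optimum effect: the player is already tense
    return "build_tension"     # otherwise keep ramping up the atmosphere

# A rising arousal trace from a hypothetical sensor:
trace = [0.2, 0.5, 0.7, 0.85]
events = [next_event(a) for a in trace]
```

In a real system the arousal estimate would come from the sensing modes discussed in section 3 below rather than from a pre-recorded trace.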

Affective Mediation4

There is a range of impairments and disorders that may involve affective communication
deficits. "Within assistive technology, the Augmentative and Alternative Communication
(AAC) research field is used to describe additional ways of helping people who find it hard to
communicate by speech or writing. AAC helps them to communicate more easily."

"Within affective computing, affective mediation uses a computer-based system as an


intermediary among the communication of certain people, reflecting the mood the
interlocutors may have (Picard, 1997). Affective mediation tries to minimize the filtering of
affective information of communication devices; filtering results from the fact that
communication devices are usually devoted to transmission of verbal information and miss
nonverbal information."

"Gestele [name of the system] was developed with the aim of assisting people with severe
motor and oral communication disabilities. It is a mediation system that includes emotional
expressivity hints through voice intonation and preconfigured small talk. The Gestele system
is used for telephone communication […]. Gestele allows for different input options (directly
or by scanning, depending on user characteristics) and makes holding telephone conversations
involving affective characteristics possible for people with no speaking ability."
Emotional Human-Machine Interaction

Human-computer interaction is a well-established field of research and practice. The interaction between humans and machines could benefit immensely if emotional states could be measured and displayed (Balomenos et al. 2004; Caridakis et al. 2006). Facial expression and hand gesture analysis, for example, could play a fundamental part in emotionally rich man-machine interaction (MMI) systems. Non-verbal cues can be used to estimate users' emotional states, such as anger or frustration, which would allow systems to adapt their functionality accordingly. Such adaptability would enhance usability and make interactions more attractive to users not accustomed to conventional interfaces. It might be implemented, for example, in automated telephone answering systems that react not only to the content of the spoken words but also to emotional cues: callers displaying high levels of frustration or anger might be connected to human operators.
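The telephone example amounts to a simple escalation rule: combine per-emotion scores from a (hypothetical) recogniser and hand the call to a human operator when frustration or anger is high. A minimal sketch, assuming illustrative score names and an arbitrary threshold:

```python
# Illustrative sketch: escalate an automated call to a human operator when
# estimated frustration or anger exceeds a threshold. In practice the
# emotion scores would come from an acoustic/prosodic classifier; here they
# are simply inputs. All names and the threshold value are hypothetical.

def route_call(emotion_scores, escalation_threshold=0.7):
    """emotion_scores maps emotion labels to values in [0, 1]."""
    level = max(emotion_scores.get("anger", 0.0),
                emotion_scores.get("frustration", 0.0))
    if level >= escalation_threshold:
        return "human_operator"
    return "automated_system"
```

For instance, a caller scored with frustration 0.9 would be handed over to a human operator, while a calm caller stays with the automated system.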

3
See the following sites on affective gaming:
http://www.youtube.com/watch?v=XEuy7Zpo4IE (accessed 30.03.2010)
http://www.sics.se/gametheme/media/sentoy-fantasya.mpg (accessed 30.03.2010)
4
Garay, N. et al. (2006): Assistive technology and affective mediation. In: Human Technology (2:1), 55-83,
http://www.humantechnology.jyu.fi/articles/volume2/number1/2006/humantechnology-april-2006.pdf (accessed
30.03.2010)
Affective Computing in Tele-home Health

Lisetti and LeRouge (2004)5 describe the design of an intelligent interface (MOUE) aimed at discerning emotional states by processing sensory modalities (or modes) input via various media and building (or encoding) a model of the user's emotions. The interface is contextualised within the tele-home health care setting to provide the health care provider with an easy-to-use and useful assessment of the patient's emotional state in order to facilitate patient care. The authors propose to investigate how emotional state assessments influence responses from health care professionals and how MOUE can be accepted into the health care environment.

3. Definition and Defining Features

Definition

The definition of affective computing arising from the analysis so far is:

Affective computing refers to the use of ICT for the purposes of perceiving, interpreting or expressing emotions or other affective phenomena, and the simulation or realisation of emotional cognition. This can be done using a range of different modes (e.g. speech, posture, facial expression) and for a range of purposes and applications.

Definitions of affective computing in the literature generally support the defining features identified below. A broad definition is offered by the MIT research group: "Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena." (MIT Media Lab)

Building on this definition, Roddy Cowie (coordinator of the HUMAINE Network of Excellence) offers a definition that indicates why affective computing is perceived to be an important field: "Data suggest that less than 10% of human life is completely unemotional. The rest involves emotion of some sort (http://emotion-research.net/deliverables/D3e%20final.pdf), but probably less than a quarter consists of archetypal 'fullblown emotions' (brief episodes dominated by strong feelings). Most of it is coloured by 'pervasive emotion' (feelings, expressive behaviours, evaluations, and so on that are inherent in most human activities). 'Emotion-oriented computing' is about technology that takes account of both types." (Cowie, 2005)6.

"Affective computing is trying to assign computers the human-like capabilities of observation, interpretation and generation of affect features. It is an important topic for the harmonious human-computer interaction, by increasing the quality of human-computer communication and improving the intelligence of the computer." (Tao & Tan 2005, p. 981).

Defining Features

Emotion pervades human interaction, and sensitivity to emotions is fundamental to any communication. Affective computing can change the way humans interact with the world through:

Changes in human-computer interaction
Changes to technically mediated interaction between humans.

5 Lisetti, C., & LeRouge, C. (2004). Affective computing in tele-home health. In Proceedings of the 37th Hawaii International Conference on System Sciences-2004 (pp. 1-8).
6 http://emotion-research.net/projects/humaine/aboutHUMAINE/HUMAINE%20white%20paper.pdf (accessed 30.03.2010)

The application examples have shown that affective computing is complex because emotions themselves are complex. The examples support Cowie's (2005) definition of affective computing as covering: "affect (feelings and physical changes associated with them); cognition and conation (shifts in perception, judgment, and selection of action); interaction (registering other people's emotional status and conveying emotion to them); personality; culture; ethics; and much more".

Defining features of affective computing are:

Perceiving emotions / affects (i.e. sensing physiological correlates of affect). The principle here is that humans send signals that allow others to infer the agent's emotional state. Such emotional cues may be within the agent's control, such as tone of voice, posture or facial expression. They may also be outside the agent's control, as in the case of facial colour (blushing), heart rate or breathing. Humans are normally aware of such emotional cues and react easily to them. Most current technology does not react to such emotional cues, even though they are part of the illocutionary and perlocutionary function of a speech act.

Expressive behaviour by computers / artificial agents. Whereas the previous point concerned the recognition of emotions by computers, this point refers to expression by computers in ways that can be interpreted as displaying emotion. Affective expressions can render interaction more natural and thereby easier for users. They can also be employed for persuasive purposes, as they increase the likelihood of acceptance of computer-generated content.

Emotional cognition (the agent's empathy, understanding of emotional states). In humans, affects are closely related to, and arguably inseparable from, cognition. Emotional cognition is the next logical step after recognition of emotions. It includes absorbing emotion-related information, modelling users' emotional state, applying and maintaining a user affect model, and integrating this model with emotion recognition and expression. These activities offer the potential of personalising individual affect models and catering to individual preferences.
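Maintaining and applying a user affect model, as described in this last point, can be illustrated with a toy model that keeps an exponentially smoothed valence/arousal estimate per user. The state representation and the smoothing factor are illustrative assumptions only, not a model from the affective computing literature:

```python
# Toy per-user affect model: an exponentially smoothed valence/arousal
# estimate. The representation (two numbers per user) and the smoothing
# factor are hypothetical simplifications.

class AffectModel:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight given to each new observation
        self.state = {}      # user -> (valence, arousal)

    def update(self, user, valence, arousal):
        v0, a0 = self.state.get(user, (0.0, 0.0))
        self.state[user] = (
            (1 - self.alpha) * v0 + self.alpha * valence,
            (1 - self.alpha) * a0 + self.alpha * arousal,
        )

    def estimate(self, user):
        return self.state.get(user, (0.0, 0.0))

model = AffectModel(alpha=0.5)
model.update("user_1", valence=-1.0, arousal=0.8)  # observed displeasure
model.update("user_1", valence=-1.0, arousal=0.8)  # repeated observation
valence, arousal = model.estimate("user_1")        # roughly (-0.75, 0.6)
```

Smoothing prevents a single misread observation from dominating the model, which matters because emotion recognition is noisy.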

These three main functions of affective computing can refer to different modes of expression.
Modes are ways in which emotion is signalled. Humans signal their emotional states in a
number of ways. Affective computing concentrates on the following modes for all of the
different streams of activities:

Emotional speech processing: The measurement of acoustic features to assess emotions is used for the perception of emotions. Emotional expression can similarly make use of known features of speech, so that computer-generated speech can transport emotional messages (e.g. impatience, sadness).

Facial expression: Human facial expressions can be measured to infer emotional states. Facial animation (e.g. of software agents on screens, or on robots such as the well-known KISMET) can be used to signal emotional states7.

7 http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.html (accessed 30.03.2010)
Body gesture and movement: Similar to facial expression, the entire body can be
used to infer emotions or express them.

Multi-modal systems: These combine two or more of the above modes. For a
stronger detection, modelling and display of emotions, multi-modality is desirable.
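Multi-modal systems combine per-mode estimates into a single judgement. One simple way this could be done is confidence-weighted late fusion, sketched below; the emotion labels, weights and numbers are illustrative assumptions, not a standard method from the literature:

```python
# Illustrative late fusion: each mode (speech, face, gesture) produces a
# probability distribution over emotion labels, and the distributions are
# combined by a confidence-weighted average. Labels and weights are
# hypothetical.

LABELS = ["neutral", "joy", "anger", "sadness"]

def fuse(mode_estimates, weights):
    """mode_estimates: mode -> distribution over LABELS; weights: mode -> confidence."""
    total = sum(weights[m] for m in mode_estimates)
    fused = [0.0] * len(LABELS)
    for mode, dist in mode_estimates.items():
        w = weights[mode] / total
        for i, p in enumerate(dist):
            fused[i] += w * p
    return fused

estimates = {
    "speech": [0.1, 0.1, 0.7, 0.1],  # prosody suggests anger
    "face":   [0.2, 0.1, 0.6, 0.1],  # facial expression agrees, less strongly
}
weights = {"speech": 1.0, "face": 2.0}
fused = fuse(estimates, weights)
top_label = LABELS[fused.index(max(fused))]
```

The point of fusing modes is that agreement between them yields a stronger overall inference than any single mode alone.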

A final way in which affective computing may profoundly influence humans' interaction with the world has to do with its theoretical underpinnings. Research interest in affective computing drives the development of psychological theories of emotion. The interest in modelling emotion has implications for the theoretical description of emotion and, as a consequence, for the way we perceive human emotion in general.

Assumptions

All examples and defining features of affective computing rest on the assumption that human emotions are capable of being measured, recognised and classified.
A further assumption is that there is a direct link between emotions and their external expressions.

4. Time Line

Ongoing. There are few predictions of particular breakthroughs or applications. Affect-measuring technologies are described as "becoming robust and reliable" (Robinson & el Kaliouby, 2009, p. 3442).

5. Relation to other Technologies

Linked to Artificial Intelligence (AI) (models a human property; close link to cognition). Affects and emotions are part of the human cognitive process. While AI has traditionally concentrated on the formal, expressive and propositional aspects of human intelligence, one can argue that a majority of human action is emotion-related. If AI is about perceiving, understanding, simulating and expressing intelligent human action, then the incorporation of emotion should be an integral part of AI.

Linked to Ambient Intelligence (AmI) (ubiquitous sensors, pervasive presence). AmI could play an important part in perceiving as well as expressing human emotions. Ubiquitous sensor networks or other ambient interaction devices could provide rich data from which to infer humans' emotions. An ambient intelligent environment could react to these inferences, for example by modifying environmental aspects (e.g. lighting, sound, connectivity) according to users' emotional states.

6. Critical Issues

No critical issues are recorded in the database. Goldie et al. (2007) list the following ethical issues of affective computing:

Lack of conceptual clarity. Unlike other technologies, affective computing is based on concepts that are not well defined even in the research community.
Encroachment on humanity. Emotions are currently still confined to living beings, in particular humans. Can machines "have" emotions? What would that mean?
Reflexivity of ethics and affective computing. Affective computing raises emotions, and ethical judgements about it themselves contain emotions.
Instrumental use of emotions, for example to persuade individuals to consume.
Ethics-related legal issues, liability etc.

References

Academic publications

Robinson, P., & el Kaliouby, R. (2009). Computation of emotions in man and machines.
Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535),
3441-3447. doi:10.1098/rstb.2009.0198

Tao, J., Tan, T., & Picard, R. (2005). Affective Computing and Intelligent Interaction. Lecture
Notes in Computer Science. Berlin: Springer. Retrieved on 30 March 2010, from
http://dx.doi.org/10.1007/11573548

Sykes, J., & Brown, S. (2003). Affective gaming: measuring emotion through the gamepad. In
CHI '03 extended abstracts on Human factors in computing systems (pp. 732-733). Ft.
Lauderdale, Florida, USA: ACM. doi:10.1145/765891.765957

Hudlicka, E. (2009). Affective game engines: motivation and requirements. In Proceedings of
the 4th International Conference on Foundations of Digital Games (pp. 299-306).
Orlando, Florida: ACM. doi:10.1145/1536513.1536565

Tao, J., & Tan, T. (2005). Affective computing: A review. In Lecture Notes in Computer
Science, Vol. 3784 (pp. 981-995). Springer.

Picard, R. W. (1997). Affective Computing. Cambridge, Mass: MIT Press

Caridakis, G., Malatesta, L., Kessous, L., Amir, N., Raouzaiou, A., & Karpouzis, K. (2006).
Modelling naturalistic affective states via facial and vocal expressions recognition. In
Proceedings of the 8th international conference on Multimodal interfaces. p. 154

Balomenos, T., Raouzaiou, A., Ioannou, S., Drosopoulos, A., Karpouzis, K., & Kollias, S.
(2004). Emotion analysis in man-machine interaction systems. Machine Learning for
Multimodal Interaction, Lecture Notes in Computer Science, 3361, 318–328

Picard, R. W. (2007). Affective Computing. Entry in Scholarpedia. Retrieved on 30 March
2010, from http://www.scholarpedia.org/article/Affective_computing

Associations / research centres

HUMAINE network: http://emotion-research.net/

Cowie, R. (2005). Emotion-Oriented Computing: State of the Art and Key Challenges.
HUMAINE White Paper. Retrieved on 30 March 2010, from
http://emotion-research.net/projects/humaine/aboutHUMAINE/HUMAINE%20white%20paper.pdf

Goldie, P., Cowie, P., Doring, S., Sneddon, I. & Carstens, B. (2007) White paper on ethical
issues in emotion-oriented computing. Retrieved on 30 March 2010, from
http://emotion-research.net/projects/humaine/deliverables/D10d%20final.pdf

MIT Affective Computing centre: http://affect.media.mit.edu/

6.6.2 Ambient Intelligence

1. History

In 1998, Philips and Palo Alto Ventures coined the term 'ambient intelligence' (AmI) to illustrate a vision of the future in which technologies seamlessly interact with and adapt to human needs while remaining unobtrusive. They have since further developed the concept, or vision, and designed products such as the Philips Healthcare Ambient Experience 8 that fit their conception of AmI (de Ruyter & Aarts, 2004). AmI gained further substantial attention when the European Commission's Sixth Framework Programme in Information Society Technologies (IST) set aside a budget to fund research projects looking specifically at AmI (Aisola, 2005). The funding may have resulted from advice received from the Information Society Technologies Advisory Group (ISTAG), a group set up to advise the European Commission on the overall strategy and implementation of ICT research in Europe, which had launched a scenario-planning exercise to demonstrate what might be realised through AmI technology (ISTAG, 2005; 2001). Through the exercise, ISTAG set out its vision for AmI by illustrating what living with AmI might be like in 2010, now extended to 2020. As a result, AmI gained a substantial amount of attention, and in recent years several initiatives have been started. Research projects in the USA, Canada, Spain, France and the Netherlands were launched. In 2004, the first European symposium on Ambient Intelligence was held, and many other conferences addressing special topics in AmI, such as the International Symposium on Ambient Intelligence (ISAmI) and the European Conference on Ambient Intelligence, have been held since.
Of course, Ambient Intelligence was preceded by other visions and technological developments with similar ambitions. Aarts (2003) names artificial intelligence as one such vision that shares ambitions with AmI. In addition, Weiser's work deserves a mention when discussing the history of AmI. AmI is very closely related to the earlier concept of ubiquitous computing, and Mark Weiser was already showcasing demonstrations in this spirit in the late 1980s and early 1990s. In this light, one could claim that Weiser's vision of ubiquitous computing, closely related to AmI, has almost been achieved today, judging by the developments taking place in AmI. However, his other vision of calm computing has not yet been achieved (Weiser 1991; Weiser and Brown 1995). According to Aarts (2003), the biggest difference between earlier visions and the current vision of AmI is the shift of focus from productivity in business environments to a consumer- and user-centred design approach. In addition, the new kind of interaction with technologies in our everyday lives is expected to enhance our experiences and lives through AmI applications.
The most well-known example of AmI is probably a refrigerator that orders products by itself
without human intervention (Ducatel, 2010). The fridge 'knows' from analysing previous
consumption behaviour that there should always be milk in the fridge, and it is programmed
to order milk when only one bottle is left. Another often-mentioned example is the use of
miniaturized biosensor systems that monitor blood pressure, glucose levels, body temperature,
heart rate and other bodily variables. The biosensor systems are wirelessly connected to an
emergency unit that automatically sends an ambulance when an accident happens, which it
presumably automatically ‗assumes‘ when certain threshold values are exceeded (Schuurman

8
http://www.healthcare.philips.com/wpd.aspx?p=/resources/news.wpd&id=1683&c=main

et al., 2009; Federal Ministry of Education and Research, 2007). Or, to give a final example,
consider an interactive screen in one‘s mirror that automatically provides its user with
customized information about the weather, traffic jams and appointments of the day, so that
its user knows what to wear, how to drive and what to prepare for which appointment.
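The fridge and biosensor examples above both amount to threshold-triggered actions. The following toy sketch illustrates that logic only; all function names, thresholds and messages are invented for illustration and do not correspond to any real AmI product.

```python
# Toy sketch of threshold-triggered AmI actions, as in the fridge and
# biosensor examples. All names and limits are purely illustrative.

def check_fridge(milk_bottles, reorder_at=1):
    """Order milk when the stock falls to the reorder threshold."""
    if milk_bottles <= reorder_at:
        return "order milk"
    return None

def check_biosensors(readings, limits):
    """Alert the emergency unit when any vital sign leaves its range."""
    for name, value in readings.items():
        low, high = limits[name]
        if not (low <= value <= high):
            return f"alert: {name}={value} outside {low}-{high}"
    return None

print(check_fridge(1))   # stock at threshold -> "order milk"
print(check_biosensors(
    {"heart_rate": 130, "body_temp": 36.8},
    {"heart_rate": (50, 120), "body_temp": (35.0, 38.5)},
))
```

The point of the sketch is only that the 'intelligence' in these early examples is a simple rule over sensor data; the AmI vision adds the embedding, interconnection and adaptivity described below.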
In the previous brief examples the user(s) is provided with applications and services with
which he or she interacts in an unobtrusive manner. In the vision of AmI, sensors support
and are utilized in this interaction, and applications and services should
be consistent, easy to handle and easy to learn. Furthermore, electric devices are wirelessly
connected and form intelligent networks which create environments in which people are
surrounded by intelligent and intuitive interfaces that are embedded in all kinds of objects.
The ultimate goal is to design an environment that is capable of recognizing and responding
to the presence and actions of different individuals in a seamless, unobtrusive and often,
invisible way using several sensors (Holmlid & Björklind, 2003).
2. Application Areas/Examples
In this section some scenarios and examples of AmI are given. AmI is a concept that can be
applied in basically every societal area such as healthcare, offices, homes, education,
transport, entertainment and so on. Therefore, as the scenarios and examples show, AmI is not
bound to a particular context, but is meant for society as a whole and is cross-cutting across
fields of applications.
- AmI in the home environment. ‗Ellen returns home after a long day's work. At the
front door she is recognized by an intelligent surveillance camera, the door alarm is
switched off, and the door unlocks and opens. When she enters the hall the house map
indicates that her husband Peter is at an art fair in Paris, and that her daughter
Charlotte is in the children's playroom, where she is playing with an interactive screen.
The remote children surveillance service is notified that she is at home, and
subsequently the on-line connection is switched off. When she enters the kitchen the
family memo frame lights up to indicate that there are new messages. The shopping
list that has been composed needs confirmation before it is sent to the supermarket for
delivery. There is also a message notifying that the home information system has
found new information on the semantic Web about economic holiday cottages with
sea sight in Spain. She briefly connects to the playroom to say hello to Charlotte, and
her video picture automatically appears on the flat screen that is currently used by
Charlotte. Next, she connects to Peter at the art fair in Paris. He shows her through his
contact lens camera some of the sculptures he intends to buy, and she confirms his
choice. In the mean time she selects one of the displayed menus that indicate what can
be prepared with the food that is currently available from the pantry and the
refrigerator. Next, she switches to the video on demand channel to watch the latest
news program. Through the follow me, she switches over to the flat screen in the
bedroom where she is going to have her personalized workout session. Later that
evening, after Peter has returned home, they are chatting with a friend in the living
room with their personalized ambient lighting switched on. They watch the virtual
presenter that informs them about the programs and the information that have been
recorded by the home storage server earlier that day‘ (Wikipedia, 2010).

- AmI in healthcare. ‗Ambient Intelligence could provide a basis for integrating


intelligent health care technology into an individual‘s personal surroundings.

Computers around you, on your body and even in your body could monitor your
health status at all times and, when the need arose, alert your carer or intervene
directly' (Schuurman, El-Hadidy, Krom & Walhout, 2009, p. 15).

- AmI for disabled persons. 'A personal communication device can be worn or fitted to
a wheelchair or a blind person‘s cane. These can be programmed to communicate with
barriers, ticket machines and gates to allow access or more time. Smart tags,
embedded in a floor, can receive and send information that will guide a person to a
destination. A person with low-vision could hear guidance signals' (Gill, 2008, p 6).

- AmI for industry. ‗Intelligent and autonomous networked sensors systems offer
possible uses among others in precise, low-cost control of chemical processes, in
monitoring and linking machines, in the tracking and management of security-relevant
objects, in registering ambient conditions and in testing product quality of the
condition of building fabric‘ (Federal Ministry of Education and Research, 2007, p
39).

- AmI for business. ‗Hélène calls Ralph in New York from their company's home
office in Paris. Ralph's E21, connected to his phone, recognizes Hélène's telephone
number; it answers in her native French, reports that Ralph is away on vacation, and
asks if her call is urgent. The E21's multilingual speech and automation systems,
which Ralph has scripted to handle urgent calls from people such as Hélène, recognize
the word "décisif" in Hélène's reply and transfer the call to Ralph's H21 in his hotel.
When Ralph speaks with Hélène, he decides to bring George, now at home in London,
into the conversation. All three decide to meet next week in Paris. Conversing with
their E21s, they ask their automated calendars to compare their schedules and check
the availability of flights from New York and London to Paris. Next Tuesday at 11am
looks good. All three say "OK" and their automation systems make the necessary
reservations. Ralph and George arrive at Paris headquarters. At the front desk, they
pick up H21s, which recognize their faces and connect to their E21s in New York and
London. Ralph asks his H21 where they can find Hélène. It tells them she's across the
street, and it provides an indoor/outdoor navigation system to guide them to her.
George asks his H21 for "last week's technical drawings," which he forgot to bring.
The H21 finds and fetches the drawings just as they meet Hélène‘ (MIT Project
Oxygen, 2010).

3. Definition and Defining Features


From the five examples described in the previous section, one can infer a definition and
deduce some defining features. The general idea of AmI is that electronic devices in our
homes, offices, hospitals, cars and public spaces will be embedded, interconnected, adaptive,
personalized, anticipatory and context-aware. These features are now described in more
detail.
- Embedded It is important to note that AmI is not the outcome of any single
technology or application. It is rather an emergent property of several interconnected
computational devices, sensors and ICT systems. The devices, sensors and ICT
systems are distributed and embedded in the surroundings. The technology disappears
into the background and is usually not consciously experienced.

- Interconnected The devices, sensors and ICT systems are not only embedded in the
surroundings, they are (wirelessly) interconnected as well, thereby forming a
ubiquitous system of large-scale distributed networks of interconnected computing
devices. For example, the miniaturized biosensor systems that monitor vital body
variables are connected to an emergency unit which is connected to the ambulance.
All the devices, sensors and ICT systems are connected and form one ambient
intelligence system.

- Adaptive Because there is no stable connectivity to services and information sources
in ad-hoc networks, AmI systems can never base their operation on the availability of
complete and up-to-date information and services. This has the consequence that AmI
systems have to organize their services in an adaptive way, i.e. the degree of service
varies with the amount of information available and the reachability of external
services.

- Personalized AmI is personalized to specific user needs and preferences. Remember
the example of an interactive interface in one's mirror which provides its user with
personalized information about the weather, traffic jams and appointments. In other
words, AmI is user-centered.

- Anticipatory AmI can anticipate the desires of its user(s). Consider an AmI system in
the context of one's home that monitors behavioural patterns, infers one's mood from
them and adjusts the light and music accordingly. The system is proactive and
anticipates what the user wants or needs.

- Context-aware AmI systems can recognize specific users and their situational context
and can adjust to both. They may know that some users like classical music and others
like jazz, or that some users prefer the house warm and others like it cool. To give a
contemporary example, car navigation systems can adjust the level of their lighting
when it becomes dark, so that the user benefits from more light on the screen. The
system 'knows' that its user benefits from more or less light in different contexts.

- Novel human-technology interaction paradigms AmI systems will utilize new kinds
of interfaces which should support a more seamless user experience with products and
services. It is assumed that interaction paradigms such as speech or haptics could lead
to more intuitive or natural interfaces.

Furthermore, it is important to note that not all features are equally present in all AmI
systems. For example, some AmI systems are very personalized, anticipatory and adaptive
whereas others are not. It may also be helpful to note that there is some overlap between
adaptive, personalized, anticipatory and context-aware. The adaptive and anticipatory aspects,
for example, make sure that the system can be personalized.
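The adaptive, personalized and context-aware features described above can be illustrated with a deliberately minimal sketch. Everything here is hypothetical: the functions, thresholds and user profiles are invented to show the idea that service quality degrades gracefully with available information and that output depends on user and context.

```python
# Toy illustration of context-awareness, personalization and
# graceful adaptation. All names and values are invented.

def screen_brightness(ambient_lux, user_prefers_bright=False):
    """Context-aware and personalized: more light when it is dark,
    nudged by a simple stored user preference."""
    base = 100 if ambient_lux < 50 else 40
    return min(100, base + (20 if user_prefers_bright else 0))

def recommend_music(profile, sensors_online):
    """Adaptive: fall back to a generic service when context data
    (here, a hypothetical mood sensor) is unavailable."""
    if sensors_online and "mood" in profile:
        return f"playlist for mood: {profile['mood']}"
    return f"default {profile.get('genre', 'radio')} playlist"

print(screen_brightness(ambient_lux=10))                  # dark -> full brightness
print(recommend_music({"genre": "jazz"}, sensors_online=False))
```

Note how the second function embodies the adaptivity requirement quoted above: the degree of service varies with the information actually available, rather than assuming complete and up-to-date context data.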

4. Timeline
As stated previously, AmI is a vision of the future of ICT and does not yet exist, at least not in
the sense of the definitions described in the introduction. Some contemporary devices are to
some extent context-aware, personalized and adaptive, but not on the scale of the AmI vision,
according to which many devices are embedded in the environment, interconnected and
context-aware so that they can be adaptive, anticipatory and personalized. Contemporary
computing systems do not conform to this description. Finally, AmI is a view of and prediction
about the future, claimed to be realized in approximately 10 years from now, that is to
say, by 2020 (Philips Research, 2010; Gasson & Warwick, 2007; Federal Ministry of
Education and Research, 2007). It is, however, not clear from the literature when exactly or to
what extent AmI will become a reality.
5. Relation to other Technologies
AmI is related to other ICT concepts such as ubiquitous computing (the idea that computing
devices are everywhere, they are ubiquitous) and internet of things (the idea that computing
devices are increasingly interconnected and form intelligent networks). These concepts are
often mentioned as synonyms for AmI and there are indeed more similarities than differences.
One could also argue that ubiquitous computing and internet of things are enabling
technologies for AmI. Without computing devices that are ubiquitous and interconnected AmI
would not be possible. Other enabling technologies are sensor systems, usually cameras that
can recognize different objects and biosensors. Radio Frequency Identification (RFID) chips
are also mentioned as enabling technologies as well as other electrical devices such as
interactive screens, televisions, radios, fridges, lighting systems, the Internet and so on.
Basically every electric device could be incorporated into the AmI system.
6. Critical Issues
Although some predict that AmI will exist within 15 years, others claim it will be an
extremely difficult task to build, provide and maintain the infrastructure required to support
the AmI environment. Both functional (related to the specific system operation) and
non-functional (security, scalability, performance, robustness, availability, reliability, license
issues, etc.) requirements of an AmI system pose strict demands on the computational and
communication resources and in general on the underlying infrastructure (Gasson &
Warwick, 2007). Moreover, AmI systems must be able to ‗infer the goals of the users without
giving them the impression that they are under control (big brother), and must be able to
support the users without giving them the impression that they are forcing them. The system
must offer possible solutions, not impose them. This requires a lot of ingenuity also on the
part of human beings, and appears particularly difficult for a machine' (Gill, 2008, p 5).
Finally, the AmI environment will be very complex and stimulating, from both a sensorial and
cognitive perspective. It is not clear whether people will be able to cope with the
hyper-stimulation and the corresponding cognitive load. This is particularly true for people with
reduced abilities, and principally for people with cognitive limitations (Gill, 2008). Thus,
there are substantial technical issues to resolve as well as human-centered design issues.
There are several critical issues mentioned in the literature on AmI. It may be helpful to keep
in mind that these critical issues are raised by the authors of scientific and
governmental/political reports on AmI and not by ethicists. These issues are now briefly
summarized.

- Data Protection/Surveillance/Privacy AmI systems use profiling to adapt to user
preferences, which requires a large number of sensors and extensive data mining. The
main question is: who is in control of the personal data collected by the sensors and
who has access to it? An operator? A hacker? AmI has great surveillance potential,
which can cause privacy issues. As such, the user profiles need to be protected
(Gasson & Warwick, 2007; Gill, 2008; Schuurman et al, 2009).

- Autonomy & Trust In some cases AmI systems can be said to have the ability to
make decisions for people. Recall the fridge that automatically orders milk when there
is one bottle left. We delegate tasks to the system and trust that it does the tasks well.
But, what if we do not want milk but orange juice? Who is to blame? The system or its
user? Or what if the fridge orders soft drinks, but it should have ordered orange juice?
The point is that AmI can reduce one‘s autonomy (Schuurman et al, 2009; Gill, 2007).

- Reliability ‗One central challenge consists of achieving reliability and fail-safe
stability in autonomous systems with ambient intelligence‘ (Federal Ministry of
Education and Research, 2007, p 39). For example, a user relies on an AmI system to
open the door when he or she stands in front of it. But what happens when the door
remains closed? We heavily rely on the system and things may go wrong if the system
fails or malfunctions.

References
Aarts, E. (2003). Technological Issues in Ambient Intelligence. In E. Aarts & S. Marzano
(Eds.), The New Everyday: Visions of Ambient Intelligence (pp. 12-17). Rotterdam: 010
Publishers.
Aisola, K. (2005). The European Union's Sixth Framework Programme: a Layperson's Guide
to Funding. Tactical Technology Collective, Amsterdam.
Ducatel, K., Bogdanowicz, M., Scapolo, F., Leijten, J. & Burgelman, J-C. (2010). That's what
friends are for. Ambient Intelligence (AmI) and the IS in 2010. Retrieved May 4, 2010,
from http://www.itas.fzk.de/eng/e-society/preprints/esociety/Ducatel%20et%20al.pdf
de Ruyter, B., & Aarts, E. (2004). Ambient intelligence: visualizing the future. In Proceedings
of the Working Conference on Advanced Visual Interfaces, AVI 2004 (pp. 203-208).
Federal Ministry of Education and Research (BMBF). (2007). ICT 2020 - Research for
Innovations.
Gasson, M. & Warwick, K. (2007). D12.1 Study On Emerging AmI Technologies. FIDIS -
Future of Identity in the Information Society.
Gill, J. (2008). Ambient Intelligence - Paving the way. Cost.
Holmlid, S. & Björklind, A. (2003). Ambient Intelligence to go, AmiGo white paper on
mobile intelligent ambience. Research Report SAR-03-03.
ISTAG. (2005). Ambient Intelligence: from Vision to Reality. Retrieved May 4, 2010, from
http://www.vepsy.com/communication/book5/04_AMI_istag.pdf
ISTAG. (2001). ISTAG Scenarios for Ambient Intelligence 2010. Retrieved May 4, 2010,
from ftp://ftp.cordis.europa.eu/pub/ist/docs/istagscenarios2010.pdf
MIT Project Oxygen. (2010). Retrieved March 26, 2010, from http://oxygen.lcs.mit.edu/

Philips Research. Retrieved January 31, 2010, from
http://www.research.philips.com/technologies/projects/ambintel.html
Schuurman, J.G., El-Hadidy, F.M., Krom, A. & Walhout, B. (2009). Ambient Intelligence -
Viable Future or Dangerous Illusion? Rathenau Institute, The Hague.
Weiser, M. (1991) The Computer for the 21st Century, Scientific American, vol. 265, no. 3,
September 1991 (reprinted in IEEE Pervasive Computing: Mobile and Ubiquitous
Systems, vol. 1, no. 1, January-March 2002).
Weiser, M. and Brown, J.S. (1995) Designing Calm Technology. Accessed April 7, 2006,
from: http://www.ubiq.com/weiser/calmtech/calmtech.htm
Wikipedia entry on Ambient Intelligence. Retrieved February 2, 2010, from
http://en.wikipedia.org/wiki/Ambient_intelligence#CITEREFAartsHarwigSchuurmans2001

6.6.3 Artificial Intelligence

1. History

Artificial Intelligence (AI) is a diverse field of R&D dealing with machine intelligence. Its underlying concepts reach back many centuries into Greek philosophy and mathematics. The idea of universal computation has been discussed in connection with concepts for computational devices since at least the times of Charles Babbage and Ada Lovelace. Alan Turing described a theoretical computing machine in 1936/7 (Turing, 1936; 1938), while the brain was first described as a network of interconnected neurons by McCulloch and Pitts (McCulloch & Pitts, 1943). Artificial Intelligence was "christened" and launched as a large-scale scientific endeavour at the so-called "Dartmouth Conference" in the summer of 1956. The first wave of enthusiasm for AI encouraged radical predictions, such as the existence of machines with intelligence like a human's within a single generation. The 1950s also saw the first experiments with machines based on the neural network concept, so-called perceptrons. The challenges faced by AI were generally underestimated by its champions, and when promised results failed to materialise, the interest necessary to ensure funding of research waned. Paradoxically, AI pioneer Marvin Minsky, who had been involved in the development of one of the first computer neural networks, contributed to their downfall with a publication showing their limitations. The resulting disillusionment led to a so-called AI winter, which lasted until Japan launched an ambitious programme for the development of the "Fifth Generation" of computers with a strong focus on AI, and advances in "expert systems" gave new hope that AI would be able to realise its ambitious goals after all. There followed another "golden age" of AI with renewed optimism and radical predictions, which ended when it became obvious that the predictions would not be realised and that existing AI systems had limitations. AI is currently experiencing renewed interest, for reasons described later.
As mentioned in the first paragraph, the field of AI is quite diverse, ranging from robotics to expert systems to neuroscience-oriented perspectives, to mention a few. The common denominator across these diverse subfields, however, is the creation of machines that possess or emulate human-like intelligence. This article does not claim to provide a comprehensive description of the field but rather gives a restricted and perhaps narrow perspective on the AI that the ETICA project considers likely to emerge and become more visible in the medium term of 10 to 15 years. In addition, as AI is one of the identified emerging technologies, some perspectives on this technology are likely to have been covered under one or more of the other identified technologies.
As such, the field can roughly be divided into so-called "strong AI", which aims to develop
machine intelligence equal or superior to human intelligence, and "applied AI" (sometimes
also known as "weak AI"), which focuses on applications of machine intelligence for specific
tasks and services. Applied AI is already widely used in such fields as e-business, mobility
and security technologies and may be having a strong impact on our society and culture.
Today, successful AI applications range from custom-built expert systems to mass-produced
software and consumer electronics.
The research programme of strong AI has goals whose realisation would have fundamental impacts on our self-understanding as Homo sapiens, since humans would create a real partner "species" or even a potential successor of humanity. While pioneers of strong AI, such as Marvin Minsky (Wolff, 2006), portray the last decades of the twentieth century (in which applied AI was dominant) as wasted time for AI, the proponents of applied AI argue that the unrealistic and far-reaching claims of Minsky and others have discredited the field in the view of the public and of decision makers. It is of particular interest that there appears to be a renaissance of strong AI in the 2000s, since the development of strong AI would mean technology becoming a counterpart of biology, which may "live" and evolve in a similar way to biological species and together with humanity.

2. Application Areas/Examples

Application examples include:

- Software agents
- Artificial brains in "artificial people"
- Artificial Intelligence chips
- Control systems for robots
- Expert systems
Current AI approaches are quite diverse and the products of AI research have frequently been
incorporated into larger systems not generally tagged as AI. These include data mining,
industrial robotics, speech recognition and even the Google search engine. Some AI has
trickled into the mainstream of computer science, but due to the traumatic experience of the
AI winters, researchers avoid the term for fear of being identified with the radical visions
linked with the strong AI approach.
The Korean 2025 Vision foresees that ―(t)he development of artificial intelligence and
information communicative devices will make it possible to lead to a comfortable and
automated home and society‖ (Korean Government, 2000, p. 13). This is obviously related to
ambient intelligence.
Newer areas are less concerned with the business of making computers think, focusing instead
on what can be referred to as "weak AI": the development of practical technology for
modelling aspects of human behaviour (OFCOM, 2007). In this way, AI research has
produced an extensive body of principles, representations, and algorithms. "Today, successful
AI applications range from custom-built expert systems to mass-produced software and
consumer electronics" (Arnall, 2003).
Software Agents in e-commerce
A specific application of AI research is ―software agents‖, with research on the topic starting
as far back as the late 1980s. Today, software agents are mainly understood as programs that
are able to work independently (autonomous), are able to react to changes in their
environment (reactive), are able to act proactively and can communicate with other software
agents. In the early 1990s the term ―software agents‖ was used quite broadly, covering
different areas of research in software technology. Uses include routine tasks such as complex
searches for information in libraries and network attached storages and processing of (digital)
products in e-commerce. Software agents on the one hand need context awareness to understand,
e.g., the wishes of their users; on the other hand, they need a certain type of "intelligence" to be
able to make decisions. An important area of use for software agents today is simulation in
science and computer games. In this context a number of highly specialised agents have been
developed and are in use. Software agents may play a major role in ambient intelligent
environments, searching autonomously for information, evaluating it and drawing
conclusions, including adaptive decision making. Research on software agents is carried out in
the private sector (mainly enterprises) and by public research institutions (mainly universities)
worldwide. In Europe coordination of stakeholders is supported by the EU in the context of
the IST program.
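The four agent properties named above (autonomous, reactive, proactive, communicative) can be sketched in a toy e-commerce agent. The environment, message format and class are entirely invented for illustration; real agent platforms (e.g. FIPA-compliant systems) are far richer.

```python
# Minimal sketch of the software-agent properties described above:
# autonomous (own control loop), reactive (responds to percepts),
# proactive (pursues a standing goal), communicative (sends messages).
# All names and the message format are hypothetical.

class ShoppingAgent:
    def __init__(self, goal_item):
        self.goal_item = goal_item      # proactive: a standing goal
        self.outbox = []                # communicative: messages to peers

    def perceive_and_act(self, price_quotes):
        """Reactive step: act on whatever the environment reports.
        Quotes of None mean the seller has no offer."""
        offers = {s: p for s, p in price_quotes.items() if p is not None}
        if not offers:
            # No offers yet: proactively keep pursuing the goal.
            self.outbox.append(("broadcast", f"seeking {self.goal_item}"))
            return None
        seller = min(offers, key=offers.get)   # pick the cheapest offer
        self.outbox.append((seller, f"buy {self.goal_item}"))
        return seller

agent = ShoppingAgent("milk")
# Autonomous loop: the agent acts on each new percept without user input.
for quotes in [{"shopA": None, "shopB": None}, {"shopA": 1.2, "shopB": 0.9}]:
    agent.perceive_and_act(quotes)
print(agent.outbox[-1])   # -> ('shopB', 'buy milk')
```

The sketch shows only the control structure; the "intelligence" needed to understand user wishes or negotiate with other agents would replace the simple `min` over offers.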
Artificial brains
AI research is also looking into the application area of artificial brains, in which human-like
functions would emerge from artificial brains based on natural principles. The resonant
tunnelling diode possesses characteristics similar to the channel proteins responsible for
much of neurons' complex behaviour. Higher functions of the brain are emergent properties
of the interactivity between neurons, collections of neurons, and the brain and its
environment. "Artificial people" would be very human-like, given that their natural
intelligence would develop within the human environment over a long course of close
relationships with humans; artificial humans would not be any more like computers than
humans are: they would not be programmable or especially good at computing; and artificial
people would need social systems to develop their ethics and aesthetics. A human-brain
interface could be based on minimally invasive nano-neuro transceivers, with communication
based on the same neural fundamentals as the artificial brain. Humans might find
enhancement via such paths risky.
Artificial Intelligence Chips
At a slightly less ambitious level, but already expected by 2025, are Artificial Intelligence
chips enabling computers to understand human feelings, possibly even to read information
from the brain using electro-magnetic information (Korean Government, 2000). AI systems
are also used to control robots, particularly those operating in environments inhabited by
humans and other life forms.
Control system for robots
Pires (2007) states that robot control systems are "electronic programmable systems
responsible for moving and controlling the robot manipulator, providing also the means to
interface with the environment and the necessary mechanisms to interface with regular and
advanced users or operators" (p. 86). Such a control system is what enables the robot's
functionality, including its behaviour in unforeseen circumstances, for example when an
error occurs.
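The responsibilities Pires lists (moving the manipulator, interfacing with the environment through sensors, and coping with errors) can be illustrated by a toy control loop. Everything below is invented for illustration and is not drawn from Pires:

```python
# A toy sketch of a robot control loop: read the environment through a
# sensor, command the manipulator towards a target, and handle unforeseen
# circumstances such as a sensor fault. All names are invented.

def control_step(target, read_sensor, move_joint):
    """One control cycle: sense, compute the error, actuate, catch faults."""
    try:
        position = read_sensor()
        error = target - position
        move_joint(error * 0.5)     # simple proportional correction
        return error
    except RuntimeError:
        move_joint(0.0)             # on failure, command a safe stop
        return None

# Simulated hardware, purely for illustration.
state = {"pos": 0.0}
reading = lambda: state["pos"]
def mover(delta):
    state["pos"] += delta

for _ in range(10):
    control_step(1.0, reading, mover)
print(round(state["pos"], 3))  # converges towards the target of 1.0
```

Each cycle halves the remaining error, so after ten cycles the simulated joint sits very close to the target.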
Expert Systems
Expert systems come in many forms and can therefore be applied in many areas, including
business and medicine among others. Expert systems are capable of high-level processing and
interpretation of data. Engelmore & Feigenbaum (1993) state that "AI programs that achieve
expert-level competence in solving problems in task areas by bringing to bear a body of
knowledge about specific tasks are called knowledge-based or expert systems. Often, the term
expert systems is reserved for programs whose knowledge base contains the knowledge used
by human experts, in contrast to knowledge gathered from textbooks or non-experts. Taken
together, they represent the most widespread type of AI application".9 An example of such an
application is given by Malone (1993), who outlines how expert systems can be applied in
accounting, particularly for "tax, auditing, financial modeling, managerial decision making,
personnel selection, accounting education and training, and decision reinforcement" (p.1)
purposes. In relation to medicine, Mauno & Crina (2008) discuss how medical expert systems
can assist clinicians with "the diagnostic processes, laboratory analysis, treatment protocol,
and teaching of medical students and residents." (p.1)
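The knowledge-based architecture Engelmore and Feigenbaum describe, a knowledge base of task-specific if-then rules driven by an inference engine, can be illustrated with a toy forward-chaining sketch. The accounting-flavoured rules below are invented for illustration, loosely echoing Malone's tax and auditing examples:

```python
# Minimal forward-chaining expert system: a knowledge base of if-then rules
# plus an inference engine that fires rules until no new facts appear.
# The tax-domain rules and facts are invented purely for illustration.

rules = [
    ({"income_declared", "deductions_claimed"}, "needs_receipts"),
    ({"needs_receipts", "receipts_missing"}, "flag_for_audit"),
]

def infer(facts, rules):
    """Repeatedly fire any rule whose premises are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"income_declared", "deductions_claimed", "receipts_missing"}, rules))
```

Real expert systems add much richer rule languages, uncertainty handling and explanation facilities, but the separation between the knowledge base (the rules) and the inference engine (the loop) is the point of the architecture.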
Finally, a strong faction of AI researchers and champions expects the realisation of a
"singularity" in line with Ray Kurzweil's vision (Kurzweil, 2005): technology exceeding
the capacity and processing power of the human brain should be technologically feasible
within the foreseeable future. Some of them predict major cultural, societal and political
disruptions, or even a global war, resulting from these developments (Garis, 2005).

9
http://www.wtec.org/loyola/kb/c1_s1.htm
3. Definition and Defining Features

Definition

It is difficult to give a universally accepted definition of artificial intelligence, since a major
feature is the display of behaviour that would be acknowledged as evidence of "intelligence"
in humans or other natural beings; in AI, this intelligence is the result of computer
programmes. There are distinctly different approaches to the realisation of AI, starting with
the divide between "strong AI", which claims to understand intelligence in humans and other
living beings, and "weak AI", which more modestly seeks to find processes which lead to the
same results as intelligence in natural beings. In both cases, the products of research are
computer programmes representing intelligent processes. It is clear from this that AI's most
defining feature is intelligence.
A major problem for artificial intelligence is that the concept of intelligence itself is by no
means clear and has changed over time. New discoveries from research on human intelligence
have had an impact on research on "artificial intelligence".
A working definition of intelligence is provided by Hutter and Legg (2005): "[i]ntelligence
measures an agent's ability to achieve goals in a wide range of environments". This implies
the ability to learn, adapt and "understand" or infer.
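In their fuller treatments, Legg and Hutter give this informal definition a mathematical form, roughly as follows (a sketch in their notation, not a quotation from the cited two-page paper):

```latex
% Universal intelligence of an agent \pi (sketch of Legg & Hutter's measure):
% a complexity-weighted sum of expected performance over a set E of
% computable environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}
```

Here K(mu) is the Kolmogorov complexity of environment mu and V is the expected total reward the agent pi achieves in mu: simpler environments carry more weight, but it is performance across a wide range of environments that counts, matching the verbal definition above.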
Modern concepts of intelligence employed in the cognitive sciences and AI research no
longer restrict intelligence to the manipulation of symbols (rationality), but embrace
further components, including creativity, problem solving, pattern recognition, classification,
learning, induction, deduction, building analogies, optimization, surviving in an environment,
language processing, planning and knowledge. Research suggests that there is no real
separation between rational thinking and emotion in human beings, which provides a
justification for the rise of the related field of "affective computing" (Picard, 1995).
Since the late 1980s, there has been a completely new approach, derived from robotics, which
builds on the belief that a machine needs to have a body moving around in the world to
display true intelligence (nouvelle AI). This is linked with a rejection of the division of
cognition into layers (perception – recognition – planning – execution – actuation), which is
replaced with the notions of reactivity and situatedness: here, intelligence stems from a tight
coupling between sensing and cognition. Nouvelle AI similarly rejects the notion of
parameterising the functional parts of robotic artefacts. It gives "the evolutionary code full
access to the substrate" and thus "the search procedure does without conventional human
biases, discovering its own ways to decompose the problem – which are not necessarily those
human engineers would come up with."11 According to this approach, intelligence is an
emergent property.

Defining Features

Intelligence: From the definition given above, it is clear that one of AI's defining features is
intelligence. This is key to the technology, especially as technologies in this realm are
expected to think in a human-like way and as such to display human intelligence.

11
http://www.cs.brandeis.edu/~pablo/thesis/html/node22.html, accessed 10 March 2010.

4. Time Line

Computer systems that display rudimentary to complex intelligence are expected within the
next 15 years (Korean Government, 2000). Kurzweil predicts a "Singularity" within
approximately 30 years.
Progress in Neuroscience fuelled by new and advanced brain imaging techniques to visualise
processes in the human brain and to link these with specific types of human thinking is
expected to provide a basis for rapid advances in AI.
It will be possible to build computers of ever-increasing size and power thanks to the
continued validity of "Moore's Law"; nanotechnology holds the promise of a further
continuation once silicon technology reaches its limits. Kurzweil estimates human brain
capacity at 100 billion neurons, with 1,000 connections per neuron, each conducting
200 calculations per second. Thus computers possessing the capacity of one human brain for
$1,000 should be available by 2023, with the price dropping to a single cent by 2037. A
computer with the brain capacity of the entire human race for $1,000 should be available by
2049, with the price dropping to a single cent by 2059 (Kurzweil, 2001).
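Kurzweil's brain-capacity figure is simple to check as arithmetic from the numbers just quoted:

```python
# Kurzweil's back-of-the-envelope estimate of human brain capacity, as
# reported above: neurons x connections per neuron x calculations per second.
neurons = 100e9          # 100 billion neurons
connections = 1000       # connections per neuron
calc_per_sec = 200       # calculations per second per connection

brain_ops_per_sec = neurons * connections * calc_per_sec
print(f"{brain_ops_per_sec:.0e}")  # 2e+16 operations per second
```

That is, the estimate amounts to roughly 2 x 10^16 operations per second per brain, the figure against which Kurzweil's price-performance extrapolations are made.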
The computer metaphor for the brain has been sufficiently powerful and pervasive over a
period of time to inform the mainstream of AI research: roughly speaking, the goal of this
kind of AI has been to construct an artificial brain working to the same principles as the
human brain. While there have always been other approaches to artificial intelligence, such as
programming processes that achieve results like those of human brains without claiming
actually to model brain processes, the science fiction author and computer scientist Vernor
Vinge remains optimistic about the computer-model approach: "Much of this research will
benefit from improvements in our tools for imaging brain functions and manipulating small
regions of the brain" (Vinge, 2008).

5. Relation to other Technologies

Artificial intelligence itself draws strongly on results from the cognitive sciences, with much
recent interest in neural imaging techniques. The ties with robotics run in both directions: AI
is a crucial element in the development of robots, while more recent approaches to AI have
relied on robotics to provide the "body" needed for intelligent behaviour to emerge from
interaction with the environment. AI is also an essential component of ambient intelligence
and of internet technology.

6. Critical Issues

‗...higher cognitive processes such as decision taking, learning and action still pose major
challenges. Despite progress in some areas within cognitive systems and models, the
provisional conclusion is that many hurdles still have to be overcome before an artificial
system will be created which approaches the cognitive capacities of humans‘ (European
Technology Assessment Group, 2006).

'Associated with this reality check is the recognition that classical attempts at modeling AI,
based upon the capabilities of digital computers to manipulate symbols, are probably not
sufficient to achieve anything resembling true intelligence. This is because symbolic AI
systems, as they are known, are designed and programmed rather than trained or evolved. As
a consequence, they function under rules and, as such, tend to be very fragile, rarely proving
effective outside of their assigned domain. In other words, symbolic AI is proving to be only
as smart as the programmer who has written the programmes in the first place' (Arnall, 2003).
The following ethical issues are mentioned in the database in relation to AI:

- Autonomy and rights: 'Many of the major ethical issues surrounding AI-related
  development hinge upon the potential for software and robot autonomy. In the short
  term, some commentators question whether people will really want to cede control
  over our affairs to an artificial intelligent piece of software, which might even have its
  own legal rights. While some autonomy is beneficial, absolute autonomy is
  frightening. For one thing, it is clear that legal systems are not yet prepared for high
  autonomy systems, even in scenarios that are relatively simple to envisage, such as the
  possession of personal information. In the longer-term, however, in which it is
  possible to envisage extremely advanced applications of hard AI, serious questions
  arise concerning military conflict, and robot take-over and machine rights' (Arnall,
  2003, p. 57).

- Robots overtaking humankind: There has always been a strong faction of AI
  researchers seeking to create an artificial intelligence superior to that of humans. One
  proponent of this position, Hans Moravec, called a popular book "Mind Children"
  (Moravec, 1988), arguing that AI would one day inherit the position of humans as the
  most powerful intelligence on earth. The idea of a super-intelligence is also at the root
  of the "singularity" popularized by Ray Kurzweil (2005): technology exceeding the
  capacity and processing power of the human brain should be technologically feasible
  within the foreseeable future. Other approaches seek to enhance human beings
  with the results of research in AI and robotics. A well-known proponent of this
  approach is Kevin Warwick, Professor of Cybernetics at the University of Reading, UK.

References

Academic publications
Garis de, H. (2005). The Artilect War. Palm Springs: ETC Publications.
Hutter, M., Legg, S. (2005): A universal measure of intelligence for artificial agents.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
(IJCAI), Edinburgh, July 30 to August 5, 2005. San Francisco: Morgan Kaufmann
publishers. Pp. 1509-1510.
Kurzweil, R. (2005). The Singularity is Near. When humans transcend biology. New York:
Viking
Malone, D. (1993). Expert systems, artificial intelligence, and accounting. Journal of
Education for Business, Vol. 68, Issue 4, Mar/Apr 1993.
Mauno, V. & Crina, S. (2008). Medical Expert Systems. Current Bioinformatics, Vol. 3,
No. 1, January 2008, pp. 56-65. Bentham Science Publishers.
McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous
activity. Bulletin of Mathematical Biophysics, 7:115 – 133
Moravec, H. (1988). Mind Children. Cambridge, Mass.: Harvard University Press

Picard, R.W. (1995). Affective Computing. MIT Media Laboratory Perception Computing
Section Technical Report No. 231, Revised November 26, 1995
Pires, N. J. (2007). Industrial Robots Programming: Building Applications for the Factories of
the Future. Springer.
Turing, A.M. (1936). "On Computable Numbers, with an Application to the
Entscheidungsproblem". Proceedings of the London Mathematical Society, Series 2,
Vol. 42 (1937), pp. 230-265. doi:10.1112/plms/s2-42.1.230
Turing, A.M. (1938). "On Computable Numbers, with an Application to the
Entscheidungsproblem: A correction". Proceedings of the London Mathematical
Society, Series 2, Vol. 43, pp. 544-546. doi:10.1112/plms/s2-43.6.544
Vinge, V. (2008). Signs of the Singularity. IEEE Spectrum Special Issue: The Singularity,
June 2008
Wolff, P. (2006): Ewig in der Zukunft (interview with Marvin Minsky). Süddeutsche Zeitung
139/2006, 16.

Governmental/regulatory sources
European Technology Assessment Group. (2006). Technology Assessment on Converging
Technologies. Retrieved December 28, 2009, from
http://www.europarl.europa.eu/stoa/publications/studies/stoa183_en.pdf
Korean Government. (2000). Vision 2025 Taskforce – Korea‘s long term plan for science and
technology development. Retrieved February 16, 2010, from
http://www.inovasyon.org/pdf/Korea.Vision2025.pdf

Web Sites/Other sources


Arnall, A. H. (2003). Future Technologies, Today‘s Choices. Nanotechnology, Artificial
Intelligence and Robotics; A technical, political and institutional map of emerging
technologies. Retrieved December 28, 2009, from
http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf
Engelmore, R. S. & Feigenbaum, E. (1993) Expert Systems and Artificial Intelligence. In
Knowledge-based systems in Japan. Japanese Technology Evaluation Centre.
Retrieved on 23 June, 2010, from http://www.wtec.org/loyola/kb/c1_s1.htm
Kurzweil, R. (2001): The Law of Accelerating Returns.
http://www.kurzweilai.net/articles/art0134.html?printable=1

6.6.4 Bioelectronics

1. History
One can state that bioelectronics is quite an old scientific research area, although the actual
term "bioelectronics" can only be traced back to the mid-1970s. The first attempts to integrate
biology and electronic devices were associated with medical developments, for example the
electrocardiograph, which records the electrical activity of the heart, and the field of
radiology, where magnetic resonance imaging (MRI), computed tomography (CT) and
positron emission tomography (PET) have made it possible to identify and treat diseases and
physical injuries. On the other hand, visions of a "cyborg appearance" have also been attached
to the development of bioelectronics, both by scientists (lately) and in science fiction (a bit
earlier on). (Walker et al 2009, Katz 2006, McGee 2008, Cass 2007).
2. Application Areas/Examples

Application Areas

Bioelectronics application areas are diverse and, as Katz (2006) indicates, include:

- Basic science
- Medicine
- High-tech industry
- Military
- Homeland security applications
- Biocomputing

Application Examples
As bioelectronics is itself a multidisciplinary research area, further integrated with the
cognitive sciences, nanotechnology and neurotechnologies, the possible application areas are
numerous. At the moment the development of bioelectronics focuses on medical, personal
health care and well-being applications and solutions, but future visions also include the use
of bioelectronics in highly imaginative ways. For example, bioelectronics is thought to be an
important building block for cyborgs or for establishing an immortal physical body (McGee
2008). The following examples sketch diverse application areas of bioelectronics. They cover
areas of interest with concepts of bioelectronics as being within you, with you or around you
(Varma 2008).
i. Environmental monitoring and homeland security (Biosensing)
Bioelectronic noses (sensing devices) (see also Göpel et al. 1998). "Of the five human senses, the sense
of smell is least understood by scientists and engineers. Odors can be simply described as
chemicals carried in the air. The scientific challenge is to develop a sensing system capable of
detecting trace amounts of chemicals that are associated with a particular class of odor. The
electronic NOSE (Natural Olfactory Sensor Emulator™) platform project investigates the use
of a sensing system along with an artificial neural network to distinguish specific chemicals
from certain odors. An exciting application of the e-NOSE is to determine the physiological

status of shock and trauma patients by monitoring their breath for volatile organic
compounds. Experiments are being conducted on the e-NOSE to examine improving sensor
performance through design and material selection, characterizing the sensing of various
compounds, and developing a neural network that can identify the presence of specific
chemicals by analyzing the electrical signals from the sensor array. Future research
breakthroughs in the e-NOSE platform can have important applications in environmental
monitoring and homeland security‖12.
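The quoted e-NOSE platform pairs a chemical sensor array with an artificial neural network that maps electrical signal patterns to odour classes. As a much-simplified illustration of that mapping step, the sketch below substitutes a nearest-centroid rule for the neural network; all sensor signatures and readings are invented:

```python
# Toy illustration of the e-NOSE idea: an array of chemical sensors produces
# an electrical signal pattern, and a classifier assigns that pattern to an
# odour class. The real platform uses an artificial neural network; this
# sketch uses a nearest-centroid rule, and all values are invented.
import math

# Invented reference signatures: mean sensor-array response per odour class.
signatures = {
    "acetone": [0.9, 0.1, 0.4],
    "ethanol": [0.2, 0.8, 0.5],
}

def classify(reading):
    """Assign a sensor reading to the closest known odour signature."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(signatures, key=lambda odor: dist(reading, signatures[odor]))

print(classify([0.85, 0.15, 0.35]))  # closest to the acetone signature
```

A trained neural network replaces the fixed signatures with learned weights, which is what lets the real system separate trace chemicals whose signal patterns overlap.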

ii. Homeland security applications (biometrics, forensics)


(Katz 2005, NCBES 2010). ―There is a significant need for advancements in DNA typing for
forensics applications. In particular, a backlog of over 800,000 DNA samples remains
untested throughout the nation at the state and federal level. Research groups are beginning to
develop miniaturized devices targeting DNA analysis for forensics applications. New lab-on-
a-chip devices that can perform rapid DNA analysis will help address this backlog, for
example by allowing law enforcement officials to conduct testing at the scene of the crime.
Lab-on-a-chip systems require much smaller sample volumes than traditional DNA analysis
protocols and the amount of sample handling required is minimized by incorporating
processing procedures, thus reducing contamination and chain of custody issues." (Walker et
al 2009, p.21).

iii. Medicine/Health care/Well-being


Biosensors for measuring human physical parameters: ―C3B researchers are working
pertinaciously to develop an implantable biosensor for monitoring lactate and glucose levels.
Funded by the Department of Defence, the goal of this platform project is to develop a
temporary implantable biosensor with wireless transmission capabilities. Packaging a dual
sensing element biochip into the biosensor poses significant engineering challenges.
Experiments are being conducted to investigate the amperometric response of the biochip to
glucose and lactate, the biocompatibility of hydrogels used for coating the biochip, and the
biochip‘s performance in laboratory animals.13‖
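The amperometric biosensor described above produces a current that the readout electronics convert to an analyte concentration through a calibration curve. A minimal sketch of that readout step, with a linear calibration and invented constants:

```python
# Toy sketch of amperometric biosensor readout: the measured current is
# converted to an analyte concentration by inverting a linear calibration.
# The calibration constants below are invented for illustration.

SLOPE = 2.0        # assumed sensitivity: microamps per (mmol/L) of glucose
INTERCEPT = 0.1    # assumed background current in microamps

def glucose_mmol_per_l(current_ua):
    """Invert the linear calibration: concentration from measured current."""
    return (current_ua - INTERCEPT) / SLOPE

print(glucose_mmol_per_l(10.1))  # -> 5.0 mmol/L
```

Real implantable sensors face exactly the complications the quote mentions (biocompatible coatings, drift in vivo, dual analytes), which is why the calibration must be characterised experimentally rather than assumed linear.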
iv. Point-of-care diagnostics

―Recent advances in lab-on-a-chip technology allow new systems to be developed that can
provide diagnostic information in a handheld device. The most popular commercial example
of this is the i-STAT blood gas analyzer, (www.i-stat.com), which provides information on
patient blood samples in a handheld unit. Point-of-care (POC) devices are being developed
that can analyze patient samples for a variety of molecular biomarkers. Applications
demonstrated include detection of circulating tumor cells, activation of signalling pathways
associated with malignancies, chemical and biological warfare agent exposure, detection of
food poisoning, and the detection of influenza.‖ (Walker et al 2009, p.23).
v. Nanorobots in medicine

From eliminating the side effects of chemotherapy to treating Alzheimer's disease, the
potential medical applications of nanorobots are vast and ambitious. In the past decade,
researchers have made many improvements to the different systems required for developing
practical nanorobots, such as sensors, energy supply, and data transmission14.

12
http://www.clemson.edu/c3b/projects.html
13
http://www.clemson.edu/c3b/projects.html

3. Definition and Defining Features


Definition
The current vision of bioelectronics formulates it as a multi-faceted scientific and
technological area that includes the electronic (or optoelectronic) coupling of biomolecules,
or their natural or artificial assemblies, with electronic or optoelectronic devices (Katz 2006,
Katz 2005). Bioelectronics has been defined as aiming "at the direct
coupling of biomolecular function units of high molecular weight and extremely complicated
molecular structure with electronic or optical transducer devices. Alternative and new
concepts are being developed for future information technologies to address, control, read and
use information. This requires the development of structures for signal uptake, transduction,
amplification, processing and conversion.‖ (Göpel 1998, p.723). The interfacing of
biomaterials and electronic devices will be used to: ―transduce chemical signals generated by
biological components into electronically (or photonically) readable signals, or to activate the
biomaterials by applying electronic (or optical) signals, thus resulting in the
switchable/tunable performance of the biological components.‖ (Katz 2006, p.1855). The
bioelectronic systems can be used to develop sensing devices and to develop biofuel cells (i.e.
implantable biofuel cells for biomedical applications, self-powered biosensors, autonomously
operated devices, etc.). (Katz 2006).
Furthermore, it has been stated that adding the vision and development of nanotechnology to
the combination of biotechnology and electronics will further emphasise a major driving force
in the research and development of novel materials and information technologies: the
miniaturisation of devices and systems to the nano-scale (i.e. nano-objects such as
nanoparticles, quantum dots, carbon nanotubes, nanorods, etc.) (Katz 2006, Göpel 1998, Cass
2007).
Research on the properties of cells and their interactions with the environment has made
possible new forms of integration and interaction between biomaterials and electronic
systems. Bioelectronics is promising for medical devices (implants, biosensors for monitoring
purposes) but it also has lots of wider developmental opportunities for different areas.
Applications include nanorobots, biological computers, biosensors, biochips, implants able to
communicate with nerves, artificial stimulation of nerves, artificial touch, and artificial
organs.

Defining Features
- Miniaturisation: As new technological solutions get smaller and smaller while
  carrying more computational power than the supercomputers of 50 years ago, we are
  approaching the visions of nanotechnology.
- Invisibility: Miniaturisation of technology makes invisibility possible: it becomes
  ever harder to see how and what technology is utilised in a particular context. As
  bioelectronics becomes still more invisible, the technology will in some sense be a
  much more unnoticeable part of our everyday lives.

14
http://www.physorg.com/news116071209.html

73
- Technology as part of the human being (within/with/around): Bioelectronics will be
  integrated very deeply into our lives. Already there are bioelectronic solutions that are
  inside human beings, worn as wearables, or surrounding us without anyone other than
  the "user" or "controller" themselves knowing about these technologies.

- Monitoring everything we do/are: Bioelectronics makes it possible to monitor every
  aspect of human life and various environments in invisible ways.

- Treatment of previously incurable diseases or injuries: Bioelectronics offers new
  solutions for previously incurable diseases and injuries; in that sense it is akin to a
  new medicine for the treatment of human beings.

- Control over your body and mind: With bioelectronic applications it might even be
  possible to control individuals remotely.

4. Timeline
1900 - First attempts at integrating biology with electronics
1950 - Establishment of a strong "bioelectronic" research and practical landscape, especially
for medical applications
1970 - Emergence of bioelectronics as a term
2010 - Strong emphasis on the development of bioelectronic science; implementation of
various biosensors for various purposes, with strong investment in further development of the
research area (e.g. bionanotechnology)

5. Relation to other Technologies


Bioelectronics is closely related to nanotechnology, biotechnology and electronics, as it is a
clear convergence of these technologies. It also seems to be an enabling technology for
applications commonly described in relation to affective computing, human-machine
symbiosis or ambient intelligence. Quantum computing could also be seen as an enabling
technology for biocomputing, which is likewise mentioned as an application area of
bioelectronics.

6. Critical Issues
Dehumanising factors: McGee (2008) points out that bioelectronics is one of the most
important enabling technologies that can affect the current meaning of being human. She
states that: "In the future, if it becomes possible both to clone an individual and to implant a
chip with the uploaded memories, emotions, and knowledge of the clone's source, a type of
immortality could be achieved" (p.208).

References
Academic Publications
Göpel, W. (1998). Bioelectronics and nanotechnologies. Biosensors and Bioelectronics,
Vol. 13, No. 6, September 1998, pp. 723-728. Elsevier.

Göpel, W., Ziegler, Ch., Breer, H., Schild, D., Apfelbach, R., Joerges, J. and Malaka,
R. (1998). Bioelectronic noses: a status report, Part I. Biosensors and Bioelectronics,
Vol. 13, Issues 3-4, 1 March 1998, pp. 479-493. Elsevier.
Katz, E. (2006). Bioelectronics. Electroanalysis, Vol. 18, No. 19-20, pp. 1855-1857.
Willner, I. and Katz, E. (eds) (2005). Bioelectronics: From Theory to Applications.
Wiley-VCH, Weinheim, Germany.
McGee, E. (2008). Bioelectronics and Implanted Devices. In Medical Enhancement and
Posthumanity (The International Library of Ethics, Law and Technology, Vol. 2).
Springer Netherlands, pp. 207-223.
Websites
Walker, G.M., Ramsey, J.M., Cavin, R.K., Herr, D.J.C, Merzbacher, C.I., Zhirnov, V. (2009).
A Framework for BIOELECTRONICS – Discovery and Innovation. Bioelectronics
roundtable report. Retrieved 2 June, 2010, from
www.src.org/trc/bio/.../E003426_roadmapping_framework.pdf
Zyga, L. (2007) Virtual 3D nanorobots could lead to real cancer-fighting technology. Retrieved 8
June, 2010 from http://www.physorg.com/news116071209.html.
Research Groups
The Integrated Bioelectronic Research Laboratory (IBR) at UCSC (UC Santa Cruz)
http://ibr.soe.ucsc.edu/?file=kop1.php
The Center for Bioelectronics, Biosensors and Biochips at Clemson University.
http://www.ces.clemson.edu/c3b/index.html
Research group of Bioelectronics & Bionanotechnology at Clarkson University
http://people.clarkson.edu/~ekatz/index.html
National Centre for Biomedical Engineering Science (NCBES), National University of
Ireland, Galway, Ireland. http://ncbes.nuigalway.ie/bioelectronics.aspx

6.6.5 Cloud Computing

Similar to:

- Utility Computing
- On-Demand Computing

1. History
The history of cloud computing as a concept that would deliver computing resources to
different locations through a global network can be traced back to the 1960s and Joseph Carl
Robnett Licklider (J.C.R. Licklider), whose idea of an "intergalactic computer network"
seems quite similar to what we now call cloud computing. Licklider was responsible for
enabling the development of ARPANET. Another pioneer of cloud computing is John
McCarthy, who envisioned in the early 1960s that computation might someday be organized
as a public utility like water or electricity. Cloud computing has developed since the 1960s,
and since significant bandwidth became available in the 1990s it could be developed to serve
the masses. The evolution of cloud computing has gone through many phases, such as grid
and utility computing, application service provision (ASP) and Software as a Service (SaaS)
(Mohammed 2009, Dikaiakos et al 2009).
2. Application Areas/Examples
Cloud computing applications and their areas are multiple: in principle, any service one can
access via the Internet could be situated in the cloud. Below are some application examples
which on the one hand refer to already existing applications and on the other hand describe
more future-oriented scenarios for cloud computing. Two simple use cases illustrate the range
of perspectives presented in the scenarios: a company can opt to stop using purchased
accounting software and instead use an accounting service offered by a company providing
cloud services; and a consumer can use cloud services to store very personal data such as
videos, something that is already happening. In many cases the user does not have to pay for
such services with money; instead, payment is based on the commercial messages one has to
accept viewing when using the service.

i. Webmail

―A simple example of cloud computing is webmail. Anyone can access their webmail from
anywhere in the world simply by knowing the web address of the webmail service, there's no
need to know the name of the server or an IP address or anything else. The webmail provider
takes on the job of making sure that there's enough disk space and processing power to allow
all their customers to store and retrieve their mail on demand. The user doesn't have to worry
about maintaining web clients on their PC or, in the case of companies, local email servers.15‖
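The division of labour the webmail example describes, where users address a service by name while the provider manages storage and capacity behind it, can be sketched abstractly. All class and user names below are invented for illustration:

```python
# A toy sketch of the cloud service model described above: clients address a
# service by name only, while the provider manages storage behind it.
# All names are invented for illustration.

class CloudMailProvider:
    def __init__(self):
        self._mailboxes = {}   # provider-managed storage, invisible to users

    def store(self, user, message):
        # The provider's job: ensure there is space and persist the message.
        self._mailboxes.setdefault(user, []).append(message)

    def retrieve(self, user):
        # Users retrieve mail on demand, from anywhere, knowing only their
        # address, not the server name, IP address or disk layout.
        return list(self._mailboxes.get(user, []))

provider = CloudMailProvider()
provider.store("alice@example.com", "Hello from the cloud")
print(provider.retrieve("alice@example.com"))
```

The interface exposes only `store` and `retrieve`; everything about capacity, replication and maintenance stays on the provider's side of the line, which is exactly the point of the utility-computing model.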

ii. Smart Shopping

―A woman is walking down the street of her local town centre. The RFID tag in her jacket is
contacted by a local reader. The reader sends the tag‘s data to a localization service. The
localization service sends this data to a meta-CRM-system that handles consumer related data

15
http://www.bestpricecomputers.co.uk/glossary/cloud-computing.htm

for that particular area. The CRM system recognizes the consumer, looks up her preferences
and offers her, via an SMS to her mobile phone, a 20% sale reduction in a nearby shop, if she
orders immediately (via WAP/mobile phone), which she does after having had a look at the
item in the shop. The payment is processed by a payment service provider, who knows the
consumer's details and charges her credit card. For security purposes, an alert is sent (via a
web service) to a credit card clearance agency, who checks the credit card number against
recent fraud. Unfortunately, there has been a fraudulent action using this credit card, so the
agency informs the police (again via web service). The police management system accesses
the location service to get the location of the consumer and sends two policemen from the
closest office to speak to the consumer.16‖ (The Think-Trust project).

iii. Nomadic Business Organisation

―A network of business people is spread across the world, but they are working as if they
were one company. They do not have any specific physical office space and meetings are held
by using online conferencing tools provided by the Cloud. They use on-line storage for
documents, a service-based customer relationship management (CRM) system, and service-
based financial performance management software. The membership of the network is highly
dynamic, i.e. people join and leave on a very short-notice basis. The software components are
used by this organization via the Cloud, i.e. without knowing where these services actually
run. So not only are the consumers of the services ―nomadic‖, but the services that this
organization consumes have no physical location themselves either.17‖ (The Think-Trust
project).

iv. Cloud-delivered environments used within various organizations of the federal
government

"The Air Force deployed MyBase, a 3-D virtual recruiting and training platform deployed in
Second Life in December, 2008. It was made available to the public, and includes the ability
to model base configurations, deploy on-line conferences, and welcome civilians and soldiers
to its representation of real physical bases. Similarly, the National Guard has developed U.S.
Nexus to provide the same capabilities and is scheduled to be deployed in November,
2009." ... "Second Life figures in the Centers for Disease Control and Prevention's (CDC)
mission of sharing real-life health alerts through avatars in the on-line community. It is also
finding that its presence is felt through outreach and community building." ... "The National
Oceanic and Atmospheric Administration (NOAA) NOAA Virtual World allows the agency
to share its laboratories, classes, research discussions, and conference spaces with students,
citizens, policymakers, and scientists worldwide. A key initiative is this environment's ability
to test data virtualization, or collaborative geographic information systems" (Paquette et al
2010, p.248).

v. Medical practice/patient information sharing

"Sharing patient data, particularly medical images, across the multiple institutions from which
patients receive health care services is a problem for which no good solution currently exists.
Some institutions provide images and reports on portable media such as compact discs, while
some continue to print film. Multisite institutions with business relationships may invest in
network connections and virtual private networks to perform electronic data transfers. Each of
these approaches is problematic and costly and adversely affects the quality of care. In a cloud
computing environment, patients' images and other medical data could be stored in a virtual
generic archive and accessed by health care providers as needed through the cloud. This could
facilitate the sharing of data and significantly reduce local storage requirements." (Andriole &
Khorasani 2010).

16 http://www.think-trust.eu/downloads/think-trust-documents/cloud-computing_v0-2/download.html
17 http://www.think-trust.eu/downloads/think-trust-documents/cloud-computing_v0-2/download.html
3. Definition and Defining Features
Definition
Cloud computing is a recent trend in IT that moves computing and data away from desktop
and portable PCs into large data centres. It basically means that software, different kinds of
services and applications are all delivered as services over the Internet, as is access to the
actual cloud infrastructure (Dikaiakos et al. 2009, Gartner 2009). From a user's point of view,
it means that users can access their files, data, programs and other services, hosted by other
service providers, through a Web browser via the Internet (Won 2009). From a more general
point of view, Cloud computing can be defined as a style of computing where scalable and
elastic IT-enabled capabilities are delivered as a service to external customers using Internet
technologies. As enterprises seek to consume their IT services in the most cost-effective way,
interest is growing in drawing a broad range of services (for example, computational power,
storage and business applications) from the "cloud" rather than from on-premises equipment.
The levels of hype around Cloud computing in the IT industry are deafening, with every
vendor expounding its cloud strategy, and variations such as private Cloud computing and
hybrid approaches compounding the hype (Gartner 2009).

Defining Features
Resource/storage virtualization: Resources and services are delivered to users
and/or organizations via the Internet from resource clouds where almost all information
and tools are kept. Resources can be added or removed as needs change, and
services can be provided on a "pay per use" pricing model. Data may well be stored
in multiple physical locations, across many servers around the world,
possibly owned and administrated by various organizations. There may also be
interconnections of multiple services across the cloud: at different levels, functionality
from different providers is combined to provide a specific service to an end-user. From a
business point of view, the most important change is that Cloud computing shifts the
software business from a product business to a service-oriented business.
Scalability, elasticity and efficiency of resource sharing and usage: Resources
from the service provider are intended to be used with maximum efficiency and to
serve multiple needs for multiple parties at the same time, as they are shared with
other consumers or organizations. People or organizations might not know where the
information or services come from or where their information is kept.
Accessibility, ease of use, fast information sharing, delivery and control: Ease
of use of cloud services is emphasized in many scenarios. The service experience is
assumed to be smooth and quick, with very fast and optimized connections. As users
do not have to install anything on their own computers, it is assumed that they will
always have the most suitable version of the software and service and that
incompatibility between products and services will no longer be a problem. On the other
hand, if the system or service updates automatically, the service could also appear
totally different every time it is used.
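The "pay per use" and elasticity features described above can be illustrated with a small sketch. All names, prices and the scaling rule here are hypothetical illustrations, not any vendor's actual API: provisioned capacity follows hourly demand, and the bill meters only what was actually provisioned.

```python
from dataclasses import dataclass, field

@dataclass
class ElasticService:
    """Hypothetical sketch of an elastically scaled, metered cloud service."""
    rate_per_unit_hour: float          # price charged per capacity unit per hour
    capacity: int = 0                  # currently provisioned capacity units
    billed: float = field(default=0.0)

    def scale_to(self, demand: int) -> None:
        """Provision exactly as much capacity as current demand requires."""
        self.capacity = demand

    def run_hour(self, demand: int) -> None:
        """Serve one hour of load: scale first, then meter the usage."""
        self.scale_to(demand)
        self.billed += self.capacity * self.rate_per_unit_hour

service = ElasticService(rate_per_unit_hour=0.10)
for demand in [2, 10, 3]:              # fluctuating hourly demand
    service.run_hour(demand)

print(service.capacity)                # 3 - capacity shrank back after the peak
print(round(service.billed, 2))        # 1.5 - 15 unit-hours at 0.10 each
```

The point of the sketch is the contrast with on-premises equipment: the consumer never pays for the peak capacity permanently, only for the unit-hours actually used.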

4. Timeline
Cloud Computing is already in use for many applications (e.g. photographic storage). Cloud-based
service business is currently growing fast, and this growth is widely expected to
continue in the forthcoming years. Apart from the consumer-targeted services already in
existence (Facebook, Flickr, email services etc.), professional applications are also believed
to be increasing, through evolution rather than revolution, due to investments in current
solutions (Fenn et al. 2009).

5. Relation to other Technologies

Cloud Computing can be part of any kind of Internet related activity now or in the future. It is
one of the enabling technologies for Future Internet and Ambient Intelligence.

6. Critical Issues

Issues such as trust and reliability are self-evident risks of utilizing cloud computing
applications. Questions of data ownership, security, quality of service and service
agreements arise when scenarios for Cloud computing are presented. For example, one
might be forced by agreement to give away ownership of one's data when utilizing free cloud
services. Similarly, the duration of an agreement may be defined as eternal, or the agreement
may be very troublesome to dissolve.

References
Academic publications
Andriole, K. P., Khorasani, R. (2010). Cloud Computing: What Is It and Could It Be Useful?
Journal of the American College of Radiology, Vol. 7, No. 4, pp. 252-254 (April
2010).
Dikaiakos, M. D., Katsaros, D., Mehra, P., Pallis, G., Vakali, A. (2009). Cloud Computing:
Distributed Internet Computing for IT and Scientific Research. IEEE Internet
Computing, Vol. 13, No. 5, pp. 10-13, September/October 2009.
Paquette, S., Jaeger, P. T., Wilson, S. C. (2010). Identifying the security risks associated with
governmental use of cloud computing. Government Information Quarterly.
Won, Kim (2009). Cloud Computing: Today and Tomorrow. Journal of Object Technology,
Vol. 8, No. 1, January-February 2009.

Websites
Mohamed, A. (2009, March 27). A History of Cloud Computing. Retrieved 13 June 2010,
from ComputerWeekly:
http://www.computerweekly.com/Articles/2009/03/27/235429/a-history-of-cloud-
computing.htm
Cloud Computing. Retrieved on 8 June, 2010 from
http://www.bestpricecomputers.co.uk/glossary/cloud-computing.htm

Research groups/projects/market information

Fenn, J., Raskino, M., Gammage, B. (2009). Hype Cycle for Emerging Technologies.
Gartner. Retrieved 13 June 2010 from:
http://www.gartner.com/resources/169700/169747/gartners_hype_cycle_special__169
747.pdf

The Think-Trust project: Think-Trust (FP7-216890) is a project funded by the European


Commission's 7th Framework Information Society Technologies (IST) Programme,
within the Unit F5 ICT for Trust and Security. It is investigating Trust, Security,
Dependability, Privacy and Identity from ICT and Societal Perspectives. Retrieved 8
June, 2010 from http://www.think-trust.eu/downloads/think-trust-documents/cloud-
computing_v0-2/download.html

6.6.6 Future Internet
Related terms:
Web 3.0, Internet of Things, Real World Internet, Semantic Web, Intelligent Web, Internet of
Services

1. History

The development of digital information networks began in the U.S.A. in the 1960s at the
Advanced Research Projects Agency (ARPA)18, and as a result the first network, called
ARPANET, was opened between four American universities in 1969. The network expanded
rapidly, and alongside it other networks such as X.25, UUCP and FidoNet emerged
during the 1970s and 1980s. Eventually there was a need to join all the separate networks,
and the TCP/IP protocol was activated in 1983. It enabled this internetworking, which led to
the emergence of the Internet. With the development of the World Wide Web (WWW) in the
1990s the Internet became easily accessible to consumers, and by the turn of the decade there
were already almost 400,000,000 Internet users worldwide (ITU ICT Statistics, 2000; Zakon
2010; Wikipedia: History of the Internet). The beginning of the 21st century has witnessed
exponential growth in Internet usage and in the data in the networks.

The way the Internet is used today is very different from what it was originally designed for.
The Internet is facing several problems because of the enormous growth in content and traffic
in the network. For example, streaming video has become popular, but it requires a lot of
bandwidth and results in slowdown and congestion in the network. Finding the right
information among the massive amounts of data on the Internet has also become a difficult task,
because the network is not able to understand the meaning of the data it is handling.
Moreover, phenomena like e-commerce and Internet banking are struggling with security
issues, because such issues were not considered in the design of the original network architecture.
These and other problems the Internet is facing today have led to the need for research and
development on the Future Internet (FI).

The first state of the Internet has been called Web 1.0 and consisted of static pages
characterised by a producer-spectator division. The consumer was merely a receiver
and user of content dictated and created by someone else, the producer, who
normally had the technical know-how of a programmer and software developer. The second
and current state in 2010 is referred to as Web 2.0. In this state, the distinction between the
producer and consumer of content has disappeared: new applications allow any end-user to
create content on the Internet without needing to know any programming languages. This has
enabled new ways of communicating using the old technologies and is therefore referred to as a
social rather than a technological novelty (Fuchs et al. 2010). The FI is sometimes referred to
as Web 3.0.
FI is a higher-level concept that refers to the Internet of the future in general, regardless of
what it will be like. Views on the technical development of the Internet architecture are
strongly divided in two. Some believe in evolutionary development, in which upgrades and
refinements gradually make the Internet better and adapt it to current needs. A
common argument supporting this view is that the Internet is already so complex and vast that
creating it from scratch would be far too expensive, if not impossible (Quitney Anderson et al. 2008).
Others see the "clean slate" approach of creating a totally new Internet architecture as the
more likely development, or as the only way to resolve the current problems of the Internet
(Rubina 2009, ITU-T 2009).

18 Later renamed the Defence Advanced Research Projects Agency (DARPA).

One important aspect of FI is that the Internet will extend beyond traditional computer
devices so that any object in the environment can be connected to it. This is called the
Internet of Things (IoT), and its development started in 1999 when the Auto-ID Centre was
established at the Massachusetts Institute of Technology (MIT) with a vision of a networked
physical world (Schuster 2004). This global, industry-sponsored centre worked to create the
Electronic Product Code, a global item identification system based on RFID technology19.
During its first few years several research laboratories around the world joined the
Auto-ID Centre, and in 2003 the Centre was replaced by Auto-ID Labs, an
international network of seven research laboratories. One of its missions is to
research and shape new technologies such as active tags and sensors20.

2. Application Areas/Examples

Application Areas

The Internet is a technology for enabling and enhancing communication, so all application
areas are potentially viable. The most prominent difference between the present-day Internet and
the Future Internet is that where the current Internet enables mostly communication between
people and is accessed through a limited range of devices, the FI will enable meaningful
communication between people as well as between machines (Semantic Web) and through
other additional objects (Internet of Things).

The Semantic Web, along with cognitive networks, means a more "intelligent" Internet (for a
definition of these technologies, see the section on 'Definitions'), and it will potentially be
seen in all application areas. Horrocks (2007) mentions that the first steps towards the
Semantic Web have already been taken in many application areas including biology,
medicine, geography, astronomy, agriculture and defence.

In IoT the most discussed areas are logistics, retail, healthcare, public services, security and
the military. The ITU 2005 report mentions smart homes, smart vehicles, personal robotics and
wearable computing as some of the leading areas for the development of "smart things".
Miniaturisation of technology will enable smaller and smaller objects to interact and connect
to the network, allowing objects as small as "smart dust" to be created (ITU 2005).

The development of IoT is considered to be at a critical point at the moment. In 2005 there
were 1.3 billion RFID tags worldwide (From RFID to the Internet of Things 2006) but in
2006 RFID was still mainly used in logistics. Today, however, it is more and more used in
Internet and mobile Internet applications (IoT 2009).

Application Examples

The majority of the following examples are related to the Internet of Things and Semantic
Web because they are two of the most prominent and distinguishing features of FI and enable

19 RFID = Radio Frequency Identification, a technology that uses radio waves to transmit the identity of an object or
person. It is applied in small, paper-thin tags or chips that can be embedded in or attached to objects, humans or
animals. The development of the technology began as early as the Second World War.
(http://www.rfidjournal.com/article/view/1338)
20 http://autoidlabs.org/page.html

new kinds of services that were not possible before (IoT 2009). The possibility of merging
information coming from the physical environment (using sensor technologies) with
information from the virtual world in an intelligent way is the key enabler of new services and
applications. Different network types like cognitive, virtual or context-aware networks are a
part of FI that enable any applications, whether they are novel or traditional, to work better,
faster, with more intelligence and fewer errors. Among the application examples there are also
a few related to the application of these network technologies.

The main technology used in IoT is RFID. There are already many applications using it
because the technology itself is simple and RFID tags are inexpensive. Examples of
these applications are travelcard systems in public transportation in many cities worldwide,
passports, asset tracking in logistics, self check-out in libraries, payment systems for toll roads
and cattle identification. Currently the applications are mostly restricted to industrial and
entrepreneurial fields, with few applications for consumers, because consumers lack NFC21-enabled
devices that are able to read the data in the tags. To take advantage
of the full potential of RFID technology in FI, all mobile devices such as mobile phones will
need to have this technology integrated. The use of semantics, on the other hand, helps in
producing meaningful information from scattered sources, which would be extremely
laborious and time-consuming if done manually.

Here are some examples of possible future applications classified under different application
areas:

i. Asset management
"Electronic tagging and remote sensing tracks the location of baggage in an airport, or
of goods in a factory production process." (From RFID to the Internet of Things 2006,
p. 6).
A community-based lost and found service where a person who finds a lost tagged item
can report it to the owner on the spot with a tag-reading device such as a mobile
phone. The application also takes advantage of a network of readers, both
statically installed (e.g. in stores) and in mobile devices. Whenever such a reader is in
the vicinity of the lost object, it can autonomously report the object's location to the
owner (Guinard et al. 2009).
ii. Healthcare
"Blood pressure and heart rate sensors relay regular readings from a patient's home to
a monitoring centre. A computer detects adverse movements and signals a doctor to
review the patient's case." (From RFID to the Internet of Things 2006, p. 6).
A personal "Blackbox life recorder" that provides information on the person's lifestyle
and hidden pathology, according to which medication and treatment can be
personalised to respond to the patient's exact needs. The Blackbox also works as a
memory aid, providing access to data from the person's whole life (Future Internet
2020, 2009).
An electronic prescription writer that can interact with many different health care
systems in constructing the most suitable prescription and that checks from different
databases whether the drug can have negative interactions with other drugs the patient is
using, what the appropriate dose is and whether there are cheaper alternatives to the
prescribed drug (Puustjärvi & Puustjärvi 2006).

21 NFC = Near Field Communication, a technology used for reading RFID tags.

iii. Environment monitoring
"A network of sensors monitors river heights and rainfall, predicting floods and
supporting water management measures for flood relief." (From RFID to the Internet
of Things 2006, p. 6).
Biodegradable smart dust that is sprayed on the ground and detects the condition
of snow on a skiing slope (Future Internet 2020, 2009).
iv. Office applications and software
An automated room booking application that identifies a group of people entering a
meeting room and, after confirming that the room is free, autonomously makes a
reservation for the room and sends an entry to the calendar of each person in the
room (Ravindranath 2008).
Networks that are able to identify errors happening in applications and, by monitoring
the applications, understand what caused them, and that either communicate this
information to the user or take action to repair the error autonomously. The networks
are also able to learn the normal behaviour in different situations over time and so
make more accurate evaluations later (Clark et al. 2003).
v. Information management
Search engines that can combine information retrieved from different sources and that
can recognise properties of various kinds of data, not only text. Such an engine could
answer, for example, the following complex question on a future eBay website: "find
me only used, red Priuses for sale for less than $14,000 by people who are within 80
miles of my house and make them an offer." (Feigenbaum et al. 2007).
vi. Retail
A clothes shop that uses LED lights to signal which clothes on the racks fit the customer
as s/he approaches them. Personalised discounts can also be displayed this
way using different coloured LEDs. The dressing room has a mirror-like screen that
recognises the customer and virtually displays the clothes on him/her. The screen can
also display items from other stores, and it is possible to connect to a distant friend
who can see the outfit on their own display and give comments (Future Internet 2020,
2009).
Deep product information, price comparison, political shopping, product rating, and
perishable goods that report their quality status (Fleisch 2010).
vii. Security
Cognitive networks can be used to enhance the security of applications in different ways,
such as access control, tunnelling, trust management or intrusion detection. They can
analyse processes in the network and find patterns and risks, to which they can react
by changing security mechanisms accordingly (Thomas et al. 2005).
viii. Services
A car owner can customise the features of the car, such as performance limits. The
features can also be changed easily according to need, for example according to
who is driving the car at a particular moment. The use is tracked, recorded and
sent to the insurance company so that the insurance price can be adjusted to match real
risks (Future Internet 2020, 2009).
Software that is used as a service on the Internet rather than owned and physically
installed on a specific computer.
ix. Entertainment
A big-screen 3D cinema display that allows the viewer to choose the viewing angle
among different cameras on the filming set. In a Formula 1 race, for example, the
viewer can choose the preferred driver's viewpoint. The race data can be bought and
recreated in a car racing game where the user can join the race with a rendered virtual
car (Future Internet 2020, 2009).

A context-aware network application that dynamically adapts multimedia services
according to the user's context. Depending on the user's device, the user's preferences,
the network conditions and the service provider's adaptation policies, the network can
adapt the quality of the streamed multimedia in case of network overload. Users are
also able to switch from one device to another (Mathieu 2009).
x. Implants
In addition to objects, RFID tags can be implanted in the human or animal body. RFID
implants are already commonly used for the identification of pets and production animals,
but human implants have not yet reached widespread use. One example of a human
RFID implant in commercial use is offered by the Baja Beach Clubs in Barcelona and
Rotterdam (ITU 2005). This implant (VeriChip, a product of PositiveID Corp.22)
works as a customer identification and payment method in the bar23.

3. Definition and Defining Features

Definition

The Future Internet is a term describing all research and development activities
concerning the future of the Internet. The World Wide Web (WWW) is a central concept of the
original Internet; Fuchs et al. (2010) define the WWW as a techno-social system in which humans
interact based on technological networks. FI is a network of networks that includes advances
in current Internet technologies related to, among other things, performance, reliability,
scalability, security and a high level of mobility. In addition, however, it extends the Internet
to the physical world in ways that have not been possible before. Adding semantics will also
enable better ways of organising and using the information in the networks, which will add to
the usability of FI.
As stated above, the two main concepts that distinguish FI from the current Internet are the
Internet of Things and the Semantic Web. According to the European Commission, the Internet
of Things means: "Things having identities and virtual personalities operating in smart spaces
using intelligent interfaces to connect and communicate within social, environmental, and
user contexts." (IoT 2009, p. 5). It means that any physical thing can become a computer that
is connected to the Internet and to other things. IoT is formed by numerous different
connections: between PCs, human to human, human to thing, and between things. This creates
a self-configuring network that is much more complex and dynamic than the conventional
Internet. Data about things is collected and processed with very small computers (mostly
RFID tags) that are connected to more powerful computers through networks. Sensor
technologies are used to detect changes in the physical environment of things, which further
benefits data collection. The network becomes more powerful when intelligence can be
embedded in things and processing power can be distributed more widely in the network.
The World Wide Web Consortium (W3C) defines the Semantic Web as a Web of data. The
original Internet was designed as a web of documents, but the amount of information in the
networks has grown so much that better ways of retrieving and combining it are needed. The
Semantic Web is a "framework that allows data to be shared and reused across application,
enterprise, and community boundaries". Information can be integrated from various
sources and types of data, and the types of relationships between the pieces of data
are defined to enable better and automatic interchange24. The technology is not necessarily
visible to the user; it manifests as easier and quicker retrieval of data, giving the impression
that the computer understands the meanings of the content it manages.

22 http://www.positiveidcorp.com/about-us.html
23 See the following as an example of this trend:
http://news.bbc.co.uk/2/hi/technology/3697940.stm
http://www.baja.nl/site_eng/index2.html
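The "web of data" idea can be illustrated with a minimal sketch: facts expressed as subject-predicate-object triples, combined by pattern matching. Real Semantic Web systems represent such facts in RDF and query them with SPARQL; the pure-Python triple store and the example data below are illustrative assumptions only.

```python
# A tiny in-memory "web of data": each fact is a (subject, predicate, object)
# triple, and queries combine results from several patterns. Data and helper
# names are invented for illustration.

triples = {
    ("car:42", "type", "Prius"),
    ("car:42", "condition", "used"),
    ("car:42", "colour", "red"),
    ("car:42", "price", 13500),
    ("car:7", "type", "Prius"),
    ("car:7", "condition", "new"),
}

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Combining patterns answers a question no single source states directly:
# which used cars are Priuses?
used = {s for s, _, _ in match(predicate="condition", obj="used")}
priuses = {s for s, _, _ in match(predicate="type", obj="Prius")}
print(sorted(used & priuses))  # ['car:42']
```

The defined relationship types (the predicates) are what let the machine combine facts from separate sources automatically, which is the essence of the interchange described above.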
Cognitive networks are another technology that will contribute to making the Internet more
intelligent. They perceive current network conditions and, based on these, are able to plan,
decide and act. These cognitive processes are a form of machine learning: the network uses
different mechanisms to remember previous interactions and adapts future decisions
according to that knowledge, which results in optimised performance (Thomas et al. 2005).
Cognitive networks can also be called self-aware, because they can configure, explain and
repair themselves (Clark et al. 2003).
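The remember-and-adapt behaviour described above can be sketched as a simple learning loop. This is an illustrative assumption, not an actual cognitive-network protocol: a node records observed latencies per route and increasingly prefers the route that has performed best, while still occasionally exploring.

```python
import random

random.seed(0)  # deterministic run for the illustration

# Hypothetical ground-truth latencies (ms) the node cannot see directly.
TRUE_LATENCY = {"route_a": 120.0, "route_b": 40.0}
history = {route: [] for route in TRUE_LATENCY}  # remembered observations

def observe(route):
    """Measure one noisy latency sample on the chosen route."""
    return TRUE_LATENCY[route] + random.uniform(-5, 5)

def choose(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known route, sometimes explore."""
    unexplored = [r for r, samples in history.items() if not samples]
    if unexplored or random.random() < epsilon:
        return random.choice(unexplored or list(history))
    # Exploit: pick the route with the lowest average observed latency.
    return min(history, key=lambda r: sum(history[r]) / len(history[r]))

for _ in range(200):
    route = choose()
    history[route].append(observe(route))

# The learned preference converges on the genuinely faster route.
print(max(history, key=lambda r: len(history[r])))  # route_b
```

The same remember-then-adapt pattern underlies the self-configuring and self-repairing behaviour attributed to cognitive networks: decisions are conditioned on accumulated observations rather than fixed rules.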

Defining Features

Pervasiveness and ubiquity: Digital content and services will be all around us, not
only in ICT devices but in any physical object. Embedding computers in the physical
environment creates a link between the physical and digital worlds.
Network of networks: The Internet of the future connects networks of objects to the
classic Internet. The result is a combination of different communication networks that
are able to manage the complex communication of large amounts of information and
enable new kinds of services (Future Internet 2020, 2009). As the structure of the
Internet becomes more complex and vulnerable to security threats, according to some
views, governments and corporations are inclined to create separated spaces, "walled
gardens", inside the Internet (Quitney Anderson et al. 2008).
Interoperability and accessibility: Devices and objects are networked and work
seamlessly together. Interoperability is also implemented at the level of the network
architecture, making communication between services and applications more
fluent. Services and content can be accessed anywhere, anytime and with many
different devices. Mobile devices will dominate globally as access points to the
Internet. This is the case especially in developing countries, where mobile devices are
an affordable solution to the lack of built-in network infrastructure (Quitney
Anderson et al. 2008).
Miniaturisation with simplification: In IoT the computers at the end nodes of the
Internet are small, to the point of being invisible to the eye when embedded in the
environment. The purpose of this kind of miniaturisation is not necessarily to include
the capacity of a full-blown computer in an ever smaller device, but to include
only those functionalities that are relevant and necessary in the particular environment
and context of use. These devices are inexpensive, have low energy consumption, and
feature few functions, such as sensing, storing and communicating a limited amount of
information; they normally need to be accessed with another device such as a
mobile phone (Fleisch, 2010).
Context-awareness: FI will be able to recognise different contexts by using different
sensor technologies. On the physical level, sensors gather information from the
physical environment, and on the digital level they gather information about the
network and applications. When that information is combined with other input data,
the network and applications are able to dynamically adapt to optimal processes at any
given moment.
Autonomy: Input of information in the FI does not have to come from humans only.
Machines will interact more and more with each other, becoming more predominant
than human-centric interaction. In IoT, sensors and actuators that are embedded in the
environment can collect data autonomously and transmit it to each other and to the
network (Bauer et al. 2009). In the Semantic Web, automatic processes can produce
information combined from separate sources.

24 http://www.w3.org/2001/sw/SW-FAQ
Virtualisation of resources: Virtualisation enables better exploitation of network
resources with higher flexibility and security (Abramowicz et al. 2009).
Semantics: Semantics are an important part of FI. Through semantic annotations
linked to the information on the web, locating information will become much easier,
faster and more accurate (Horrocks, 2007).
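The context-awareness feature above can be sketched roughly as follows. The quality ladder, bandwidth thresholds and device rules are invented for illustration and do not come from any cited source; the point is only that the delivered service is chosen from sensed context rather than fixed in advance.

```python
# Illustrative-only sketch: pick a streamed video quality from sensed context
# (available bandwidth and device class). All values are hypothetical.

QUALITY_LADDER = [            # (label, minimum Mbit/s assumed to sustain it)
    ("1080p", 8.0),
    ("720p", 4.0),
    ("480p", 1.5),
    ("240p", 0.0),            # always reachable fallback
]

def adapt_stream(bandwidth_mbps: float, device: str) -> str:
    """Pick the best quality the sensed context can sustain."""
    # Device context: assume small screens gain nothing above 720p.
    cap = "720p" if device == "phone" else "1080p"
    for label, min_bw in QUALITY_LADDER:
        if bandwidth_mbps >= min_bw:
            chosen = label
            break
    # Never exceed what the device class benefits from.
    labels = [label for label, _ in QUALITY_LADDER]
    if labels.index(chosen) < labels.index(cap):
        chosen = cap
    return chosen

print(adapt_stream(10.0, "tv"))     # 1080p: ample bandwidth, big screen
print(adapt_stream(10.0, "phone"))  # 720p: bandwidth allows more, device does not
print(adapt_stream(2.0, "tv"))      # 480p: congestion forces a downgrade
```

A real context-aware network would additionally re-run such a decision continuously as sensed conditions change, which is what allows the dynamic adaptation described above.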

4. Time Line

Most of the recent visions and scenarios of FI are aimed around the year 2020.

1969 – The development of ARPANET, the predecessor of Internet.

1972 – The first email program

1973 – The first US patent for an active RFID tag with rewritable memory.
(http://www.rfidjournal.com/article/view/1338)

1974 – The first attested use of the term "internet", by the Network Working Group (Wikipedia:
History of the Internet)

1983 – Activation of the TCP/IP that became the standard protocol for Internet.

1989 – Invention of World Wide Web by Tim Berners-Lee at CERN.

2009 – 20% Internet penetration worldwide (Hausheer et al. 2009)

2015 – 30% Internet penetration worldwide (ibid.)

2020 – 50% Internet penetration worldwide (ibid.)


Mobile devices will be the primary connection tool to the Internet worldwide.
Voice recognition and touch user interfaces will be commonplace.
Virtual worlds, mirror worlds and augmented reality will be popular (Quitney Anderson
et al. 2008).

5. Relation to other Technologies

FI will make use of many other emerging technologies such as artificial intelligence in
semantic web, virtual and augmented reality in the user interfaces and even neuroelectronics
in more far-reaching scenarios with thought controlled interfaces. Cloud computing will be
important in the Internet of Services. RFID and sensor technologies will also be key
components of FI in IoT and in human-machine symbiosis as implantable or wearable
technologies that can collect information about the person and their body and send it to the
network. Networking of and interoperability between these different technologies creates the
Future Internet.

On the other hand, FI and IoT are enabling technologies or infrastructures for application-level
solutions (e.g. 3D virtual environments), but they can also be treated as more general visions
of future technologies (like AmI). IoT is treated in recent EU documents as an enabling
technological approach for the Future Internet (see for example FP7, 2009).

6. Critical Issues

A clear majority of the sources identify security and privacy as the main critical issues
concerning FI (ITU 2005, p. 8).

Acceptance
The divide in the acceptance of monitoring and recording personal data may result in larger
cultural divisions if such services are accepted by only part of the population but imposed
on everyone. Such a way of living may provoke adverse reactions from many (Future
Internet 2020, 2009).
Trust
The systems and their services should be trustworthy. This includes technical functioning,
protection of personal data against attacks and theft, ensuring privacy, and providing usable
security management. Trustworthiness is in danger if these issues are not considered and
addressed from the beginning of development but instead applied at the end as add-ons (FP7,
2009).
IPR issues
In a survey of Internet experts (Quitney Anderson & Rainie, 2008), 60% of respondents did
not believe that the problems with intellectual property law and copyright protection will be
solved by 2020 in a way that enables effective content control.
Transparency and privacy
The experts in the survey alluded to in the previous paragraph had divided opinions about the
effects of the transparency of organisations and individuals. 45% of the respondents agreed
that transparency will heighten individual integrity and forgiveness, while 44% disagreed,
believing that it will make everyone more vulnerable. According to some views, the concept
of "privacy" will change and become scarce, resulting in some people having multiple digital
identities and in a need for reputation maintenance and repair.
Openness
The openness and freedom of the Internet enable both favourable and unfavourable
behaviours. They allow people with all kinds of ideologies and opinions to find like-minded
partners and form communities, supporting the richness and democracy of world views and
cultures. However, openness does not require people to accept or tolerate other views, and it
may even accelerate fragmentation and reinforce prejudice, increasing the potential for
conflict (Quitney Anderson & Rainie, 2008).
Energy
Even though the Internet helps in many ways to reduce the carbon footprint, for example by
reducing the need for travel and by optimising business processes, it consumes a great deal of
energy and takes a very material form in data centres and in all the appliances used for going
online. The Internet currently uses 3-5% of the global electricity supply, and this is expected
to grow exponentially as more and more people, especially in the large nations of Asia, join
the network. Reducing the carbon footprint of the Internet while enabling its growth is a big
challenge for research.25

References

Academic publications
Abramowicz, H.; Baucke, S.; Johnsson, M.; Kind, M.; Niebert, N.; Ohlman, B.; Quittek, J.;
Woesner, H. & Wuenstel, K. (2009): A Future Internet Embracing the Wireless
World. In Tselentis, G. et al. (Eds.) Towards the Future Internet - A European
Research Perspective. IOS Press, Amsterdam.
Bauer, M.; Gluhak, A.; Johansson, M.; Montagut, F.; Presser, M.; Stirbu, V. & Vercher, J.
(2009): Towards an Architecture for a Real World Internet. In Tselentis, G. et al.
(Eds.) Towards the Future Internet - A European Research Perspective. IOS Press,
Amsterdam.
Clark, D.; Partridge, C.; Ramming, J.C. & Wroclawski, J. (2003): A Knowledge Plane for the
Internet. SIGCOMM’03, August 25–29, 2003, Karlsruhe, Germany.

Fleisch, E. (2010): What is the Internet of Things? An Economic Perspective. Auto-ID Labs
White Paper WP-BIZAPP-053. Retrieved on 28th May 2010, from
http://autoidlabs.org/uploads/media/AUTOIDLABS-WP-BIZAPP-53.pdf

From RFID to the Internet of Things. Pervasive Networked Systems. Final report (2006).
Conference organised by DG Information Society and Media. 6-7 March 2006,
CCAB, Brussels.
Fuchs, C.; Hofkirchner, W.; Bichler, R.; Raffl, C.; Sandoval, M. & Schafranek, M. (2010):
Theoretical Foundations of the Web: Cognition, Communication, and Co-Operation.
Towards an Understanding of Web 1.0, 2.0, 3.0. Future Internet 2010, 2, pp. 41-59.

FP7 Cooperation Work Programme: Information and Communication Technologies (2009).
UPDATED WORK PROGRAMME 2009 AND WORK PROGRAMME 2010.
Retrieved on 28th May 2010, from
ftp://ftp.cordis.europa.eu/pub/fp7/ict/docs/ict-wp-2009-10_en.pdf

25. http://www.leeds.ac.uk/news/article/829/cutting_the_internets_carbon_footprint

Guinard, D.; Baecker, O. & Michahelles, F. (2009): Supporting a Mobile Lost and Found
Community. Auto-ID Labs White Paper WP-BIZAPP-050. Retrieved on 9th 2010,
from http://www.autoidlabs.org/uploads/media/AUTOIDLABS-WP-BIZAPP-050.pdf

Hausheer, D.; Nikander, P.; Fogliati, V.; Wünstel, K.; Callejo, M.A.; Jorba, S. R.; Spirou, S.;
Ladid, L.; Kleinwächter, W.; Stiller, B.; Behrmann, M.; Boniface, M.; Courcoubetis,
C. & Li, M-S. (2009): Future Internet Socio-Economics – Challenges and
Perspectives. In Tselentis, G. et al. (Eds.) Towards the Future Internet - A European
Research Perspective. IOS Press, Amsterdam.

Horrocks, I. (2007): Semantic Web: The Story So Far. W4A2007 Keynote, May 07–08, 2007,
Banff, Canada.

Mathieu, B. (2009): A Context-Aware Network Equipment for Dynamic Adaptation of
Multimedia Services in Wireless Networks. In Wireless Personal Communications,
volume 48, Number 1 / January, 2009, pp. 69-92.

Puustjärvi, J. & Puustjärvi, L. (2006): Improving the Quality of Medication by Semantic Web
Technologies. In Hyvönen, E. et al. (Eds.) New Developments in Artificial Intelligence
and the Semantic Web. Proceedings of the 12th Finnish Artificial Intelligence
Conference STeP 2006, Helsinki University of Technology, Espoo, Finland, October
26-27, 2006.
Quitney Anderson, J. & Rainie, L. (2008): The Future of the Internet III. Pew Internet &
American Life Project.
Ravindranath, L.; Padmanabhan, V. N. & Agrawal, P. (2008): SixthSense: RFID-based
Enterprise Intelligence. MobiSys’08, June 17-20, 2008, Breckenridge, Colorado,
U.S.A.
Rubina, J. (2009): Challenges of Internet Evolution: Attitude and Technology. In Tselentis, G.
et al. (Eds.) Towards the Future Internet - A European Research Perspective. IOS
Press, Amsterdam.
Thomas, R.W.; DaSilva, L.A. & MacKenzie, A.B. (2005): Cognitive Networks. In
Proceedings of the First IEEE International Symposium on New Frontiers in Dynamic
Spectrum Access Networks, Baltimore, MD, USA, November 8-11, 2005, pp. 352-360.

Governmental/regulatory sources
Future Internet 2020. (2009) Call for action by a high level visionary panel. May 2009.
Retrieved on 28th May 2010, from
http://www.future-internet.eu/fileadmin/documents/reports/Future_Internet_2020-
Visionary_Panel.pdf

IoT: an early reality of the Future Internet (2009). Workshop report. European Commission,
Information Society and Media. Retrieved on 28th May 2010, from
http://ec.europa.eu/information_society/policy/rfid/documents/iotprague2009.pdf

ITU Internet Reports 2005: Internet of Things (2005). Executive summary. Retrieved on 28th
May 2010, from http://www.itu.int/dms_pub/itu-s/opb/pol/S-POL-IR.IT-2005-SUM-
PDF-E.pdf

ITU-T Technology Watch Report 10 (2009): The Future Internet. International
Telecommunication Union. Retrieved on 28th May 2010, from
http://www.itu.int/oth/T230100000A/en

Websites

http://autoidlabs.org/page.html. Accessed on 22nd June 2010.

http://www.baja.nl/site_eng/index2.html. Accessed on 22nd June 2010.

http://en.wikipedia.org/wiki/History_of_the_Internet. Accessed on 22nd June 2010.

http://www.leeds.ac.uk/news/article/829/cutting_the_internets_carbon_footprint. Accessed on
8th July 2010.

http://www.rfidjournal.com/article/view/1338. Accessed on 22nd June 2010.

http://www.w3.org/2001/sw/SW-FAQ. Accessed on 22nd June 2010.

Other sources

Feigenbaum, L.; Herman, I.; Hongsermeier, T.; Neumann, E. & Stephens, S. (2007): The
Semantic Web in Action. Scientific American, vol. 297, Dec. 2007, pp. 90-97.
ITU ICT Statistics (2000): http://www.itu.int/ITU-
D/icteye/Reporting/ShowReportFrame.aspx?ReportName=/WTI/InformationTechnolo
gyPublic&ReportFormat=HTML4.0&RP_intYear=2000&RP_intLanguageID=1&RP_
bitLiveData=False. Accessed on 22nd June 2010.
Schuster, E. W. (2004): Auto-ID and The Data Centre: Creating an Intelligent Infrastructure
for Business. Auto-ID Labs, MIT. Retrieved on 15th June 2010, from
http://mit.edu/edmund_w/www/APICS%20St.%20Louis%20Seminar%20Oct.2004%2
0PART%201.pdf
Zakon, R. H. (2010, January 1): Hobbes' Internet Timeline 10. Retrieved on 15th June 2010,
from http://www.zakon.org/robert/internet/timeline/

6.6.7 Human-Machine Symbiosis

1. History
Human-machine symbiosis can be traced back to Licklider who in 1960 envisaged a situation
where machines could work side by side with humans. Licklider‘s vision was that humans
and machines could be coupled together and work interactively. He stated the following:
Man-computer symbiosis is an expected development in cooperative interaction
between men and electronic computers. It will involve very close coupling between
the human and the electronic members of the partnership. The main aims are 1) to let
computers facilitate formulative thinking as they now facilitate the solution of
formulated problems, and 2) to enable men and computers to cooperate in making
decisions and controlling complex situations without inflexible dependence on
predetermined programs. In the anticipated symbiotic partnership, men will set the
goals, formulate the hypotheses, determine the criteria, and perform the evaluations ...
Preliminary analyses indicate that the symbiotic partnership will perform intellectual
operations much more effectively than man alone can perform them. (Licklider, 1960,
p. 1).
The premise of this idea was that human intellect could be augmented and as a result human
beings would be able to perform more challenging tasks beyond the limitations placed on
them. In considering the aspect of augmenting human intellect even further, Engelbart and
English (1968) came up with an augmentation system in the form of a computer-based
multiconsole interactive display system. This is generally believed to have been the debut of
the mouse, human-computer interaction, interactive computing, hypermedia, video
teleconferencing26, all media that have helped humans work and communicate much more
effectively and interactively (Friedewald 2005). Putting this into context, human-machine
symbiosis can be understood, as Roy (2004) puts it, as technology with the ability to enhance
and improve human potential where human capacities are restricted. Such technologies will,
as Roy asserts, "complement rather than replace human abilities" (2004, p. 1). He further
views the interaction or interdependence of human and machine as each being an extension of
the other, i.e. the technological machine as an extension of the human. Such symbiosis can be
seen in wearable technologies, as suggested by Starner et al (1997), in assistive technologies
such as those discussed by Lay et al (2001), or in neural implants as explored by Saniotis
(2009).
When considering Licklider's visionary idea in the present day, it would appear that his vision
has to some degree become a reality. For instance, technology has permeated modern society
to an extent where it can be argued that users have become dependent, to a large degree, on
technology. This dependence shows in any number of ways, including how they access
information, conduct business and communicate. However, as Foster (2007) suggests,
although humans can now perform far more complex computations than in Licklider's time,
much work still needs to be undertaken with regard to human-machine symbiosis. Hence the
technology continues to grow and develop to this day. Some of the developments are outlined
in the application examples and areas given below:

26. http://en.wikipedia.org/wiki/Douglas_Engelbart;
http://www.archive.org/details/XD300-23_68HighlightsAResearchCntAugHumanIntellect;
http://sloan.stanford.edu/mousesite/1968Demo.html

2. Application Areas/Examples
This section outlines some examples and application areas of human-machine symbiosis.
Research shows diverse application areas of the technology. For purposes of this exercise,
five application examples are outlined:
i. Human-Machine Symbiosis in Medical Science
Human-machine symbiosis is being applied in medical science as a way to improve quality of
life. The technology may be used to better the quality of life of people who may otherwise be
disadvantaged due to, for instance, a physical disability. Research into this area can be seen in
several examples, such as the work being carried out by the Human Machine Symbiosis Lab
and in other research programs such as the MIT 10x research program. The Human
Machine Symbiosis Lab27, associated with Arizona State University, is designing, developing
and evaluating new human-machine interfaces that can be applied in haptic user interfaces,
a field of R&D relating to the sense of touch. The lab aims to incorporate
psychophysics, biomechanics and neurology in its development of smart and effective haptic
interfaces and devices. The initial target groups are blind people and patients suffering from
Alzheimer's disease. The intention is to stimulate and enhance the haptic memories of these
groups of patients; this is being done through surgical simulations. Another research project
that has looked at human-machine symbiosis for medical science is MIT's 10x28 research
program, whose objective was to magnify human abilities. The program looked at a
cross-section of application areas, including aspects of memory, in order to enhance and
expand human cognitive abilities. The MIT 10x program has focused its research on physical
disability by looking at bionic and robotic technologies which may be used as an extension of
a physically disabled body (Roy, 2004).
In addition to the above, other examples that may fall under this category are implants.
Admittedly, implants are not a new phenomenon, particularly in the medical field,
when one considers the development of heart pacemakers in the 1960s or cochlear and
cortical implants for the hearing and visually impaired respectively (EU, 2005). However,
continuing advances in research on brain or neural implants render this technology
emerging. Brain implants may be used for the treatment of memory loss, paralysis or other
similar neural conditions. The device is implanted in the brain, where it transmits signals to
and from the brain (Stieglitz & Meyer, 2006).

ii. Human-Machine Symbiosis in Brain Computer Interfaces

This has some overlaps with neuroelectronics, particularly when one considers brain
computer interface (BCI) or brain-computer symbiosis as captured by Schalk (2008). Aspects
of this as covered in the neuroelectronics meta-vignette include the following:
BCIs, sometimes called brain-machine interfaces (BMIs), are an emerging
neurotechnology that translates brain activity into command signals for external
devices. Research on BCIs began in the 1970s at the University of California Los
Angeles (UCLA). Researchers at UCLA also coined the term brain-computer
interface. A BCI establishes a direct communication pathway between the brain and
the device to be controlled. They are mainly being developed for medical reasons,

27. http://symbiosis.asu.edu/research_hms.html

28. http://10x.media.mit.edu/

because there is a societal demand for technologies which help to restore functions of
humans with central nervous system (CNS) disabilities (Berger, 2007). Patients for
whom a BCI would be useful usually have disabilities in motor function or
communication. This could be (partly) restored by using a BCI to steer a motorized
wheelchair, prosthesis, or by selecting letters on a computer screen with a cursor.
Invasive or non-invasive electrodes are used to detect brain activity, which is
subsequently translated by a signal processing unit into command signals for the
external device. The most common BCI responds to specific patterns detected in
spatiotemporal EEGs measured non-invasively from the scalp. Spatiotemporal EEGs
can be controlled by imagining specific movements (Gasson & Warwick, 2007).
iii. Human-Machine Symbiosis in Human-Systems Integration for Optimal Decision
Making.

This application example is highlighted in the database29 and, according to the database entry,
is potentially most suitable for people working in dynamic and complex environments, such as
air traffic controllers and nurses in busy hospitals. The technology, in this case the
human-systems integration system, can then be used to support decision making in such
dynamic and complex environments. Human-systems integration considers personnel, training, system
safety, health hazards and other human-centric issues in the design of the technologies the
targeted audience will use. Cognitive and Organisational Systems Engineering (COSE), a
Project at NICTA, is finding ways to model human-systems integration to support optimal
decision-making in these sorts of environments, through the system development life-cycle.
Innovative integrated processes and tools will be developed, initially to support people‘s
cognitive work in health and air traffic management environments, but later extended to other
domains. The goal of the COSE project30 is to develop better techniques and tools for
evaluating the integration between people, the ICT systems that they use, and the
environments in which they work.
iv. Human-Machine Symbiosis in Industry

Human-machine symbiosis offers an opportunity for effective and improved operations in
industry through smart and intelligent applications. Lepratti (2006) has stated the following:
"An intelligent use of human knowledge and skills coupled with the qualities of robotic
systems (such as flexibility, high speed and precision) lead to profitable synergetic effects.
Hence, humans and robots should be seen as complementary elements. Robotic systems
employed in manufacturing will assist the work of humans supporting—not replacing—them
in carrying out various tasks through an intelligent interaction" (p. 654).
v. Human-Machine Symbiosis in Entertainment

The technology is applicable in entertainment through, for example, interactive gaming
(Tseng et al, 2006) or dance, as demonstrated by Nahrstedt et al. (2007) in their discussion of
tele-immersive environments. Another example that may be considered here is game worlds
(such as Second Life) in which users have the ability to transcend the physical in virtual
reality. The use of such technology provides a sense of reality as well as interaction between
human and machine.

29. This is an ETICA database created to develop a list of technologies from different sources, which include
government and scientific sources. It encompasses application examples, artefacts, technologies and the various
sources that data has been derived from.

30. http://www.nicta.com.au/research/projects/cose

3. Definition and Defining Features

Definition

Symbiosis is defined as:

A close, prolonged association between two or more different organisms of different
species that may, but does not necessarily, benefit each member31.

The above definition suggests that humans and machines can work together and interact to
mutual benefit. However, this may not always be the case: one agent may benefit more than
the other, or the agents may not benefit from each other at all and may instead bring potential
harm. It is because of the latter that ethical issues may arise, and the need to understand the
associated risks therefore becomes important; these are covered under critical issues below.
Looking at this definition of symbiosis, the implication is that the interaction between human
and machine can have both positive and negative effects. The positive effect is evident in how
machines improve and enhance human life, while the negative aspects lie in the problems that
the technology brings to its intended users.
Defining Features
Interactivity: One of the major defining characteristics of human-machine symbiosis is
interaction. For humans and machines to work together mutually and effectively, for instance
in brain-computer interaction or in performing other highly challenging activities,
interactivity is central to the technology.
Augmentation: In cases where the technology is used to improve and enhance memory and
vision, the main defining feature is augmentation. As Greef et al (2007) argue in relation to
augmented cognition, the aim is "the creation of adaptive human-machine collaboration that
continually optimises performance of the human-machine system" (p. 1). In so doing, Greef
and colleagues assert, such collaboration can overcome human limitations in information
processing.
Sensory perception to aid cognitive abilities: Sensory perception makes interaction between
the brain and the technology possible, allowing technologies like robots to be cognizant of
and execute a user's needs. Kurzweil (2000) has written of how, in the future, neural implants
will provide a range of sensory perceptions and sensations when the implants interact with
virtual environments.

4. Timeline
This is an ongoing development.

31. http://education.yahoo.com/reference/dictionary/entry/symbiosis

5. Relations to other Technologies
From the examples given above, as well as from some of the defining features, it is clear that
human-machine symbiosis is related to, and overlaps with, other technologies. These include
neuroelectronics, particularly when brain-computer interface technology is taken into
consideration. There is a further relationship between robotics and human-machine symbiosis,
as gathered from discussions of human-robot interaction (Lepratti, 2006; Lay et al, 2001),
where robots are expected to play a significant part in industry as well as in medical science.
Another relationship can be seen with Artificial Intelligence, which has many links with
robotics. Lastly, human-machine symbiosis can also be said to be related to augmented
reality; an example of this relation can be seen in Starner et al (1999, 1997).

6. Critical Issues
While acknowledging that new technologies in direct human-brain interaction may offer
opportunities for minimally or non-invasive procedures, Schalk (2008) has stated that
invasive procedures are likely to remain one of the critical issues in brain-computer
symbiosis. In addition, safety and dependability have been identified as critical issues where
human-robot interaction is concerned (De Santis & Siciliano, 2008).

References
Berger, T. W. (2007). Report on International Assessment of Research and Development in
Brain-Computer Interfaces. Retrieved 20th May, 2010, from
http://www.wtec.org/bci/BCI-finalreport-10Oct2007-lowres.pdf
De Santis, A. & Siciliano, B. (2008). Safety Issues for Human-Robot Cooperation in
Manufacturing Systems. Retrieved on 20th May, 2010, from
http://www.phriends.eu/Virtual_08.pdf
Engelbart, D.C. & English, W.K. (1968). A research center for augmenting human intellect. In
AFIPS Conference Proceedings of the 1968 Fall Joint Computer Conference, San
Francisco, CA, December 1968, Vol. 33, pp. 395-410 (AUGMENT,3954,). Retrieved
on 21st May 2010 from,
http://sloan.stanford.edu/mousesite/Archive/ResearchCenter1968/ResearchCenter1968
.html
European Union (2005). Ethical Aspects of ICT Implants in the Human Body. Opinion of the
European Group on Ethics in Science and New Technologies to the European
Commission. Retrieved on 25th May 2010, from
http://ec.europa.eu/european_group_ethics/docs/avis20_en.pdf
Foster, I. (2007). Human-Machine Symbiosis, 50 Years On. Retrieved on 10th May 2010,
from http://arxiv.org/ftp/arxiv/papers/0712/0712.2255.pdf
Friedewald, M. (2005). The Continuous Construction of the Computer User: Visions and User
Models in the History of Human-Computer Interaction. In Buurman, G.M. (ed.), Total
Interaction: Theory and Practice of a New Paradigm. Basel/Berlin, pp. 26–41
Gasson, M. & Warwick, K. (2007). D12.1 Study On Emerging AmI Technologies. FIDIS -
Future of Identity in the Information Society. [Scientific Report]
Greef, T., Dongen, K., Grootjen, M. & Lindenberg, J. (2007). Augmenting Cognition:
Reviewing the Symbiotic Relation Between Man and Machine. In Foundations of
Augmented Cognition. Springer Berlin / Heidelberg

Kurzweil, R. (2000). The age of spiritual machines: When computers exceed human
intelligence. New York: Penguin.
Lay, K. et al. (2001). MORPHA: Communication and interaction with intelligent,
anthropomorphic robot assistants. In Tagungsband Statustage Leitprojekte Mensch-
Technik-Interaktion in der Wissensgesellschaft, Oktober 2001, Saarbrücken, Germany
Lepratti, R. (2006). Advanced Human-Machine System for intelligent manufacturing: Some
Issues for Employing Ontologies for Natural Language Processing. In Journal of
Intelligent Manufacturing, Vol. 17, No. 6, 2006
Licklider, J. C.R. (1960) Man-Computer Symbiosis. In IRE Transactions on Human Factors
in Electronics, Vol. HFE-1, p. 4-11. Retrieved on 10th May 2010, from
http://groups.csail.mit.edu/medg/people/psz/Licklider.html
Nahrstedt, K. et al. (2007). Symbiosis of Tele-Immersive Environments with Creative
Choreography. Retrieved on 21st May 2010 from
http://shamurai.com/sites/creativity/papers/4.nahrstedt.pdf
Roy, D. (2004) 10x: Human-machine symbiosis. In BT Technology Journal, Vol. 22, No. 4,
October 2004. Retrieved on 17th May 2010 from
http://10x.media.mit.edu/10x%20draft.pdf
Saniotis, A. (2009) Present and Future Developments in Cognitive Enhancement
Technologies. In Journal of Futures Studies, Vol. 14, No. 1, p. 27–38. Retrieved on
22nd May 2010 from, http://www.jfs.tku.edu.tw/14-1/A02.pdf
Schalk, G. (2008) Brain-Computer Symbiosis. Retrieved on 18th May 2010, from
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2722922/
Starner, T. et al. (1997). Augmented reality through wearable computing. Presence, Special
Issue on Augmented Reality, Vol. 6, No. 4, 1997.
Starner, T. et al. (1999). Everyday-use Wearable Computers. In International Symposium on
Wearable Computers, 1999.
Stieglitz, T. & Meyer, J. (2006). Biomedical Microdevices for Neural Implants: Interfacing
neurons for neuromodulation, limb control and to restore vision–Part II. Retrieved on
25th May 2010, from
http://www.springerlink.com/content/t5257jug8k150117/fulltext.pdf
Tseng, K. et al (2006). Vision-Based Finger Guessing Game in Human Machine Interaction.
In IEEE International Conference on Robotics and Biomimetics, 2006

6.6.8 Neuroelectronics
1. History
Neuroelectronics, sometimes referred to as neurotechnology, is the discipline that deals with
the interface between the human nervous system and electronic devices. It is a highly
complex and interdisciplinary field, with contributions from computer science, cognitive
science, neurosurgery and biomedical engineering. Neuroelectronics has roughly three related
branches: (1) neuroimaging, (2) brain-computer interfaces (BCIs), and (3) electrical neural
stimulation. The discipline has existed for more than half a century. However, in the last
decade significant advances have been made, particularly in neuroimaging, which has
revolutionized the field by allowing researchers to directly monitor brain activity during
experiments. It is predicted that neuroelectronics, particularly neuroimaging and
brain-computer interfacing, will be employed much more in the future.

2. Application Areas/Examples
Neuroimaging has two branches: (1) structural imaging, which tries to unravel the structure,
or anatomy, of the brain, and (2) functional imaging, which tries to examine functions of
(certain areas of) the brain. The latter enables a researcher to directly visualize how
information is processed in different areas of the brain (European Technology Assessment
Group, 2006). Both structural and functional neuroimaging are used for diagnostic as well as
research purposes. Contemporary neuroimaging techniques such as Magnetic Resonance
Imaging (MRI) and Functional Magnetic Resonance Imaging (fMRI) are used for cancer
scanning, stroke rehabilitation and functional analyses of cognitive processes in the brain.
fMRI is the cornerstone technology to study the human brain. Other neuroimaging techniques
such as electroencephalography (EEG), positron emission tomography (PET), and
magnetoencephalography (MEG), amongst others, are also used by researchers to study brain
structure and function.

BCIs, sometimes called brain-machine interfaces (BMIs), are an emerging neurotechnology
that translates brain activity into command signals for external devices. Research on BCIs
began in the 1970s at the University of California Los Angeles (UCLA). Researchers at
UCLA also coined the term brain-computer interface. A BCI establishes a direct
communication pathway between the brain and the device to be controlled. They are mainly
being developed for medical reasons, because there is a societal demand for technologies
which help to restore functions of humans with central nervous system (CNS) disabilities
(Berger, 2007). Patients for whom a BCI would be useful usually have disabilities in motor
function or communication. This could be (partly) restored by using a BCI to steer a
motorized wheelchair, prosthesis, or by selecting letters on a computer screen with a cursor.
Invasive or non-invasive electrodes are used to detect brain activity, which is subsequently
translated by a signal processing unit into command signals for the external device. The most
common BCI responds to specific patterns detected in spatiotemporal EEGs measured
non-invasively from the scalp. Spatiotemporal EEGs can be controlled by imagining specific
movements (Gasson & Warwick, 2007). So, merely by imagining movements one can steer a
wheelchair, prosthesis or a cursor on a computer screen.

- Brain fingerprinting. Brain fingerprinting is a particular application of
neuroimaging. The idea behind it is that the brain processes known, relevant
information differently from the way it processes unknown or irrelevant information.
The brain‘s processing of known information, such as the details of a crime stored in
the brain, is revealed by a specific pattern in the EEG. It is therefore claimed that brain
fingerprinting can be used for lie detection (Simon, 2005).

- BCI to control an aircraft. The Defense Advanced Research Projects Agency (DARPA)
has a brain-machine interface program to control an aircraft (Roco & Bainbridge,
2002).

- BCI to control a motorized wheelchair. A BCI is being developed that enables a
person with locked-in syndrome, a severe neurological disorder that almost totally
paralyses a person, to control a motorized wheelchair (Berger, 2007).

- BCI for spelling. A BCI may help to restore one‘s ability to communicate. The
application then uses brain signals to control a cursor on a computer screen. After
some practice, the cursor control becomes accurate enough to spell words and
sentences by using the interface to pick out letters of the alphabet from a virtual
keyboard (Friman et al, 2007).

- Neuroimaging. 'A wearable brain imaging tool will enable identification of children
with learning disabilities, assessing the effectiveness of learning as well as identifying
the emotional state of a human being' (Beckert, Bluemel, Friedewald & Thielmann,
2008).

3. Definition and Defining Features


There are roughly three branches in neuroelectronics. Each branch uses different devices to
interface with the brain, and each of these devices has different features. The first branch,
neuroimaging, uses techniques such as fMRI, PET, MEG or EEG, amongst others, to extract
information from the brain to diagnose disorders or to study the brain. The second branch,
BCIs, uses invasive or non-invasive electrodes to extract information from the brain, not for
diagnostic or research purposes, but to control external devices such as wheelchairs,
computers or airplanes. And the third branch, electrical neural stimulation, uses invasive
electrodes to send electrical signals to specific parts of the brain. The only defining feature
these three branches have in common is that they all interface electrical devices with the
brain, either to extract information from the brain or to send electrical signals to the brain. In
overview:
- Neuroimaging technologies extract information from the brain to diagnose disorders
or study brain structure or function.

- BCIs extract information from the brain to control external devices such as
wheelchairs, prosthesis or computers.

- Electrical neural stimulation devices stimulate parts of the brain so that symptoms
like tremor, clinical depression or pain are reduced.

4. Timeline
This seems to be an ongoing development for the time being.

5. Relation to Other Technologies


Neuroelectronics is closely related to bioelectronics, the field that interfaces the human body
with electrical devices. Strictly speaking, neuroelectronics is a branch of bioelectronics, since
the brain and nervous system are part of the human body. Bioelectronics has resulted in
several healthcare applications such as electrocardiography, cardiac pacemakers and blood
glucose meters. Neuroelectronics is also related to biometrics, technologies that
uniquely identify humans based upon one or more physical or behavioral traits. Neuroimaging
technologies can be used to identify humans based on their brain activity patterns. Some have
argued that neuroimaging and BCIs can be used for ambient intelligence systems. Such
systems need as much information about their users as possible regarding their interactions,
thoughts and feelings, and information extracted from the brain, either with (portable)
neuroimaging technologies or with BCIs, is valuable input for an ambient intelligence
system (Gasson & Warwick, 2007). Finally, neuroelectronics is highly interdisciplinary and
receives contributions from computer science, cognitive science, neurosurgery and biomedical
engineering.
6. Critical Issues
Several critical issues are expressed concerning neuroelectronics in one of the texts on
emerging ICT. Neuroimaging and brain-computer interfacing allow processing of neural
signals and it is assumed that neural signals may indicate - even represent - thoughts. Under
what conditions can the extracted neural signals be considered as creative and specific enough
to invoke intellectual-property rights? Furthermore, there is a fundamental right to the
protection of personal data, which states that personal data may only be processed on the
basis of the consent of the person concerned or some other legitimate basis laid down by law. Also, can
certain thoughts, when registered by neuroimaging or brain-computer interfacing, be a work
of invention that falls under copyright law? One question is whether processing of neural
signals (personal data) without consent of the data subject (thus on the basis of another
legitimate basis) can be lawful in any situation. If yes, what situation would that be and under
which conditions? For example, can employers (such as schools afraid of hiring pedophiles,
or intelligence services screening personnel for infiltrators), insurance providers, or the police
(lie detection) ever be allowed to compulsorily process brain signals? (Gasson & Warwick,
2007).

References
Beckert, B., Bluemel, C., Friedewald, M., Thielmann, A. (2008). Converging Technologies
and their impact on the Social Sciences and Humanities (CONTECS). An analysis of

critical issues and a suggestion for a future research agenda. Retrieved January 4, 2010
from http://www.contecs.fraunhofer.de/images/files/contecs_report_complete.pdf
Berger, T. W. (2007). Introduction. In International Assessment of Research and
Development in Brain-Computer Interfaces. Chapter one. Retrieved December 18, 2007,
from http://www.wtec.org/bci/BCI-finalreport-10Oct2007-lowres.pdf
European Technology Assessment Group. (2006). Technology Assessment on Converging
Technologies. Retrieved January 4, 2010 from
http://www.europarl.europa.eu/stoa/publications/studies/stoa183_en.pdf
Friman, O., Luth, T., Volosyak, I. and Graser, A. (2007). Spelling with Steady-State Visually
Evoked Potentials. In 3rd International IEEE/EMBS Conference on Neural Engineering.
355 - 357.
Gasson, M. & Warwick, K. (2007). D12.1 Study On Emerging AmI Technologies. FIDIS -
Future of Identity in the Information Society. [Scientific Report]
Haynes, J-D. (2008). Detecting deception from neuroimaging signals – a data-driven
perspective. Trends in Cognitive Science (12) 4, 126-127.
Kern, D. S. & Kumar, R. (2007). Deep Brain Stimulation. The Neurologist (13) 5, 237-252.
Roco, M.C. & Bainbridge, W.S. (2003). Converging Technologies for Improving Human
Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive
Science. Kluwer Academic Publishers. Retrieved January 4, 2010 from
http://www.wtec.org/ConvergingTechnologies/1/NBIC_report.pdf
Simon, S. (2005). What you don‘t know can‘t hurt you. Retrieved March 27, 2010, from
http://www.brainwavescience.com/LET%20Article.pdf

6.6.9 Quantum Computing

1. History

Blackbody Radiation
Quantum computing extends classical information theory by combining it with quantum
physics, and thereby goes beyond a computer science that has so far been built on a binary
architecture of transistor systems. Quantum information theory opens up the gap between
zero and one by embracing the whole scale of possibilities in between.

This idea is based on the theory of quantum optics that dates back to the end of the 19th
century, when Max Planck modelled "blackbody radiation" (Planck 1899). From the
assumption that the electromagnetic modes in a cavity were quantised in energy, with the
quantum of energy equal to Planck's constant times the frequency, Planck derived a radiation
formula in which the average energy per "mode" or "quantum" is the energy of the quantum
times the probability that it will be occupied. His formula, in contrast to the classical
Rayleigh-Jeans law, explains why the blackbody spectrum does not diverge at high
frequencies.
E = hν / (e^(hν/kT) − 1)
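Planck's average energy per mode can be checked numerically; the short sketch below (rounded constants, illustrative frequencies chosen for the example) confirms that at low frequencies it approaches the classical Rayleigh-Jeans value kT, while at optical frequencies it is strongly suppressed:

```python
import math

h = 6.626e-34   # Planck's constant (J·s), rounded
k = 1.381e-23   # Boltzmann's constant (J/K), rounded

def planck_mean_energy(nu, T):
    """Planck's average energy per mode: E = h*nu / (exp(h*nu/(k*T)) - 1)."""
    x = h * nu / (k * T)
    return h * nu / math.expm1(x)

T = 300.0  # room temperature (K)

# Low frequency: the quantum result approaches the classical equipartition
# value kT predicted by the Rayleigh-Jeans law.
low = planck_mean_energy(1e9, T)    # 1 GHz
print(low / (k * T))                # very close to 1

# High (optical) frequency: the Planck energy is strongly suppressed,
# avoiding the classical "ultraviolet catastrophe".
high = planck_mean_energy(1e15, T)
print(high / (k * T))               # essentially 0
```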

Einstein-Podolsky-Rosen paradox
This quantisation assumption led Albert Einstein, during his "annus mirabilis" in 1905, to the
hypothesis of the existence of particles of light, the so-called photons32, and later, in 1935, to
the publication of the paradoxical thought experiment by Einstein, Podolsky and Rosen
(EPR, 1935). The uncertainty principle33 of Werner Heisenberg, illustrated by the
double-slit experiment34, in which an interference pattern from photons only shows up
when there is no measurement, marks the beginning of a controversy in which Albert Einstein
asked whether "the Moon wouldn't be there if nobody looked" and whether "God plays dice".
The debate over the validity of the experiment between Niels Bohr, Max Planck and Albert
Einstein gave rise to what is known as the Copenhagen interpretation. In the experiment,
the correlated state of two entangled photons, one assuming the corresponding condition when
the other is changed, was dismissed by Einstein as "spooky action". This correlation of entangled
objects is called quantum entanglement, or quantum non-local connection. It represents the core
of the EPR paradox and is a reason why quantum theory is seen as the most fundamental
theory, offering an explanation of the stability of matter.

32 http://dx.doi.org/10.1002%2Fandp.19053220607

33 Δx·Δp ≥ ℏ/2 (Heisenberg 1927)
34 http://www.physics.uc.edu/~sitko/CollegePhysicsIII/24-WaveOptics/WaveOptics.htm

Quantum Information
The deep insight into quantum information theory came with John Bell‘s analysis of the EPR
publication, building a link between quantum mechanics and information theory. He noticed
the importance of the correlation between separated quantum systems which have interacted
(directly or indirectly) in the past, but which no longer influence one another. His argument
showed that the degree of correlation, which can be present in such systems, exceeds what
could be predicted on the basis of any law of physics which describes particles in terms of
classical variables rather than quantum states. Bell‘s argument was clarified later on by others
and experimental tests were carried out in the 1970s (Steane 1998).

Qubits and Superposition


One of the pioneers of quantum computing is Richard Feynman, a Nobel Prize winner in
physics, who observed in the early 1980s that certain quantum mechanical effects cannot be
simulated efficiently on a binary processing computer system. While Moore's law35 describes
the exponential growth of processing power as transistors shrink, this miniaturisation
might one day reach the scale of a qubit.

The qubit, or quantum bit, is a unit of quantum information: in contrast to the classical bit
(one or zero), the qubit can be zero, one, or a superposition of both
(Schumacher 1995). Schrödinger illustrated the superposition phenomenon, as an answer
to the EPR paradox, with his thought experiment about a cat in a box36 that can exist in a
superposition of life and death (Schrödinger, 1935).
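The superposition idea can be sketched in a few lines of code; the following is a minimal statevector illustration (standard textbook formalism, not taken from the source) of a qubit and the Hadamard gate that puts it into an equal superposition:

```python
import math

# A qubit is modelled as a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# measurement yields 0 with probability |a|^2 and 1 with probability |b|^2.
# Real amplitudes suffice for this illustration.

zero = (1.0, 0.0)  # the classical-like state |0>

def hadamard(state):
    """Apply the Hadamard gate, which creates an equal superposition."""
    a, b = state
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

plus = hadamard(zero)
print(probabilities(plus))             # ≈ (0.5, 0.5): equal chance of 0 and 1
print(probabilities(hadamard(plus)))   # ≈ (1.0, 0.0): back to |0>
```

Applying the Hadamard gate twice returns the qubit to its initial state, a small example of the interference effects that classical bits cannot exhibit.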

From experiments to practice


Feynman's observation led to the speculation that perhaps computation in general could
be done more efficiently if it made use of these quantum effects (Bone and Castro 1997; Rieffel and
Polak 2000). Quantum effects had been shown in experiments on tiny objects
using sophisticated equipment37, but, as Rieffel and Polak (2000, p. 300) observe,
building a quantum computer has not been easy: "Computational machines that use such
quantum effects proved tricky, and as no one was sure how to use the quantum effects to
speed up computation, the field developed slowly. It was not until 1994, when Peter Shor38
surprised the world by describing a polynomial time quantum algorithm for factoring integers,
that the field of quantum computing came into its own. This discovery prompted a flurry of
activity among experimentalists trying to build quantum computers and theoreticians trying
to find other quantum algorithms."

Spooky action, teleportation and entanglement in practice


Nevertheless, in 1985 the British quantum physicist David Deutsch wrote a paper on
Quantum Turing Machines and the first quantum algorithm (Deutsch 1985). For his work,

35 ftp://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf
36 http://www.tu-harburg.de/rzt/rzt/it/QM/cat.html
37 http://www.quantum.at/typo3temp/pics/1a46307c10.jpg
38 http://www-math.mit.edu/~shor

Deutsch won a computation science prize39 in 2005 and is therefore seen as a
founder of quantum computing. Since 2004 the concept of quantum teleportation through
entanglement has been demonstrated in experiments in Vienna and Innsbruck, Austria, and
in the Canary Islands, where a team of British and Austrian physicists transmitted entangled
photons over 144 kilometres between the islands of La Palma and Tenerife40 (Zeilinger, 2004 & 2007).

Building on these experiments, scientists aim to lay the foundations for practical quantum
computation tasks such as quantum information storage, quantum calculation and quantum
teleportation. In the future this might even offer ways to decode the blueprint of life, DNA,
where entanglement is seen in bioinformatics as a possible explanation of what holds
life's blueprint together (Ananthaswamy 2010).

2. Application Areas/Examples
Quantum computers are expected to be built not as general-purpose devices but for specific
applications. Nor will quantum teleportation be realised in the form of an
artefact that teleports large objects as a new means of mobility. Nonetheless, in general,
almost any kind of application is assumed to become possible for quantum computing
at some point in the future. However, the most promising application areas for the near future
are expected to "be purpose-built tools that exploit quantum rules to improve on existing
technologies such as atomic clocks and photonic technology…" (Ball 2006).

On the other hand, it has been stated that in the longer run quantum computing could
have a revolutionary effect, especially in areas such as optimisation, code
breaking, DNA and other forms of molecular modelling, large-database access, encryption,
stress analysis for mechanical systems, pattern matching and data forecasting. One of the
basic features of quantum computers is that they can draw on absolute
randomness, and thus lend more veracity to programs that need absolute randomness in their
processing. Randomness plays a significant part in applications that rely heavily on
statistical approaches, in simulations, in code making, and in randomised algorithms for
problem solving. In comparison to absolute randomness, measured for example from the
localisation of a photon, the pseudo-randomness generated by a transistor-based processor
is much weaker and can be re-calculated. The absolute randomness of a quantum computer
cannot be re-calculated and would surprise even its creator.
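The difference between classical pseudo-randomness and quantum randomness can be illustrated as follows (a sketch using Python's built-in generator as a stand-in for any classical pseudo-random generator):

```python
import random

# Given the same seed, a classical pseudo-random generator reproduces
# exactly the same "random" sequence, so its output can in principle be
# re-calculated by anyone who knows the seed and the algorithm.

def draw(seed, n=5):
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

run1 = draw(seed=42)
run2 = draw(seed=42)
print(run1 == run2)  # True: the sequence is fully determined by the seed

# Quantum randomness, by contrast, has no underlying seed or internal state
# to recover, which is why it cannot be re-calculated after the fact.
```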

Scientific instrument
As a scientific research tool the quantum computer could have revolutionary impact because
of its ability to simulate other quantum systems (Ball 2006). For example, a quantum
computer can simulate physical processes of quantum effects in real time. Molecular
simulations of chemical interactions will allow scientists to learn more about how their
products interact with each other, and with biological processes (e.g. how a drug may interact

39 http://www.edge.org/3rd_culture/prize05/prize05_index.html
40 http://www.quantum.at/research/quantum-teleportation-communication-entanglement/entangled-photons-
over-144-km.html

with a person‘s metabolism or disease).

Encryption Algorithm
Until now, encryption algorithms have been based on keys generated from large
prime numbers. These keys come in pairs, a private and a public key, which are
used to encrypt and later decrypt data. When two parties, say Alice and Bob,
exchange sensitive messages, Alice encrypts her message with Bob's public key,
and only Bob, who holds the matching private key, can decrypt it. To reply, Bob
encrypts with Alice's public key, and only Alice's private key can decrypt the answer. This
method is called public-key cryptography, and its safety relies on the computational
difficulty of factoring the product of large primes on a conventional computer.
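As an illustration only, and with numbers far too small to be secure, a toy version of the public-key scheme might look like this (the primes and the message are invented for the example):

```python
# Toy RSA-style sketch. The public key is (n, e), the private key is d, and
# security rests on the difficulty of factoring n back into p and q.

p, q = 61, 53            # two (tiny) primes; real keys use hundreds of digits
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: e*d ≡ 1 (mod phi), Python 3.8+

message = 65                  # a message encoded as a number < n
cipher = pow(message, e, n)   # anyone can encrypt with the PUBLIC key
plain = pow(cipher, d, n)     # only the PRIVATE key holder can decrypt
print(plain == message)       # True: decryption recovers the message

# A quantum computer running Shor's algorithm could factor n quickly,
# recover p and q, recompute d, and so break the scheme.
```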

Fig. 1qc. Public-key cryptography. (c) 2010 by AA CC-BY-SA 3.0

Quantum key distribution (QKD) offers the possibility of establishing an encryption key
without exposing key material in transit: quantum entanglement removes the need to
exchange public keys and, more importantly, the resulting key is based on absolute
quantum randomness. Any eavesdropping on the entangled states would disturb them
and thus be detectable by Alice and Bob before the message is encrypted.
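What a shared random key buys Alice and Bob, once it has been distributed by whatever means, can be sketched as follows (illustrative; `secrets.token_bytes` merely stands in for a QKD-derived key):

```python
import secrets

# Once both parties hold the same truly random bytes, a simple XOR one-time
# pad encrypts with information-theoretic security, provided the key is as
# long as the message and never reused.

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"transfer 100 EUR"
key = secrets.token_bytes(len(message))  # stand-in for a QKD-derived key

cipher = xor_bytes(message, key)    # Alice encrypts
recovered = xor_bytes(cipher, key)  # Bob decrypts with the same key
print(recovered == message)         # True: the message round-trips exactly
```

The hard part is not the encryption itself but distributing the key securely, which is precisely the problem QKD addresses.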

Fig. 2qc. Quantum-key cryptography. (c) 2010 by AA CC-BY-SA 3.0

The advent of quantum computing could therefore jeopardise public-key
encryption, the scheme that today secures, for example, bank transactions and the keys on
credit cards. As quantum cryptography has been proved and tested in several experiments,
this application could be introduced before the commonly used public-key schemes are
rendered insecure.

Factoring the large numbers on which public-key cryptography relies, a task intractable for
classical computers, is seen as an easy job for a quantum computer.

Quantum teleportation
"In 1993 an international group of six scientists, including IBM Fellow Charles H. Bennett,
confirmed the intuitions of the majority of science fiction writers by showing that perfect
teleportation is indeed possible in principle, but only if the original is destroyed. In
subsequent years, other scientists have demonstrated teleportation experimentally in a variety
of systems, including single photons, coherent light fields, nuclear spins, and trapped ions.
Teleportation promises to be quite useful as an information processing primitive, facilitating
long range quantum communication (perhaps ultimately leading to a "quantum internet"), and
making it much easier to build a working quantum computer. But science fiction fans will be
disappointed to learn that no one expects to be able to teleport people or other macroscopic
objects in the foreseeable future, for a variety of engineering reasons, even though it would
not violate any fundamental law to do so".41

Experiments with entangled photons have shown that, because the state of entangled
photons changes immediately, time appears to become irrelevant. Albert Einstein's theory of
relativity says that, approaching the speed of light, time passes more slowly, and the state of
quantum entanglement seems to be one in which time does not pass at all. For a qubit,
now is the only moment that exists, containing both future and past. With quantum
teleportation experiments, physicists are showing that teleportation can be done bypassing
time.

Brain-Computer interfaces and Quantum robots


"The actual Brain-Computer Interface (Brain Gate) attempts to use brain signals to drive
suitable actuators performing the actions corresponding to subject's intention. However this
goal is not fully reached, and when BCI works, it does only in particular situations. The
reason of this unsatisfactory result is that intention cannot be conceived simply as a set of
classical input-output relationships. It is therefore necessary to resort to quantum theory,
allowing the occurrence of stable coherence phenomena, in turn underlying high-level mental
processes such as intentions and strategies. More precisely, within the context of a dissipative
Quantum Field Theory of brain operation it is possible to introduce generalised coherent
states associated, within the framework of logic, to the assertions of a quantum language."
(Pessa & Zizzi 2009, p.1).

Quantum money and quantum memory


In 1970 Stephen Wiesner came up with the concept of quantum money, an application in which
20 photons would be trapped in each bill, serving as a unique serial number so that the bill
could not be counterfeited. His idea was published in 1983, but remains
fiction, as it works only in theory: nobody has yet succeeded in building robust

41 http://www.research.ibm.com/quantuminfo/teleportation/

photon traps.

Just lately, however, scientists at the Australian National University have apparently managed to
build a solid-state quantum memory.42 "The device allows the storage and recall of light
more faithfully than is possible using a classical memory, for weak coherent states at the
single-photon level through to bright states of up to 500 photons. For input coherent states
containing on average 30 photons or fewer, the performance exceeded the no-cloning limit.
This guaranteed that more information about the inputs was retrieved from the memory than
was left behind or destroyed, a feature that will provide security in communications
applications."43

Quantum features of consciousness


In 2000 Mensky suggested an approach called the Extended Everett Concept (EEC) (Mensky, 2009).
Mensky claims that EEC offers the shortest line of reasoning connecting quantum theory
with consciousness. He continues: "consciousness, or rather complex consisting of
explicit consciousness and super-consciousness (manifesting itself in the regime of
unconscious), is a human's ability providing the best possible orientation in the world.
According to EEC, consciousness is not produced by brain, but is independent of it. The brain
serves as an interface between conscious and the body" (Mensky 2009).

Mensky states that: "whole quantum world is a sort of quantum computer supporting the
phenomenon of consciousness and super consciousness. Instead of being an origin of
consciousness, real quantum computers can be used to construct models of quantum world
demonstrating how the phenomena of life and consciousness may exist. Due to special
features of human super consciousness, it cannot be replaced by the action of any technical
device or even any material system. However, technical equipment may be used to make
usage of super-consciousness more efficient." (Mensky 2009).

Quantum Information Network


Looking at the history of the World Wide Web and its development since Tim Berners-Lee
proposed it in 1989, the information pool has grown to such an extent that finding the right
data with search engines has become a challenging task. Semantic web solutions have
merely introduced a meta level between the data and the user, with descriptions and
meta-descriptions generating an endless self-referential process, an Ouroboros44, the
snake that eats its own tail. Governments and authorities are struggling to introduce
barriers and expiry dates for information, as many begin to understand that "the
internet never forgets".

The proposal of a quantum information network would introduce a new way of
handling information, in which information would no longer have to be
42 http://www.heise.de/newsticker/meldung/Lichtspeicher-fuer-Quantencomputer-1030662.html
43 http://www.nature.com/nature/journal/v465/n7301/full/nature09081.html
44 http://hermetic.com/texts/plato/timaeus.html

searched for, as it would appear at the very moment it is requested.

3. Definition and Defining Features


A definition covering some of the basic ideas behind quantum computing, and the differences
between so-called traditional computing and quantum computing, is presented below:

Quantum computer basics: "In the classical model of a computer, the most fundamental
building block, the bit, can only exist in one of two distinct states, a 0 or a 1. In a quantum
computer the rules are changed. Not only can a 'quantum bit', usually referred to as a 'qubit',
exist in the classical 0 and 1 states, it can also be in a coherent superposition of both. When a
qubit is in this state it can be thought of as existing in two universes, as a 0 in one universe
and as a 1 in the other. An operation on such a qubit effectively acts on both values at the
same time. The significant point being that by performing the single operation on the qubit,
we have performed the operation on two different values. Likewise, a two-qubit system would
perform the operation on 4 values, and a three-qubit system on eight. Increasing the number
of qubits therefore exponentially increases the 'quantum parallelism' we can obtain with the
system. With the correct type of algorithm it is possible to use this parallelism to solve certain
problems in a fraction of the time taken by a classical computer" (Bone and Castro 1997).
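The exponential growth of "quantum parallelism" described in the quotation can be made tangible with a classical statevector simulation (a sketch; the function name is an invention for the example):

```python
import math

# An n-qubit register holds 2**n complex amplitudes; a single quantum gate
# acts on all of them at once. Here we build the state that results from
# applying a Hadamard gate to every qubit of |00...0>.

def uniform_superposition(n_qubits):
    """Equal superposition over all 2**n basis states."""
    dim = 2 ** n_qubits
    amp = 1.0 / math.sqrt(dim)
    return [amp] * dim

for n in (1, 2, 3, 10):
    state = uniform_superposition(n)
    print(f"{n} qubit(s) -> {len(state)} amplitudes")
# 1 qubit(s) -> 2 amplitudes ... 10 qubit(s) -> 1024 amplitudes

# Simulating this on a classical machine costs memory exponential in n,
# which is the flip side of the quantum computer's exponential parallelism.
```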

"Imagine a macroscopic physical object breaking apart and multiple pieces flying off in
different directions. The state of this system can be described completely by describing the
state of each of its component pieces separately. A surprising and unintuitive aspect of the
state space of an n-particle quantum system is that the state of the system cannot always be
described in terms of the state of its component pieces. It is when examining systems of more
than one qubit that one first gets a glimpse of where the computational power of quantum
computers could come from." (Rieffel and Polak 2000).

Quantum computation is widely expected to solve efficiently some of the most difficult
problems in computational science and thereby to change dramatically the development and
implementation of future information and communication systems (examples include integer
factorisation, discrete logarithms, and quantum simulation and modelling, which are
intractable on any present or future conventional computer).

Defining Features
- Much faster computing for special purposes: quantum computing may offer new tools
for different fields of science and research.

- Simulation of various phenomena, which amounts to understanding them: it is
expected that quantum computing may support new kinds of simulations (e.g. at the
nanoscale or macroscale) and thereby also increase understanding of phenomena not yet
understood.

- A totally new way of computing: a new computing paradigm may require totally
new skills.

- Revealing mysteries of the world in general: quantum computing and quantum physics
are expected to increase our overall understanding of how the world, or the human mind,
is constructed.

4. Time Line
Quantum computing has been actively researched for around 20 years, although many of the
theoretical concepts and ideas are 100 years old. While elementary applications of quantum
computing exist, it will take around 5-10 years45 for quantum computers to be introduced into
wide common use, even as some commercial actors from the field of semantic-system
applications46 and search engines47 are starting to claim to offer such solutions.

5. Relation to other Technologies

Artificial intelligence and Robotics


The theories of quantum computation have implications for the world of artificial
intelligence. The debate about whether a computer will ever be truly artificially
intelligent has been going on for years and has largely been philosophical. One strong claim
is that the human mind does things that are not, even in principle, possible on a Turing
machine. The theory of quantum computation, however, looks at the question of
consciousness from a different perspective. Some developers of quantum computer theory
state that every physical object (from a rock to the universe as a whole) can be regarded as a
quantum computer and that any detectable physical process can be considered a computation.
Under these criteria, it has been argued, the brain can be regarded as a computer and
consciousness as a computation. Bone and Castro (1997) state that the "next stage of the
argument is based in the Church-Turing principle", according to which every computer is
functionally equivalent and any given computer can simulate any other. So it should
ultimately be possible to simulate conscious rational thought on a quantum computer. For
some, quantum computing offers the solution to the problem of artificial intelligence, but
many disagree. It has been suggested that an even more exotic and as yet unknown physics
may be required to understand human consciousness
(Bone & Castro 1997; Mensky 2009; Pessa & Zizzi 2009).

6. Critical Issues
It has been stated that, while quantum computing is a newly emerging technological field, it
has the potential to change very dramatically the way we think about computation,
programming and complexity. A new computing paradigm may raise many critical issues.
Quantum computing is currently in an exciting research phase in which not many possible
critical issues have yet been reported. On the other hand, quantum computing may be used as
an enabling technology for the development of, for example, artificial intelligence or
robotics, so the same critical issues may then be confronted at the application level (e.g.
autonomy and rights).

45 http://www.nature.com/nature/journal/v440/n7083/pdf/440398a.pdf
46 http://www.semanticsystem.com
47 http://www.dwavesys.com

References
Ananthaswamy, A. (2010). Quantum entanglement holds together life's blueprint. New
Scientist, 15 July 2010.
Zeilinger, A., Aspelmeyer, M. & Brukner, C. (2004). Entangled photons and quantum
communication.
Ball, P. (2006) Champing at the bits. Nature, Vol 440, 23 March 2006
Bone, S and Castro, M. (1997) A Brief History of Quantum Computing.
http://www.doc.ic.ac.uk/~nd/surprise_97/journal/vol4/spb3/
Bouwmeester, D., Mattle, K., Pan, J., Weinfurter, H., Zeilinger, A., Eibl, M., et al. (1998).
Experimental quantum teleportation of arbitrary quantum states. Applied Physics B:
Lasers and Optics, 67(6), 749-752. doi: 10.1007/s003400050575.
Deutsch, D. (1985). Quantum Theory, the Church-Turing Principle and the Universal
Quantum Computer. Proceedings of the Royal Society A: Mathematical, Physical and
Engineering Sciences, 400(1818), 97-117. doi: 10.1098/rspa.1985.0070.
Einstein, A., Podolsky, B., & Rosen, N. (1935). Can Quantum-Mechanical Description of
Physical Reality Be Considered Complete? Physical Review, 47(10), 777-780.
Haug, F., Freyberger, M., Vogel, K., & Schleich, W. P. 5.1 Quantum optics.
Mensky, M.B. (2009). Quantum features of consciousness, computers and brain. The 9th
WSEAS International Conference on Applied Computer Science (ACS‘09), Genova,
Italy, October 17-19, 2009
Mukunda, N. (2008). Max Planck - Founder of quantum theory. Resonance, 13(2), 103-105.
doi: 10.1007/s12045-008-0026-9.
Pessa, E. and Zizzi, P. (2009) Brain-Computer Interfaces and Quantum Robots. Physical and
Cognitive Mutations in Humans and Machines", Laval (France), 24-25 April 2009
Planck, M. (1899). Über irreversible Strahlungsvorgänge : fünfte Mitteilung (Schluss). Berlin:
Reimer
Poppe, A., Fedrizzi, A., Ursin, R., Böhm, H., Lörunser, T., Maurhardt, O., et al. (2004).
Practical quantum key distribution with polarization entangled photons. Optics express,
12(16), 3865-71.
Quantum Information Science and Technology Experts Panel (2004). A Quantum Information
Science and Technology Roadmap, Part 1: Quantum Computation.
Rieffel, E. and Polak, W. (2000). An Introduction to Quantum Computing for Non-Physicists.
ACM Computing Surveys, Vol. 32, No. 3, September 2000, pp. 300-335.
Schumacher, B. (1995). Quantum coding. Physical Review A, 51(4), 2738 LP – 2747.
American Physical Society.
Schrödinger, E. (1935). Die gegenwärtige Situation in der Quantenmechanik.
Naturwissenschaften, 23, 807-812; 823-828; 844-849.
Steane, A. (1998). Quantum computing. Rep. Prog. Phys. 61 (1998) 117–173.
Zeilinger, A. (2006). Quantum Computation and Quantum Communication with Entangled
Photons. doi: 10.1038/nphys629.

Research Groups
Institute for theoretical physics at Vienna University, Austria48
Max Planck Institute for physics, Munich, Germany49
Theoretische Quantenphysik Technische Universität Darmstadt, Germany50
Physics of Information / Quantum Information Group at IBM Research Yorktown51
Oxford Center for Quantum Computation, UK.52
Oxford Foundations of Computer Science Research Group, UK53
Oxford Mathematical Physics Group, UK54
MIT Quantum Information Science, USA55
Institute for Quantum Computation, Canada56
University of Leeds Quantum Information group, UK.57
The Cambridge Center for Quantum Computation, UK58

48 http://quanten.at
49 http://www.mpp.mpg.de/
50 http://www.iap.tu-darmstadt.de/tqp
51 http://www.research.ibm.com/physicsofinfo
52 http://www.qubit.org
53 http://se10.comlab.ox.ac.uk:8080/FOCS/PhysicsandCS_en.html
54 http://www.maths.ox.ac.uk/groups/mathematical-physics
55 http://qis.mit.edu
56 http://www.iqc.ca/
57 http://www.qi.leeds.ac.uk
58 http://cam.qubit.org

6.6.10 Robotics

1. History
The term "robot" was derived from early 20th century literature (Čapek 1920) to
denote machines that are apparently able to work independently of direct human
control. There is currently no uniformly agreed definition of "robot", although the
international standard ISO 8373 defines a robot as "an automatically controlled,
reprogrammable, multipurpose manipulator programmable in three or more axes,
which may be either fixed in place or mobile for use in industrial automation
applications." However, mobile robots are increasingly being designed for
applications outside of industry. These must have the facility to acquire information
from their environment and to adapt their actions according to this information.
An important element in all robots is computer programming, usually based on
research in "artificial intelligence"59. Other contributions come from electrical
engineering and mechanical engineering. One could argue that robotics is a
converging technology, because it is an interdisciplinary engineering discipline in
which multiple fields converge.
2. Application Areas/Examples
In principle robots can come in many shapes and sizes, but the typical industrial
robot, of which many thousands exist, has a single arm, is fixed in place and is
programmed to do a single task, such as welding or spraying automobiles. It is
isolated from humans for reasons of labour safety. More recent developments have
produced so-called softbots, or virtual robots, which are programmed to
perform tasks autonomously on the internet and in other virtual environments. There
is great military interest in the development of robots capable of replacing human
soldiers and superior to them in physical and psychological respects.
Recent interest has focused strongly on robots designed for applications in healthcare,
the household and the military. A pioneering role here is played by Japan, which is
facing a grave aging problem and sees the potential to substitute human work in
these fields of application. There is discussion about the degree to which such robots
should be anthropomorphic (human-like), either to achieve greater acceptance by
humans or to better negotiate environments otherwise inhabited by humans.
In healthcare, for example, work is being done on robots to lift patients in and out of
bed. Also, humanoid and animal-like robots (pet robots) could be used as companions
for the elderly or others who may feel the need for companionship. The appeal of
these kinds of robots is thought to be partially due to their therapeutic qualities,
resulting in a reduction of stress and loneliness among the elderly and infirm (Harper,
2008). Robots are being designed for household use as automatic vacuum cleaners,
lawn mowers or floor cleaners.
59 This description is not concerned with purely mechatronic robots without any artificially intelligent component, such as existing industrial robots.

The more complex forms of care also require robots capable of offering affection and
apparently able to communicate with their surroundings. Japan has been heavily
investing in a research programme with the title Humanoid and Human Friendly
Robotics Systems. This has produced a humanoid robot that is 160 cm tall, weighs 90
kg and which can move at a speed of two kilometers per hour (also over uneven
surfaces). The European Robot network (EURON, a partnership of 22 European
countries) also expects a major role for the use of robots in the household
environment. In the short term, a shift of focus is envisaged from industrial robots
(especially in the car industry) to other areas including health care (European
Technology Assessment Group, 2006).

Robots could be applied in several societal fields. Contemporary robots are used in
manufacturing, transport, earth and space exploration, surgery, weaponry, laboratory
research and safety, as well as in the mass production of consumer and industrial
goods. However, it is predicted that future robots will emerge above all in the
military, households and healthcare. Below are five examples of this.
- Service robots in households. 'Robots could help the elderly and disabled
with certain activities such as picking up objects and cooking meals' (Harper,
2008, p. 20).
- Robots as companions. 'Robots could provide a companion to talk to or
cuddle, as if they were pets or dolls. The appeal of these kinds of robots is
thought to be partially due to their therapeutic qualities, being able to reduce
stress and loneliness among the elderly and infirm' (Harper, 2008, p. 20).
- Robots as soldiers. 'The Gladiator Tactical Unmanned Ground Vehicle
program will support Marine Corps conduct of Ship to Objective Maneuver
(STOM) through the use of a small-medium sized mobile robotic system to
minimize risk and neutralize threats to Marines across the spectrum of
conflict. Gladiator will perform scout/surveillance, NBC reconnaissance,
direct fire, and personnel obstacle breaching missions in its basic
configuration' (GlobalSecurity, 2010). Military robots are not necessarily
limited to serving as weapons platforms and can also have other battlefield
tasks.
- Robots for therapeutic purposes. 'Social interaction of people with mental,
cognitive and social handicaps (e.g., autistic children or elderly people with
dementia) is essential to their social participation but proves to be a major
challenge for healthcare. Robotized systems can support human care and offer
unprecedented therapeutic functionality that will develop or maintain social
skills which would not be available without these systems or would vanish.
Results can be expected in terms of developing basic social skills through play
or maintaining skills to deal with everyday life. These robotic systems can be
programmed to generate all kinds of communicative reactions (e.g. sounds and
colors), invite to move or play games, stimulate friendly face expressions and
can also learn to adapt to the individual person' (TNO - Quality of Life, 2008).

- Robot swarms. 'Autonomous miniature robots, the size of a few millimeters,
behaving as ecological systems such as a swarm of bees or ants, are seen to
have significant commercial and military applications for surveillance,
microassembly, biological, medical, or any tasks where a collective action can
provide a greater benefit than a single unit or small groups of robots. Although
the technological challenges are similar to traditional wireless sensor
networks, swarm robots are an order of magnitude smaller and consequently
will challenge the technologists even further. One critical aspect is the short
range (centimeter) communication link required between the robots to enable
them to perform their actions collectively' (Ofcom, 2008, p 61).
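The collective behaviour described in the quotation above can be illustrated with a toy cohesion rule of the kind used in swarm-robotics simulations. This is a hypothetical sketch, not the control software of any deployed system: each robot reacts only to neighbours within its short communication range, yet clustering emerges at the level of the swarm.

```python
def swarm_step(positions, neighbour_range=1.0, gain=0.1):
    """One synchronous update of a toy cohesion rule: each robot moves
    slightly towards the centroid of the neighbours it can reach over its
    short-range communication link."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Only neighbours within Manhattan distance neighbour_range are "heard".
        neighbours = [(px, py) for j, (px, py) in enumerate(positions)
                      if j != i and abs(px - x) + abs(py - y) <= neighbour_range]
        if neighbours:
            cx = sum(p[0] for p in neighbours) / len(neighbours)
            cy = sum(p[1] for p in neighbours) / len(neighbours)
            # Move a fraction of the way towards the local centroid.
            x, y = x + gain * (cx - x), y + gain * (cy - y)
        new_positions.append((x, y))
    return new_positions

# Three robots drift together over repeated local-rule updates.
robots = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
for _ in range(20):
    robots = swarm_step(robots)
```

No robot ever sees the whole swarm, yet the spread of the group shrinks with every step; this locality is what makes the short-range communication link mentioned by Ofcom the critical design constraint.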

3. Definition and Defining Features


This document is confined to robots in the military, households and healthcare, since
these are the areas in which robots are predicted to develop rapidly. Robots in these
areas can be defined as follows: robots are machines with motor function that are
able to perceive their environment and operate autonomously so that they can replace
human effort. Below are a number of features that define robots in the military,
households and healthcare.

- Motor function: robots are equipped with limbs able to move in three or more
axes and with devices fulfilling the function of hands, enabling the
performance of tasks usually performed by humans. The robots we are dealing
with here are usually mobile; some have wheels, while fully humanoid
robots have legs and arms.
- Mobile energy system: a mobile robot needs a highly mobile and
miniaturized energy system to function.
- Sensory perception: robots receive information on their environment which
is required to perform the tasks for which they have been programmed. This is
usually done with visual perception (cameras equipped with pattern-recognition
software), but sometimes also with tactile feedback provided by
sensors. Through perception robots receive real-time feedback on the
environment in which they operate.
- Autonomy/agency: robots have some degree of autonomy or agency. They
are programmed to make decisions (based on their perceptions) without direct
human intervention. There is furthermore research on robots that program
themselves.
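The perception and autonomy features above are commonly summarized as a sense-decide-act loop. The following minimal sketch (with hypothetical names and thresholds; a toy obstacle-avoidance policy, not the control software of any actual robot) illustrates how a decision can be taken from noisy sensor input without human intervention:

```python
import random

def sense(environment):
    """Sensory perception: a noisy distance reading to the nearest
    obstacle, standing in for camera or tactile sensor input."""
    return environment["obstacle_distance"] + random.uniform(-0.05, 0.05)

def decide(distance, threshold=0.5):
    """Autonomy in miniature: choose an action from perception alone,
    with no human in the loop."""
    return "turn" if distance < threshold else "forward"

def act(environment, action, step=0.1):
    """Motor function: the chosen action changes the robot's relation
    to its environment."""
    if action == "forward":
        environment["obstacle_distance"] -= step
    else:  # turning away faces the robot towards the next, farther obstacle
        environment["obstacle_distance"] = 2.0
    return action

# Five passes of the sense-decide-act loop.
env = {"obstacle_distance": 0.3}
actions = [act(env, decide(sense(env))) for _ in range(5)]
```

The point of the sketch is structural: real robots replace each of these three stubs with far richer components (pattern recognition, planners, actuators), but the closed loop between perception and action is what distinguishes a robot from a merely programmed machine.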

4. Timeline
Only one of the sources in the database makes a prediction about when robots will be
implemented in society and that is in 2015. However, the robots the authors are
referring to are ‗next generation robot-controlled system based on artificial
intelligence, sensors and tele-control related technologies, mutual sensing technology
between human beings and robots‘ (Korean Government, 2000). This description is
not very specific although it implies that rapid progress is expected in all of the fields

114
mentioned. Considering that this is a very isolated prediction, it is to be regarded as
very optimistic and taken with caution.

5. Relation to other Technologies


In order to build a robot, an engineer or group of engineers draws on electrical
engineering, mechanical engineering and AI. These are the building blocks of
robotics. Some (Huw Arnall, 2003) have argued that robotics is a branch of AI,
because robotics depends crucially on research in AI. The design of robots also relies
on human-robot interaction research, which draws on (cognitive) psychology and
philosophy. Robotics is also related to the visions of ubiquitous computing, the
internet of things and ambient intelligence. Ubiquitous computing is the idea that
computing devices are everywhere: they are ubiquitous. The internet of things is the
idea that computing devices are increasingly (wirelessly) interconnected and form
intelligent networks. Ambient intelligence is the idea that electronic devices in our
homes, offices, hospitals, cars and public spaces will be embedded, interconnected,
adaptive, personalized, anticipatory and context-aware. Robots in households and
healthcare (and perhaps in other areas as well) fit nicely into these visions. Robots
are likely to be connected to other computational devices and could thereby be part
of an ambient intelligence system.

6. Critical Issues
'The knowledge-processing aspects of robots are, however, still limited: in particular
the higher cognitive processes such as perception, decision taking, learning and action
still pose major challenges. Despite progress in some areas within cognitive systems
and models, the provisional conclusion is that many hurdles still have to be overcome
before an artificial system will be created which approaches the cognitive capacities
of humans' (European Technology Assessment Group, 2006, p. 36).

'Associated with this reality check is the recognition that classical attempts at
modeling AI, based upon the capabilities of digital computers to manipulate symbols,
are probably not sufficient to achieve anything resembling true intelligence. This is
because symbolic AI systems, as they are known, are designed and programmed
rather than trained or evolved. As a consequence, they function under rules and, as
such, tend to be very fragile, rarely proving effective outside of their assigned domain.
In other words, symbolic AI is proving to be only as smart as the programmer who
has written the programmes in the first place' (Huw Arnall, 2003, p. 43). One should
concede, however, that not all robots are necessarily based on the assumptions of
symbolic AI.
The following ethical issues are mentioned in the database in relation to robots.

- Robot autonomy and robot rights. 'Many of the major ethical issues
surrounding AI-related development hinge upon the potential for software
and robot autonomy. In the short term, some commentators question whether
people will really want to cede control over our affairs to an artificial
intelligent piece of software, which might even have its own legal rights.
While some autonomy is beneficial, absolute autonomy is frightening. For one
thing, it is clear that legal systems are not yet prepared for high autonomy
systems, even in scenarios that are relatively simple to envisage, such as the
possession of personal information. In the longer-term, however, in which it is
possible to envisage extremely advanced applications of hard AI, serious
questions arise concerning military conflict, and robot take-over and machine
rights' (Huw Arnall, 2003, p. 57).

- Robots overtaking humankind. There is a faction of AI researchers seeking
to create an artificial intelligence superior to that of humans. One proponent of
this position, Hans Moravec, called a popular book "Mind Children"
(Moravec, 1988), arguing that AI would one day inherit the position of
humans as the most powerful intelligence on earth. The idea of a super-intelligence
is also at the root of the "singularity" popularized by Ray
Kurzweil (2005): technology exceeding the capacity and processing power of
the human brain should be technologically feasible within the foreseeable
future. Other approaches seek to enhance human beings with the results of
research in AI and robotics. A well-known proponent of this approach is Kevin
Warwick, Professor of Cybernetics at the University of Reading, UK.

References
Čapek, K. (1920). R.U.R. (Rossum's Universal Robots). Originally published in 1920;
latest publication in 2010 by Echo.
European Technology Assessment Group. (2006). Technology Assessment on
Converging Technologies. Retrieved December 28, 2009, from
http://www.europarl.europa.eu/stoa/publications/studies/stoa183_en.pdf.
GlobalSecurity. (2010). Gladiator Tactical Unmanned Ground Vehicle. Retrieved
March 22, 2010, from
http://www.globalsecurity.org/military/systems/ground/gladiator.htm.
Harper, R., Rodden, T., Rogers, Y. & Sellen, A. (2008). Being Human – Human-
Computer Interaction in the year 2020. Microsoft Research. Retrieved
November 22, 2009, from
http://research.microsoft.com/enus/um/cambridge/projects/hci2020/downloads
/BeingHuman_A4.pdf.
Huw Arnall, A. (2003). Future Technologies, Today‘s Choices. Nanotechnology,
Artificial Intelligence and Robotics; A technical, political and institutional
map of emerging technologies. Retrieved December 28, 2009, from
http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf.
Kurzweil, R. (2005): The Singularity is Near. When humans transcend biology. New
York: Viking.
Moravec, H. (1988). Mind Children. Cambridge, Mass.: Harvard University Press.
Federal Ministry of Education and Research. ICT 2020 - Research for Innovation.
Retrieved February 14, 2010, from http://www.bmbf.de/pub/ict_2020.pdf.

Korean Government. (2000). Vision 2025 Taskforce – Korea‘s long term plan for
science and technology development. Retrieved February 16, 2010, from
http://www.inovasyon.org/pdf/Korea.Vision2025.pdf.
Ofcom. (2008). Tomorrow‘s Wireless World. Retrieved 31 January, 2009, from
http://www.ofcom.org.uk/research/technology/overview/randd0708/randd070
8.pdf.
Wikipedia entry on Robots. Retrieved February 14, 2010, from
http://en.wikipedia.org/wiki/Robot.
Wikipedia entry on Robotics. Retrieved February 14, 2010, from
http://en.wikipedia.org/wiki/Robotics.
TNO - Quality of Life. (2008). Robotics for Healthcare - Personalising care and
boosting the quality, access and efficiency of healthcare. Retrieved February
16, 2010, from
http://ec.europa.eu/information_society/activities/health/docs/studies/robotics_
healthcare/robotics-in-healthcare.pdf.

6.6.11 Virtual/Augmented Reality

Similar to60:

- Virtual worlds: mainly refers to entertainment, whereas virtual reality refers
more to utilities.
- Mixed reality: mainly used in entertainment applications where different
kinds of realities are mixed. Augmented reality, on the other hand, refers to
situations where reality is augmented with some other elements.
- Metaverse: the vision of an immersive 3D virtual world that includes aspects
of the physical world's objects, actors, interfaces, and networks which
construct and interact with virtual environments. The concept is not widely
used in academic discourse, but recently a strong consortium has launched it
for further development.

1. History

The first virtual reality simulator, the "Sensorama Simulator", can be traced back to
Morton Heilig, who is believed to have invented the system around 1962. Heilig's
work is outlined in his patent of August 28, 1962 (Heilig, 1962). In it he explains how
the Sensorama Simulator works and notes that part of the usefulness of the
simulator would lie in avoiding the risks that come with hazardous jobs, such as in the
military, or in educational training. Rather than dealing with actual real-life
situations, the system would be useful for simulating the environments that users
wanted to train in. Heilig's Sensorama came in the form of a head-mounted display,
which was a central device in augmented reality systems at the time of its invention.

Ivan Sutherland pioneered what is considered the first virtual and augmented reality
head-mounted system in 1968. Since Sutherland's first paper on the subject in 1965,
the examination of virtual reality has occupied the research field (Blanchard et al.
1990). The terms "virtual reality" and "augmented reality" were introduced around
the turn of the decade, 1980-1990, when the first commercial virtual reality
applications, such as Lanier's VPL DataGlove, were brought onto the market.
Lanier's VPL DataGlove and Azuma et al. (2001) demonstrate that AR technology
has not been limited to head-worn displays but also includes handheld displays,
which may use flat LCD screens. Today handheld-display augmented reality is
already being developed for new smartphones, whose applications are further being
developed for consumer markets61. In addition, augmented reality applications also
cover projection displays which, as the name suggests, project virtual information
onto the augmented object, or project the augmented information into the real-life
context (Zhou et al. 2008).

In recent years, more advanced computer hardware has enabled virtual and
augmented reality applications to become more practically useful and numerous (e.g.
immersive architectural modeling, simulations for vehicle driving/training) (Peng and
Leu 2005; Cruz-Neira 1998).
60 In this meta-vignette we focus on computer-aided virtuality (i.e. we do not discuss issues related to experiences of the virtual that are established via other means).
61 http://pocketnow.com/software-1/augmented-reality-app-appears-on-samsung-omnia-ii

In the future virtual reality is expected to be an integral part of our daily life and to
be used in various ways. The techniques will influence our behaviour, interpersonal
communication and cognition (i.e. virtual genetics). We are expected to migrate more
and more into virtual space, which will have a great impact on economics,
worldview and culture. For example, human rights may be extended into virtual
space, in some cases meaning that virtual identities (i.e. avatars) may be handled
much like the identities we use today in real-life situations (Cline 2005).

2. Application Areas / Examples

Azuma (1997) points to several application areas, which include:

i. Medical: Augmented reality has the potential to help medical personnel
during surgical procedures. The technology is likely to be helpful in the
collection of 3-D datasets through the use of non-invasive sensors, which can
be rendered in real time. This can help to visualize the real patient and the
area that needs surgery. The technology can also be used for medical
training, for instance through virtual instructions to a trainee. Similar
techniques can be used in military aircraft for navigation, flight
information and target environments.

ii. Manufacturing and Repair: Augmented reality could be used in the
manufacture and repair of complex machinery. Rather than using manuals,
3-D instructions may be superimposed on the actual machine
to show step-by-step processes.

iii. Annotation and visualization: Augmented reality has the potential to
annotate objects and environments and thereby provide required
information to the user. The technology can also be used to aid
visualization in poor visibility conditions.

iv. Robot path planning: Virtual reality can be used to control a virtual version
of a robot in the event that control of the actual one fails. This can be
carried out through the manipulation of the virtual version in real time.

v. Entertainment: Augmented or virtual reality can be used to create virtual sets
that can be superimposed and merged with real-life acting and actors to
create a cinematic environment in 3-D (p. 8).

3. Definition and Defining Features

Definition

Virtual Reality

The term virtual reality (VR) has been widely used and defined in various ways by
different researchers. In a widely cited paper published in Communications of the
ACM, Feiner et al. (1992) define virtual reality as systems that substitute the
experience of the physical world with 3D material, such as icons or graphics.

VR originally referred solely to completely immersive virtual reality or virtual
environments. Today the term is also used to describe non-immersive or partially
immersive applications, although the boundaries are becoming obscure (Beier 1999).
Regardless of the terminological comparison, partially immersive VR has many
advantages over fully immersive virtual environments. Azuma (1997) points out
several of these benefits. For example, the virtual objects convey information that is
helpful in performing real-world tasks. In a completely immersive environment the
user is unable to see the world around him while immersed inside a synthetic
environment, whereas partially immersive VR allows the user to observe and to
operate in real-world situations. By supplementing reality with virtual objects, the
user is able to perceive the environment more comprehensively than he could
directly detect with his own senses. Consequently, partial immersion enables the
enhancement of the user's perception of and interaction with the real world.

According to a broad definition, virtual reality makes it possible for humans to
visualize, manipulate and interact with extremely complex data and with computers.
From an interaction point of view, virtual reality has been described as an interface
that involves real-time simulation and interaction through multiple sensory channels
(Cline 2005). The simulation can be of either a real or an imaginary world.

The juxtaposition of the real and the imaginary is what prompts Beardon (1992) to
define virtual reality as a simulation of the imaginary which is presented as the real.
According to Beardon, because of this, virtual reality becomes subjective and may
therefore obscure reality.

Augmented Reality

Azuma et al. (2001) define augmented reality (AR) as systems with three
characteristics:

1.) combines real and virtual objects in a real environment;
2.) runs interactively, and in real time; and
3.) registers (aligns) real and virtual objects with each other (p. 34).

In his earlier definition Azuma (1997) compares AR to virtual environments (VE)
but differentiates the two by stating that where VE technologies completely immerse
a user inside a synthetic environment in which the user is unable to see the real
world, AR allows the user to see the real world with virtual objects superimposed
on it (1997, p. 2).

Such technology may consist of a display device such as a mobile phone, PDA or
head-mounted display that shows the real physical environment on which it overlays
digital information.

Metaverse

The Metaverse is another concept, based on Neal Stephenson's coinage in the
cyberpunk science fiction novel Snow Crash, which envisioned a future broadly
reshaped by virtual and 3D technologies. Stephenson's 1992 vision is an immersive
3D virtual world which includes aspects of the physical world's objects, actors,
interfaces, and networks that construct and interact with virtual environments. The
Metaverse is defined as the convergence of a virtually enhanced physical reality and
a physically persistent virtual space. It is a fusion of both, physical and virtual, while
allowing users to experience it as either. It is called the 3D-enhanced web: a shared
virtual social space that can be represented to the user in either 2D or 3D (Smart et
al. 2007).

Defining Features

Virtual and augmented reality change people's experience of an environment by
either replacing the experience of a real environment with a virtual one or adding
virtual elements to the real environment.

Defining features of virtual and augmented reality are:

- Immersion: virtual and augmented reality applications create environments in
which users are immersed to varying extents. Immersion can be partial,
full-body or mental. Because immersiveness affects the person's whole
perception of the environment, the subjectivity of the experience is
emphasized.
- Simulation: the simulation can concern either a whole environment or a
virtual artefact, and the environments simulated in virtual reality applications
can be either realistic (e.g. simulations for pilot or combat training) or based
on imaginative worlds, as in VR games. In either case the simulation aims to
create an experience that is as realistic as possible.
- Sensory stimulation: virtual reality environments are currently primarily
visual experiences, but the stimulation of multiple senses is an essential feature
of the concept of virtual reality. Some advanced VR environments offer tactile
information through devices like wired gloves, the Polhemus boom arm, and
the omnidirectional treadmill.
- Interactivity and communication: a large part of virtual reality solutions are
created as gaming applications with rich multimodal interaction. Many
applications are also created for enabling communication between people.
Telepresence is a feature often included in virtual reality systems for
enhancing communication between distant locations.
- Augmentation: although AR is often associated with adding objects to real
environments, it can also be used to remove objects. This means that AR has
the ability to hide or remove real-life environments due to its overlaying
characteristic, although, as Azuma (1997) points out, this may be harder, but
not entirely impossible, if done interactively.
- Portability: most AR technology involves little movement by the user,
although, as suggested by Azuma (1997), some applications will have the
capacity to support a user in motion in a large environment.

4. Timeline
The actual development of computer-aided augmented and virtual reality started
around the 1970s, and the utilization of these technologies has since been quite
strong, mainly in professional applications. There is a strong belief that these
technologies will have an even greater impact on human society in the coming years
(Smart et al. 2007; MIT Technology Review 2007).

5. Relation to other Technologies

- Artificial intelligence: virtual reality is related to artificial intelligence in its
cultural dimension, because both use simulation to persuade users to amend
their belief of what is real (Beardon 1992).
- AmI: the affiliation with ambient intelligence is explained by the similar
efforts to support users in daily activities through information-holding devices
integrated into the real-life environment. The development of AmI aims to
create embedded environments in which the technology eventually disappears
into its surroundings, leaving only the user interface perceivable by users.

6. Critical Issues
Azuma (1997) states that registration and sensing errors are two main challenges in
building an effective augmented reality system, and that they currently limit many
virtual and augmented reality applications.

Registration: the problem of registration concerns the proper alignment of elements
of the real world with the virtual objects, so that the illusion of two coexisting worlds
is not compromised. A registration error can be either static or dynamic, depending
on whether the errors occur while the user's viewpoint and the objects remain still or
while they move.
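To make the registration problem concrete, the sketch below (a hypothetical pinhole-camera model with made-up parameters, not drawn from Azuma's systems) computes a static registration error as the pixel offset between where a real anchor point appears in the image and where the virtual overlay is drawn from a slightly wrong position estimate:

```python
def project(point3d, focal_length=800.0, cx=320.0, cy=240.0):
    """Pinhole-camera projection of a 3D point (camera coordinates,
    z pointing forward) onto the image plane, in pixels."""
    x, y, z = point3d
    return (focal_length * x / z + cx, focal_length * y / z + cy)

def static_registration_error(true_point, estimated_point):
    """Static registration error: the pixel distance between the real
    anchor's image position and the virtual object drawn at the
    (slightly wrong) estimated position."""
    tx, ty = project(true_point)
    ex, ey = project(estimated_point)
    return ((tx - ex) ** 2 + (ty - ey) ** 2) ** 0.5

# A 5 mm tracking error at 2 m depth, with these assumed intrinsics.
err = static_registration_error((0.0, 0.0, 2.0), (0.005, 0.0, 2.0))
```

With these assumed parameters even a millimetre-scale tracking error produces a misalignment of a few pixels, which illustrates why registration places such demanding accuracy requirements on the tracking subsystem.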

Resolution: a simulated environment may not offer the same resolution as a real-life
environment (Azuma et al. 2001; Azuma 1997).

Safety: if an AR technology fails during use, e.g. during surgery or in military
action, the safety of the user, or of the patient in the case of surgery, may be
compromised (Azuma et al. 2001; Azuma 1997).

References

Academic publications

Beier, K.-P. 1999. Virtual reality: A short introduction. Virtual Reality Laboratory.
University of Michigan.

Blanchard, C., Burgess, S., Harvill, Y., Lanier, J., Lasko, A., Oberman, M. & Teitel,
M. (1990). Reality built for two: A virtual reality tool. ACM SIGGRAPH
Computer Graphics, 24(2), 35-36.

Cline, M. S. (2005) Power, madness, and immortality: The future of virtual reality.
University Village Press.

Cruz-Neira, C. (1998). Making virtual reality useful: A report on immersive
applications at Iowa State University. Future Generation Computer Systems,
14, 147-155.

Feiner, S., MacIntyre, B. & Seligmann, D. (1992). Knowledge-Based Augmented
Reality. Communications of the ACM.

Peng, X. & Leu, M. C. (2005). Engineering Applications of Virtual Reality. In
Mechanical Engineers' Handbook: Materials and Mechanical Design. Wiley.

Zhou, F., Duh, H. B.-L. & Billinghurst, M. (2008). Trends in Augmented Reality
Tracking, Interaction and Display: A Review of Ten Years of ISMAR. In
ISMAR '08: Proceedings of the 7th IEEE/ACM International Symposium on
Mixed and Augmented Reality.

Website sources
Azuma, R. T., Baillot, Y., Behringer, R., Feiner, S., Julier, S. & MacIntyre, B. (2001)
Recent Advances in Augmented Reality. In IEEE Computer Graphics and
Applications. Retrieved on 14 May, 2010 from
http://www.cs.unc.edu/~azuma/cga2001.pdf

Azuma, R. T. (1997). A Survey of Augmented Reality. Presence: Teleoperators and
Virtual Environments, 6(4), 355-385. Retrieved on 15 May, 2010 from
http://www.cs.unc.edu/~azuma/ARpresence.pdf

Beardon, C. (1992) The Ethics of Virtual Reality. Intelligent Tutoring Media. 3(1),
23-28, 1992. Retrieved on 15 May, 2010 from
http://www.cs.waikato.ac.nz/oldcontent/cbeardon/papers/9201.html

Heilig, M. L. (1962). Sensorama Simulator. Retrieved on 14 May, 2010 from
http://www.mortonheilig.com/SensoramaPatent.pdf

Smart, J., Cascio, J., Paffendorf, J., Bridges, C., Hummel, J., Hursthouse, J. &
Moss, R. (2007). Metaverse Roadmap: Pathways to the 3D Web. A
Cross-Industry Public Foresight Project. Retrieved on 13 June 2010 from
http://metaverseroadmap.org/

Websites

Pocketnow.com. Smartphone Reviews, News, and Video. Retrieved 9.10.2010 from
http://pocketnow.com/software-1/augmented-reality-app-appears-on-samsung-omnia-ii

MIT Technology Review. (2007). Special Reports: 10 Emerging Technologies.
Technology Review, published by MIT, March 12, 2007 edition. Retrieved on
9 July, 2010 from http://www.technologyreview.com/special/emerging/

