
Version 2.0 Page 1 of 161
© Copyright 2012, the Members of the SESERV Consortium
Socio-Economic Services for
European Research Projects (SESERV)
European Seventh Framework Project FP7-2010-ICT-258138-CSA

Deliverable D2.2
Final Report on Economic Future Internet
Coordination Activities


















The SESERV Consortium

University of Zürich, UZH, Switzerland
University of Southampton, IT Innovation Centre, U.K.
Athens University of Economics and Business - Research Center, AUEB-RC, Greece
University of Oxford, UOX, U.K.
Alcatel Lucent Bell Labs, ALBLF, France
Atos Spain SA, Atos, Spain


© Copyright 2012, the Members of the SESERV Consortium

For more information on this document or the SESERV support action, please contact:

Prof. Dr. Burkhard Stiller
Universität Zürich, CSG@IFI
Binzmühlestrasse 14
CH—8050 Zürich
Switzerland

Phone: +41 44 635 4355
Fax: +41 44 635 6809
E-mail: info@seserv.org


Document Control

Title: Final Report on Economic Future Internet Coordination Activities
Type: Public
Editor(s): Costas Kalogiros
E-mail: ckalog@aueb.gr
Author(s): Costas Kalogiros (AUEB-RC), Ioanna Papafili (AUEB-RC), George D.
Stamoulis (AUEB-RC), Costas Courcoubetis (AUEB-RC), George Thanos
(AUEB-RC), Martin Waldburger (UZH), Patrick Poullie (UZH), Burkhard
Stiller (UZH), Daniel Field (AOSAE), Michael Boniface (UoS-ITI)
Doc ID: D2.2-v2.0.doc

AMENDMENT HISTORY

Version | Date | Author | Description/Comments
V0.1 | Jan 25, 2012 | Costas Kalogiros | Template, initial ToC
V0.2 | Feb 6, 2012 | Costas Kalogiros | ETICS whitepaper
V0.3 | Feb 20, 2012 | Costas Kalogiros | Initial version of Introduction, Methodology added
V0.4 | Feb 23, 2012 | Costas Kalogiros | Initial version of Section 4 added
V0.5 | Feb 29, 2012 | Costas Kalogiros, Ioanna Papafili, George Stamoulis, Costas Courcoubetis, George Thanos, Martin Waldburger, Patrick Poullie, Burkhard Stiller | All inputs integrated and reviewed
V1.0 | Feb 29, 2012 | Costas Kalogiros | Final editing and formatting
V1.1 | Jul 05, 2012 | Daniel Field, James Ahtes | Inclusion of detailed OPTIMIS tussle
V1.2 | Jul 10, 2012 | Patrick Poullie, Costas Kalogiros | Inclusion of detailed tussle analysis for ULOOP, C2POWER, ETICS, UNIVERSELF and updated survey of technologies
V1.3 | Jul 17, 2012 | Costas Kalogiros, Ioanna Papafili, Patrick Poullie, Daniel Field | Updates of detailed tussle analysis for ULOOP, C2POWER, ETICS, OPTIMIS, SAIL, PURSUIT, survey of technologies, ITU section added
V1.4 | Jul 30, 2012 | Costas Kalogiros, Michael Boniface | Inclusion of stakeholders’ interests, updated survey of technologies, consolidated view of tussles
V1.5 | Aug 7, 2012 | Michael Boniface, Ioanna Papafili, Costas Kalogiros | FIArch section and cloud functionality taxonomy added, BonFIRE tussle analysis, updated tussle analysis for UNIVERSELF
V1.6 | Aug 20, 2012 | Costas Kalogiros, Ioanna Papafili, George Stamoulis, Costas Courcoubetis, Patrick Poullie, Daniel Field, Michael Boniface, Eric Meyer | First set of reviewers’ comments addressed
V1.7 | Aug 28, 2012 | Costas Kalogiros, Ioanna Papafili, Patrick Poullie, Daniel Field, Didier Bourse | Second set of reviewers’ comments addressed
V1.8 | Aug 31, 2012 | Martin Waldburger, Costas Kalogiros | Added sections for High-Speed Accounting Paper, Executive Summary, Summary and Conclusions
V2.0 | Sep 11, 2012 | Costas Kalogiros, Patrick Poullie, Martin Waldburger, Ioanna Papafili | Document completion and final editing

Legal Notices
The information in this document is subject to change without notice.
The Members of the SESERV Consortium make no warranty of any kind with regard to this document,
including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
The Members of the SESERV Consortium shall not be held liable for errors contained herein or direct,
indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use
of this material.


Table of Contents
1 Executive Summary 8
2 Introduction 11
2.1 Purpose of D2.2 11
2.2 Document Structure 11
3 Methodology 12
3.1 Network Functionalities 14
3.2 Cloud Functionalities 16
3.3 Stakeholders 17
3.4 Tussles 20
4 Economic Priorities for Future Internet 22
4.1 Main Stakeholders of Network and Cloud Functionalities 22
4.1.1 Network 22
4.1.2 Cloud 28
4.2 Main Tussles in Networking and Cloud Functionalities 30
4.3 A Consolidated View of Stakeholders and Tussles 36
4.4 Lessons Learnt 40
5 Survey of Technologies by Challenge 1 Research Projects 47
5.1 ETICS 52
5.2 GEYSERS 53
5.3 UNIVERSELF 54
5.4 SAIL 55
5.5 PURSUIT 55
5.6 MEDIEVAL 56
5.7 ENVISION 57
5.8 C2POWER 58
5.9 ULOOP 60
5.10 OPTIMIS 61
5.11 BONFIRE 61
5.12 Lessons Learnt 63
6 Standardization Activities in ITU 64
6.1 Contribution to Y.3001 67
6.2 New Document Proposal (Y.FNsocioeconomic) 68
6.3 Preparation of the New Document (Y.FNsocioeconomic) 68
6.4 Summary and Assessment 69
6.4.1 Y.FNsocioeconomic 70
6.4.2 Question Description 70
6.4.3 Liaison Statement 71
6.4.4 Next Steps 71
7 Design Principles for the Future Internet Architecture 72
7.1 Motivation 72
7.2 Contribution 72
7.2.1 Exchange of Information Between End-points 73
7.2.2 Sustain the Investment 74
7.3 Summary and Next Steps 75
8 Techno-Socio-Economic Challenges for High-Speed Accounting 77
8.1 Abstract 77
8.2 Introduction and Motivation 77
8.3 Lessons Learnt 79
8.3.1 Lessons from Comparing High-speed Accounting Approaches 79
8.3.2 Lessons from the Data Retention Debate 79
8.3.3 Lessons from the Legal Interception Debate 80
8.3.4 Lessons from the Usage-based Charging Debate 81
8.4 High-speed Accounting Conclusions 81
8.4.1 Conclusions in Relation to Question 1 81
8.4.2 Conclusions in Relation to Question 2 82
8.4.3 Conclusions in Relation to Question 3 82
8.4.4 Conclusions in Relation to Question 4 83
9 Summary and Conclusions 84
10 References 88
11 Abbreviations 91
12 Acknowledgements 95
Appendix A Detailed Tussle Analysis for a Subset of FP7 Research Projects 96
A.1 Detailed Tussle Analysis for ETICS Technologies 96
A.1.1 Introduction to the ETICS System 96
A.1.2 Case Study A: QoS-aware Transmission and Transit Competition 98
A.1.3 Case Study B: Customer SLA Monitoring and Incentives for Backup ASQ Provisioning 102
A.2 Detailed Tussle Analysis for UNIVERSELF Technologies 106
A.3 Detailed Tussle Analysis for SAIL Technologies 110
A.3.1 Use-Case: Content Delivery and Access Control 110
A.3.2 Use-Case: Content Delivery and Content Freshness 112
A.3.3 Use-Case: Content Network Management 113
A.3.4 Use-Case: ICN Content Delivery and Competition with Legacy CDNs 114
A.3.5 Discussion 116
A.4 Detailed Tussle Analysis for PURSUIT Technologies 116
A.4.1 Use-Case: Content Delivery and Name Resolution Provided by Local ISP 116
A.4.2 Use-Case: Content Delivery and Conflicting Optimization Criteria 118
A.4.3 Use-Case: Content Delivery and Imbalance on Peering Link 120
A.4.4 Discussion 122
A.5 Detailed Tussle Analysis for ULOOP Technologies 123
A.5.1 Game Theoretic Analysis of Cooperation Incentives in UCN – A Prisoner’s Dilemma 125
A.5.2 Traffic Management on the ULOOP Gateway 131
A.5.3 Tussle Evolution for Connectivity Re-selling in UCN 132
A.6 Detailed Tussle Analysis for C2POWER Technologies 135
A.6.1 Encryption 135
A.6.2 Preliminary Overhead 140
A.6.3 Connectivity Re-selling 140
A.7 Detailed Tussle Analysis for OPTIMIS Technologies 142
A.7.1 OPTIMIS Features 142
A.7.2 Use-Case: User Controlled QoS Selection 143
A.8 Detailed Tussle Analysis for BONFIRE Technologies 147
A.8.1 Cloud Functionalities and Stakeholders 148
A.8.2 Tussle Analysis 150
Appendix B Interactions with Other Projects 153
Appendix C Related Documents 157
C.1 FIA Chapter 158
C.2 Networking Paper 159
C.3 FI Paper 160
C.4 Recommendation ITU-T Y.3001 161


List of Figures
Figure 1: High-level View of Tussle Analysis Methodology 12
Figure 2: Surveyed Projects and Thematic Areas 14
Figure 3: A Generic Taxonomy of Network Functionalities (from [1]) 15
Figure 4: An Internet Reference Model 15
Figure 5: A Generic Taxonomy of Cloud (IaaS) Functionalities (source: SESERV) 16
Figure 6: A Generic Taxonomy of Internet (Network and Cloud) Functionalities 18
Figure 7: Initial SESERV Taxonomy for Internet Stakeholder Roles 19
Figure 8: Updated SESERV Taxonomy for Internet Stakeholder Roles 20
Figure 9: Tussles Identified and their Respective Functionalities 21
Figure 10: A Cartography of Tussles 31
Figure 11: A High-level View of the Spillovers Amongst Functionalities 37
Figure 12: Timeline of SESERV Interactions with the ITU-T 65
Figure 13: The WP2 Framework 84
Figure 14: A Scenario of Premium Interconnection Services Under the ‘Distributed Pull’ Coordination Model 97
Figure 15: A Scenario for Internet Connectivity Market 99
Figure 16: Candidate Tussle Evolution for QoS-aware Service Composition 100
Figure 17: The Scenario for Internet Connectivity Market Using ASQ Goods 101
Figure 18: A Scenario of SLA Violation Identification Using the Hierarchical Monitoring Approach 103
Figure 19: Candidate Tussle Evolution for ETICS Network Service Delivery 104
Figure 20: High-level Overview of an Operator’s Infrastructure 106
Figure 21: A UNIVERSELF Scenario 107
Figure 22: Candidate Tussle Analysis Evolution for UNIVERSELF 108
Figure 23: Content Delivery in SAIL’s NetInf Architecture – Content Access Management and AAA Functionality 110
Figure 24: Candidate Tussle Evolution for Content Access Control 111
Figure 25: Content Delivery in SAIL’s NetInf Architecture – Cache Management and Content Update 112
Figure 26: Candidate Tussle Evolution for Content Freshness 113
Figure 27: Content Delivery in SAIL’s NetInf Architecture – Content Network Management 113
Figure 28: Candidate Tussle Evolution for Controlling Server Advertisements Tussle Between Two Edge ISPs 114
Figure 29: Content Delivery in SAIL’s NetInf Architecture – Competition with Legacy CDN 115
Figure 30: Candidate Tussle Evolution for Controlling Server Advertisements Between an Edge ISP and a Legacy CDN 116
Figure 31: Content Delivery in a Pub/Sub Architecture – Local RENE 117
Figure 32: Candidate Tussle Evolution for Spam Received by Subscriber S3 118
Figure 33: Content Delivery in a Pub/Sub Architecture – Global RENE 119
Figure 34: Candidate Tussle Evolution for Conflicting Optimization Criteria 120
Figure 35: Content Delivery in a Pub/Sub Architecture – Peering Agreement Between ISP2 and ISP3 120
Figure 36: Candidate Tussle Evolution for Transmission/Routing Due to Imbalance on the Peering Link 121
Figure 37: ULOOP Use Case “Extended Coverage/Offload” [2] 123
Figure 38: Candidate Tussle Evolution for Traffic Forwarding with ULOOP 130
Figure 39: Candidate Tussle Evolution for Connectivity Re-selling with ULOOP 133
Figure 40: Advantages and Drawbacks of the Two Approaches to Preclude Eavesdropping 136
Figure 41: Candidate Tussle Evolution for Traffic Forwarding with C2POWER Technology 139
Figure 42: The OPTIMIS Cloud Broker Use Scenario 144
Figure 43: The OPTIMIS Tussle over IaaS Provider Selection 146
Figure 44: Main Stakeholders of BONFIRE Project 148
Figure 45: Candidate Tussle Analysis Evolution for BonFIRE 151


List of Tables
Table 1: Summary table of main progress regarding the tussle analysis only 13
Table 2: Tussle Groups and Related Functionalities in Taxonomy 21
Table 3: Short Description of Tussles Related to the Transmission Network Functionality 32
Table 4: Short Description of Tussles Related to the Naming/Addressing Network Functionality 33
Table 5: Short Description of Tussle Related to the Traffic Control Network Functionality 33
Table 6: Short Description of Tussles Related to the QoS Network Functionality 34
Table 7: Short Description of Tussles Related to the Network Security Functionality 34
Table 8: Short Description of Tussle Related to the Mobility Networking Functionality 35
Table 9: Short Description of Tussles Related to the Cloud QoS Functionality 36
Table 10: Short Description of Tussle Related to the Cloud Security Functionality 36
Table 11: Aggregates of Outdegree and Indegree per Functionality 38
Table 12: Number of Distinct Tussles Each Stakeholder Appears in per Network Functionality 39
Table 13: Number of Distinct Tussles Each Stakeholder Appears in per Cloud Functionality 40
Table 14: Survey of Technologies Related to Transmission Functionality as Proposed by Selected Network Research Projects 47
Table 15: Survey of Technologies Related to Traffic Control Functionality as Proposed by Selected Network Research Projects 48
Table 16: Survey of Technologies Related to QoS Functionality as Proposed by Selected Network Research Projects 48
Table 17: Survey of Technologies Related to Mobility Functionality as Proposed by Selected Network Research Projects 49
Table 18: Survey of Technologies Related to Security Functionality as Proposed by Selected Network Research Projects 50
Table 19: Survey of Technologies Related to Naming/Addressing Functionality as Proposed by Selected Network Research Projects 50
Table 20: Survey of Technologies Proposed by Selected Cloud Research Projects 51
Table 21: Comparison of Functionalities Focused on by Selected Network Research Projects 63
Table 22: Comparison of Functionalities Focused on by Selected Cloud Research Projects 63
Table 23: Conditions for the Tit-for-Tat Strategy to be Reasonable for a Fixed Number of n+1 Rounds (Benefit vs. Cost) 128



1 Executive Summary
This document is Deliverable D2.2, entitled “Final Report on Economic Future Internet
Coordination Activities” for the SESERV Coordination Action. It provides the FISE (Future
Internet Socio-Economics) community, the Challenge 1 research projects and the EC
(European Commission) with the results of coordination activities for the economic aspects
of the FI (Future Internet) over the full length of the SESERV project, focusing in particular
on the work performed in the second project year.
D2.2’s purpose is to identify, discuss and disseminate key (socio)economic issues related
to FI technologies, which affect the interests of involved stakeholders, their chances of
adoption, as well as their effects on other Internet technologies. The goal is two-fold: (a) to
increase the awareness of technologists of fundamental economic issues, such as
prevalent tussles and the risks of not taking them into account when designing new
technologies, and (b) to demonstrate how those ideas could be applied by learning from
related case studies.
The document first presents the methodology used by SESERV's Work Package 2 in the
second project year, outlining the main differences from the approach followed during the
first project year. In particular, SESERV provided a framework which helps technology
developers and policy makers understand the complex interplay of technology and
economics in the Internet. This framework is composed of a methodology for evaluating
Internet technologies (“tussle analysis”) and a set of taxonomies. The latter include:
a) An extensive taxonomy of Internet functionalities as presented in Sections 3.1, 3.2,
which covers both aspects of how services are being hosted (cloud-related
functionalities) and their actual delivery (network-related functionalities).
b) A generic classification of Internet stakeholders into seven high-level stakeholder
roles, as documented in Section 3.3, where each one is further decomposed into
more detailed instances.
c) Four socio-economic dimensions of the influencing factors on the demand for high-
speed Internet accounting, presented in the paper on the socio-economics of high-
speed accounting (discussed in Section 8.2), providing an assessment framework
for respective technologies.
SESERV collaborated with a broad set of Challenge 1 projects and members of the FISE
community, either directly or in the context of SESERV events. These interactions led to
the identification of economic priorities for the FI by building upon the results of the first
year with respect to the conflicts of interest that may appear amongst major stakeholders
(tussles). Insights gained were consolidated through a survey of technologies for a
number of common Internet functionalities, studying a set of 11 European research projects
covering the areas of networks and cloud services. The outcomes of these bilateral
discussions, wider focus groups and meetings resulted in a set of six recommendations to
research projects, providers and policy makers for successfully redesigning and
configuring Future Internet technologies. These are:
a) Technology makers should understand major stakeholders' interests:
Towards this objective, Section 4.1 provides an overview of the interests of major
stakeholders (including possible conflicts) for all Internet functionalities.
b) Technology makers should allow all actors to express their choices: Section
3.4 gives a list of generic economic challenges, grouped into classes in order to
provide technology makers with guidance when looking for candidate tussles in which

the functionality provided by their technology may be involved, while Section 4.2
provides an extensive list of tussles in the Networking and Cloud computing
research areas. These tussles and tussle groups can help designers understand
how unsatisfied stakeholders could react in their case. Furthermore, Appendix A
documents in detail how tussle analysis has been applied, with particular emphasis on
finding technologies that are compatible with the stakeholders’ interests (or “designed
for tussle”).
c) Technology makers should explore consequences and dependencies on
complementary technologies: Section 4.2 provides a cartography of the tussles
that have been identified, the functionalities that these tussles entail and their
relationships (spillovers). Furthermore, the Appendix A documents how the tussle
analysis can be used for exploring consequences and dependencies on
complementary technologies.
d) Technology makers and Providers should align conflicting interests through
incentive mechanisms: Appendix A provides several examples where tussles
could have been dealt with effectively, had the appropriate economic mechanisms been in
place. Furthermore, Section 7 provides two related seeds for Future Internet design
principles, which have been contributed by SESERV to the FIArch working group;
namely the "Exchange of Information between End-Points" and "Sustain the
Investment".
e) Technology makers should increase transparency: By examining the tussles
cartography, Section 4.3 provides useful insight into critical functionalities that are
missing and whose absence negatively affects other functionalities. In particular, it was found
that if a number of Security-related mechanisms (especially monitoring) were in
place, then this would help tussles in other functionalities to be resolved more
smoothly. Furthermore, Section 8 considers the managerial and technical feasibility
and other socio-economic challenges of high-speed Internet accounting in a world
of increasing volumes of real-time communication.
f) Policy makers should encourage knowledge exchange and joint
commitments: The detailed tussle analysis resulted in suggestions of
candidate technologies that follow the “Design for Tussle” goal, which it was not
always feasible for the projects to include in their architectures. There were several
valid reasons for this, such as lack of expertise in other research projects and
groups on the economic issues on which SESERV focused, lack of resources and
need to focus on the contracted workplan. Given the limited duration of the research
projects and the need for a systematic approach in dealing with the complex
socioeconomic challenges, it is recommended that projects are encouraged to
announce shortcomings of their technology and dependencies on other
technologies in a way that makes possible their continuous evolution by other
entities. To this end, Section 5 provides a survey of technologies proposed by a
carefully selected set of 11 Challenge 1 research projects.
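Recommendation (c) above relies on a cartography of tussles and the spillovers amongst functionalities, aggregated per functionality as outdegree and indegree (cf. Section 4.3, Table 11). As an illustrative aside only, such an aggregation can be sketched in a few lines of code; the functionality names and spillover edges below are hypothetical examples, not SESERV data.

```python
# Illustrative sketch (not SESERV tooling): aggregating outdegree and
# indegree per functionality in a spillover graph. An edge (A, B) means
# that a tussle in functionality A spills over into functionality B.
# The edge list here is a hypothetical example.
from collections import defaultdict

spillovers = [
    ("Security", "QoS"),
    ("Security", "Traffic Control"),
    ("Naming/Addressing", "Transmission"),
    ("Traffic Control", "QoS"),
]

outdegree = defaultdict(int)  # how many functionalities this one affects
indegree = defaultdict(int)   # how many functionalities affect this one
for src, dst in spillovers:
    outdegree[src] += 1
    indegree[dst] += 1

# A functionality with high outdegree influences many others, so resolving
# its tussles (e.g., via monitoring) eases tussles elsewhere.
for f in sorted(set(outdegree) | set(indegree)):
    print(f, outdegree[f], indegree[f])
```

In this hypothetical graph, Security has the highest outdegree, mirroring the finding in recommendation (e) that missing Security-related mechanisms hinder tussle resolution in other functionalities.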
Apart from studying and interacting with research projects and raising their awareness on
the main socio-economic challenges, standardization activities undertaken within the ITU
Study Group 13 (SG13) on “Future Networks Including Mobile and Next Generation
Networks (NGN)” supported further coordination of socio-economic issues. D2.2
summarizes in Section 6 the standardization activities within the ITU-T¹. In particular, the

¹ This particular version of D2.2 is restricted to members of the SESERV consortium and the reviewers’ team due to applicable publication rules of the ITU-T with regard to meetings’ inputs/outputs as well as non-final Recommendation

interactions between the ITU and UZH (on behalf of SESERV) are summarized
and the resulting, multi-sided outcomes are discussed, the most remarkable being the
standardization of tussle analysis as an ITU recommendation. These standardization
activities complement SESERV’s pre-standardization activities in the FIArch (Future
Internet Architecture) Group over the previous year documented in Section 7.
As already mentioned, many socio-economic issues encountered raise questions in
relation to the traceability of Internet traffic and services, since such information provides
facts on network data that can serve as a quantifiable basis to analyze, judge, and even
regulate a considerable number of tussles. The deliverable therefore considers the
economic and technical feasibility of high-speed Internet accounting. The relevant joint
work with a team of European experts in high-speed accounting resulted in the following
set of four recommendations as covered in a whitepaper on high-speed accounting:
a) ISPs should carry out and promote research to study technical feasibility, gains,
and trade-offs of high-speed accounting approaches to a representative
number of specific application cases. “Cases” here refer to varying uses of high-
speed accounting and to varying jurisdictions with different legal frameworks. “High-
speed accounting approaches” here refer to an emphasis on ex ante promising
approaches such as NetFlow and sFlow.
b) Legislators, policy makers, and lobbying organizations are recommended to work
towards internationally harmonized legal frameworks for high-speed
accounting.
c) Legislators, policy makers, and lobbying organizations are recommended to work
towards adoption of a dual legal model for high-speed accounting
regulations, combining instruments of a law and an enactment (as a by-law).
d) ISPs should work towards common technology for implementing high-speed
accounting. In consideration of diverse and potentially diverging incentive sets,
ISPs should coordinate and agree on the respective interfaces, protocols, and/or
data models to follow.
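As a hedged illustration of recommendation (a), the following sketch shows the core idea behind flow-based usage accounting in the style of NetFlow/sFlow exports: per-flow byte counts are aggregated into per-subscriber totals that a charging scheme could consume. All record values and the tariff are invented for illustration; real flow exporters carry many more fields and arrive at far higher rates, which is precisely the high-speed challenge discussed in Section 8.

```python
# Hedged sketch of usage-based accounting over NetFlow-style records.
# Record fields and values are illustrative assumptions, not a real export.
from collections import defaultdict

# (source IP, destination IP, bytes) per exported flow record
flow_records = [
    ("10.0.0.1", "192.0.2.7", 1_200_000),
    ("10.0.0.1", "198.51.100.3", 300_000),
    ("10.0.0.2", "192.0.2.7", 4_500_000),
]

# Aggregate traffic volume per subscriber (keyed by source IP) for charging.
usage = defaultdict(int)
for src, _dst, nbytes in flow_records:
    usage[src] += nbytes

PRICE_PER_MB = 0.01  # hypothetical tariff, for illustration only
for subscriber, nbytes in sorted(usage.items()):
    charge = (nbytes / 1_000_000) * PRICE_PER_MB
    print(subscriber, nbytes, round(charge, 4))
```

The interoperability point of recommendation (d) corresponds to agreeing on the record schema and export interface in such a pipeline, so that ISPs can exchange and compare accounting data.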
Besides the legacy described above, several research projects already benefited from the
interactions during the SESERV Coordination Action’s lifetime. For example, game-
theoretic results together with focus group discussions confirmed the need for the ULOOP
project to design economic mechanisms and assessed the (non) suitability of a particular
mechanism. Similarly, the ETICS and SAIL projects have confirmed receiving interesting
feedback from the tussle analysis that was performed.
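The ULOOP result mentioned above rests on an iterated Prisoner's Dilemma of cooperation incentives (cf. Table 23 and Appendix A.5.1). The sketch below is a hypothetical rendering of such a game, not the project's actual model: the payoff structure (forwarding costs the forwarder c and benefits the peer b) and the parameter values are illustrative assumptions.

```python
# Hypothetical iterated forwarding game between two nodes in a
# user-centric network. Payoff structure and values are assumptions.

def play(strategy_a, strategy_b, rounds, b=3.0, c=1.0):
    """Return total payoffs of two strategies over `rounds` rounds."""
    pay_a = pay_b = 0.0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # decide from the opponent's history
        move_b = strategy_b(hist_a)
        if move_a:        # A forwards: A pays c, B gains b
            pay_a -= c
            pay_b += b
        if move_b:        # B forwards: B pays c, A gains b
            pay_b -= c
            pay_a += b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return pay_a, pay_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return True if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return False

# Mutual tit-for-tat sustains cooperation whenever b > c ...
coop, _ = play(tit_for_tat, tit_for_tat, rounds=10)
# ... while a defector exploits tit-for-tat only in the first round.
tft_pay, defect_pay = play(tit_for_tat, always_defect, rounds=10)
print(coop, tft_pay, defect_pay)
```

Comparing such payoffs across strategies is the kind of benefit-versus-cost condition summarized in Table 23, and it motivates economic mechanisms (reputation, reciprocity) that make sustained cooperation the rational choice.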
In conclusion and based on the feedback SESERV received from other project
representatives and members of the FISE community, as in the case of the Athens
workshop, SESERV believes it has managed to identify, discuss and increase the
awareness of FI stakeholders of key economic issues related to FI technologies. It is
worth mentioning that the collaboration of SESERV members with some projects and
institutions will continue after the end of the SESERV project’s lifetime. For example,
the combination of tussle analysis, MACTOR and UBM methodologies will be explored
together with members of the UNIVERSELF project for identifying feasible future value
networks. Similarly, SESERV members will continue providing their expertise to the ITU
and FIArch Group.

documents. The public version of D2.2 contains the results of the coordination activities along with a publishable
digest of the interactions with ITU-T, while this restricted version at hand includes all contents from the public version
together with the complete output of ITU interactions.

2 Introduction
This section lays down the logic, purpose and structure of this report.
2.1 Purpose of D2.2
This document provides the FISE (Future Internet Socio-Economics) community, the
Challenge 1 research projects and the EC (European Commission) with the results of co-
ordination activities for the economic aspects of the FI (Future Internet) over the full length
of the SESERV project, focusing in particular on the work performed in the period
September 2011 to August 2012
2
and on its integration with the relevant work performed
overall within the two years of SESERV.
The purpose is to identify, discuss and disseminate key (socio)economic issues related to
FI (Future Internet) technologies, which affect the interests of their stakeholders, their
chances of adoption, as well as their effects on other Internet technologies (either
substitutes or complements). The goal is two-fold: (a) to increase the awareness of
technologists of prevalent tussles and the risks of not taking them into account when
designing new technologies, and (b) to demonstrate how those ideas could be applied by
learning from related case studies.
2.2 Document Structure
This document is Deliverable D2.2, entitled “Final Report on Economic Future Internet
Coordination Activities” for the ICT SESERV Project. After the introduction in Section 2,
Section 3 briefly discusses the methodology used by WP2 in the second year of the
project and the main differences from the approach followed during the first half of the
project’s lifetime.
Section 4 focuses on the economic priorities for the Future Internet by building upon the
results of the first year with respect to the conflicts of interest that can appear amongst the
major stakeholders and candidate selfish uses of available technologies for meeting their
goals (tussles). Section 5 surveys technologies for a number of Internet
functionalities by studying a set of 11 research projects covering the areas of networks and
cloud services. Then, Section 6 summarises the outcomes of the interactions with ITU-T
on the socioeconomics of future networks in general, while Section 7 gives an overview of
the SESERV contribution in the FIArch (Future Internet Architecture) Group activities over
the last year, i.e., 2011-2012, and especially towards the identification and
specification of the Design Principles that will govern the FI architecture and protocols.
Section 8 provides the motivation for High-speed accounting and lessons learned from a
socio-economic point-of-view. Finally, Section 9 concludes with the main findings of
SESERV on the economic aspects of Future Internet.

² The period from September 2010 to August 2011 was documented in the previous deliverable D2.1, which reported on
the first year of the project. D2.2 (this document) updates the conclusions of D2.1, provides new content arising from
the activities of the project’s final year and integrates the whole WP2 work.

3 Methodology
This deliverable is based on a combination of two approaches:
a) Detailed tussle analysis³. This work instrument refers to performing the following steps
of the SESERV tussle analysis methodology (see Figure 1 for a high-level description or [13]
for more details) for a subset of Challenge 1 research projects:
• Identify main Internet and cloud computing functionalities (see Sections 3.1, 3.2 for
suggested taxonomies) and potential bottlenecks (critical control points and/or
scarce resources).
• Identify major stakeholders for each networking functionality and cloud computing
functionality and their interests (step 1).
• Identify tussles (step 2) and provide scenarios of their evolution by examining
whether technologies designed by those research projects meet major
stakeholders’ interests (step 3).

Figure 1: High-level View of Tussle Analysis Methodology
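The three steps above can be pictured as a small data-model walkthrough. The following sketch is purely illustrative: the classes, the example stakeholders and the "shared contested resource" rule for flagging a tussle are our own simplifying assumptions, not part of the SESERV methodology specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stakeholder:
    role: str
    interests: frozenset  # step 1: stakeholders and their interests

@dataclass(frozen=True)
class Tussle:
    functionality: str
    parties: tuple          # stakeholder roles in conflict
    contested_resource: str # the bottleneck they fight over

def identify_tussles(functionality, stakeholders, contested_resources):
    """Step 2 (toy rule): flag a tussle whenever more than one
    stakeholder claims an interest in the same scarce resource or
    critical control point of the given functionality."""
    tussles = []
    for resource in contested_resources:
        claimants = [s for s in stakeholders if resource in s.interests]
        if len(claimants) > 1:
            tussles.append(Tussle(functionality,
                                  tuple(s.role for s in claimants),
                                  resource))
    return tussles

# Step 3 would then build evolution scenarios for each tussle found;
# here we only run steps 1-2 on a toy "Transmission" example.
edge_isp = Stakeholder("Edge ISP", frozenset({"transit fees", "user QoE"}))
transit_isp = Stakeholder("Transit ISP", frozenset({"transit fees"}))
end_user = Stakeholder("End User", frozenset({"user QoE"}))

found = identify_tussles("Transmission",
                         [edge_isp, transit_isp, end_user],
                         ["transit fees"])
print(found[0].parties)  # ('Edge ISP', 'Transit ISP')
```

In practice the identification of interests and conflicts is a qualitative exercise carried out with project representatives; the sketch only shows how its outputs can be recorded in a structured form.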
This is a top-down approach, starting from generic functionalities and studying related
tussles, while the approach in D2.1 was more of a bottom-up one: starting from tussles and
then consolidating them. This method was applied by interacting with project
representatives and other participants of SESERV events (such as scientific workshops,
Future Internet Assembly sessions, etc.). Notably, the SESERV Workshop “The interplay
of economics and technology for the Future Internet” in Athens (Greece) on January 31st of

³ For each project, multiple iterations of the tussle analysis methodology were performed for a set of representative case studies.

2012, hosted three focus groups where a broad range of participants expressed their
concerns and possible reactions to technologies proposed by the projects being
analysed. The subsequent discussion and feedback was built upon by the project and
enhanced through a meeting of the focus groups during the FIA (Future Internet
Assembly) conference in Aalborg in May 2012 and through the SESERV workshop in
Brussels, in June 2012. Please refer to Appendix B for representative bilateral discussions
with members from other projects. The discussions and feedback have shaped the
economic priorities described in this report. The methodology followed is discussed in
greater detail in D1.5 [15].
b) Survey of technologies. The proposed technologies of a representative set of
Challenge 1 research projects were studied and compared for each of the Internet
and cloud functionalities in the taxonomies of Sections 3.1 and 3.2. The idea is to create a
map of new technologies and architectures, which will be used for providing a high-level
view of current research activities and identifying critical functionalities that, if
carefully redesigned, could have a positive impact on the Future Internet.
The following Table summarizes the work accomplished during the lifetime of the SESERV
coordination action, regarding the tussle analysis only. This information is broken down
into two phases: September 2010 – August 2011 (the first year of SESERV) and
September 2011 – August 2012.
Table 1: Summary of main progress regarding the tussle analysis only

| Main results in period September 2010 – August 2011 | Main results in period September 2011 – August 2012 |
|---|---|
| Preliminary tussle analysis for 16 research projects | Detailed tussle analysis for 8 research projects (see Figure 2) |
| – | Interrelations amongst 7 generic networking functionalities and those amongst 4 cloud functionalities |
| 7 meta-stakeholder roles for the Future Internet were identified | Small-scale updates to the FI stakeholders taxonomy were required |
| 7 tussle groups were identified | 27 tussles were examined and mapped to 8 functionalities (6 network, 2 cloud) |
| – | Survey of technologies per functionality of 11 projects (9 projects related to networking and 2 on cloud computing; see Figure 2 for more information) |

The projects we interacted with when performing the detailed tussle analysis and
technology survey cover several thematic areas, as shown in Figure 2. The focus was
placed on Objective 1 projects, to which the SESERV coordination action (highlighted with a
red circle) also belongs. Besides the 9 network projects, two cloud-related projects
(OPTIMIS, BonFIRE) were also included in our analysis in order to gain a more complete
picture of the Future Internet. Projects to which the detailed tussle analysis has been applied
are marked with a red star.


Figure 2: Surveyed Projects and Thematic Areas
3.1 Network Functionalities
Thi-Mai-Trang [1] has suggested a generic taxonomy of network functionalities, as shown
in Figure 3. While each of these functionalities can be decomposed further into
sub-functionalities, they can also be seen as components for composing more complex
network services. More specifically:
• The Transmission functionality was suggested to include sending, receiving,
forwarding, routing, switching, storing, and processing of data packets or, in
general, PDUs (Protocol Data Units).
• The Traffic control functionality includes flow and congestion control (for tuning the
transmission rate parameters), as well as error control for dealing with lost and
out-of-order PDUs.
• The QoS (Quality of Service) functionality would cover queuing algorithms,
admission control techniques and mechanisms for managing QoS-aware paths
(e.g., advertisement of infrastructure availability and current workload, setup of Diff-
Serv classes, etc.).
• The Mobility functionality would include protocols for handover and location
management.
• The Security functionality would cover protocols for authentication, authorization,
accounting, monitoring, encryption, redundancy, data integrity, and key distribution.
• The Naming/Addressing functionality would include naming schemes and name
resolution techniques.


Figure 3: A Generic Taxonomy of Network Functionalities (from [1])
Note that this taxonomy is different from the layered view of the Internet, as shown in
Figure 4. In the OSI reference model the Internet can be seen as a set of layers (each one
composed of related protocols), where each layer serves the layer above it and is served
by the layer below it. For example, Addressing, which is considered a separate functionality
under the taxonomy in [1], can be implemented at several layers of the OSI model (e.g.,
Data link, Network, Session and Application).


Figure 4: An Internet Reference Model
Sources: Left figure from http://en.wikipedia.org/wiki/OSI_model, right figure adapted from H. Schulzrinne⁴.

⁴ Available online at www.cs.columbia.edu/~coms6181/slides/1/internet.ppt

3.2 Cloud Functionalities
Today, there are three generally agreed-upon classes in the cloud services stack, which
classify cloud modalities and identify common stakeholders and
concerns in each classification: Infrastructure as a Service (IaaS), Platform as a Service
(PaaS) and Software as a Service (SaaS).
Infrastructure as a service (IaaS): the provision of ‘raw’ machines (servers, storage and
other devices) on which the service consumer installs their own software (usually as virtual
machine images). The service is billed on a utility computing basis according to the
amount of resources consumed.
IaaS resources are pooled - a pool can be an internal cloud, owned by an enterprise, or a
public cloud owned by a service provider. IaaS has attracted the interest of vast numbers
of IT (Information Technology) staff, developers, and end-users, with a promise of
compute and storage on-demand, freed from the encumbering shackles of hardware and
datacenters. IaaS has often been compared to managed hosting services, and the
comparison is an apt one – they both represent multitenant approaches to compute and
storage resources. However, most conventional managed hosting services share a
number of characteristics with conventional enterprise IT – they are designed to handle
static and continuous IT loads rather than bursty and dynamic loads. The ability of cloud
services to handle dynamic IT loads is their true differentiator from both conventional
internal IT and managed hosting.

Figure 5: A Generic Taxonomy of Cloud (IaaS) Functionalities (source: SESERV)
Figure 5 provides a generic taxonomy of cloud (IaaS) functionalities. These functionalities
are the following:
a) Virtualization, which includes image creation, image deactivation, image portability (to
different machines), etc.
b) Execution, which includes low-level tasks such as Task scheduling (at the CPU level),
Task execution (at the CPU level), memory allocation and similar aspects related to
Operating Systems.
Seventh Framework CSA No. 258138 D2.2 Final Report on Economic FI Coordination
Public

Version 2.0 Page 17 of 161
© Copyright 2012, the Members of the SESERV Consortium

c) QoS, which includes advertisement of site availability, site selection, machine selection,
machine reservations, admission control (for performance reasons), cost allocation, etc.
d) Security that includes monitoring, accounting, SLA management, admission control (for
security reasons), etc.
Platform as a service (PaaS): the provision of a development platform and/or environment
providing services and storage, hosted in the cloud. Whilst there are many different kinds
of PaaS, to many it is synonymous with the most prevalent kind: development platforms as
a service (dPaaS). dPaaS aims to be a developer’s friend – the “platform” is a
development platform writ large. The idea is simple, even if the execution is complex:
multiple applications share a single development platform and common services, including
authentication, authorization, and billing. PaaS developers build web applications without
installing any tools on their computer and deploy those applications without needing
to know or care about the complexity of buying and managing the underlying hardware
and software layers. A PaaS is built on IaaS and uses multi-tenanted development
tools and a multi-tenanted deployment architecture. A good example of PaaS is Facebook – a venue where
multiple applications can share resources and user information, subject to tight controls.
Software as a service (SaaS): the provision of a pre-defined application as a service over
the Internet. The client/server paradigm is both broken and preserved as the server moves
into an Internet datacenter, under the aegis of a SaaS provider, while the client stays on
the desktop but is seen through the prism of a web browser. SaaS providers become 75%
software developers and 25% web hosting providers as they merge the idea of offering
software to enterprises into a subscription-based over-the-Internet model. SaaS is still in
its infancy. Some SaaS packages are wildly successful (e.g., Salesforce.com, which
focuses on CRM service provision), but false starts such as the idea of desktop productivity
software over the Internet have drained resources and delayed early adoption. At this
stage, success depends on bundling SaaS with valuable expertise (e.g. in the CRM
arena), or on choosing applications that are especially well-suited to delivery as SaaS in
the cloud (e.g. digital media rendering services). The attraction of SaaS remains strong as
the economic model is friendlier and more rational for both providers and customers, as
opposed to ‘white box’ software distribution.
Given that PaaS and SaaS follow a rationale similar to that of IaaS, we can assume that they share
many functionalities. The main differences relate to the actual technology component
that is being virtualised, executed or secured.
By combining the taxonomy of Network functionalities (Figure 3) with that for the Cloud
functionalities (Figure 5) we can get a generic classification of Internet functionalities, as
shown in Figure 6.
3.3 Stakeholders
Stakeholders are entities supervising or making decisions that affect how the Internet
ecosystem operates and evolves. It is common for the same entity to play
multiple roles; for example, ISPs (Internet Service Providers) offer connectivity services but
at the same time can provide entertainment content services. In D2.1 an initial taxonomy
of Internet stakeholder roles was formed by studying a number of research projects
and identifying roles missing from previous attempts.
Figure 7 presents the outcome of this analysis. Connectivity Providers refer to the entities
responsible for the delivery of traffic from its source to the ultimate destination. This traffic

may involve Users consuming services in order to meet their business and personal needs
or Information Providers that offer service applications to address other, non-networking
needs. Both Connectivity and Information providers may depend on Infrastructure
Providers for leasing the necessary components (computational, network and storage
resources). Content Owners produce content items such as movies. Technology Makers
make available Internet protocols, software and hardware that seed new needs and
services. Last but not least, Policy Makers supervise the operation of the Internet and
intervene when necessary.

Figure 6: A Generic Taxonomy of Internet (Network and Cloud) Functionalities
In our interactions with the FISE community and Challenge 1 research projects, as these
are documented in D1.3, D1.4 and D1.5, we had the opportunity to present this taxonomy,
encourage them to identify roles that may have been excluded from their analysis and
provide us with feedback on the taxonomy itself.
There were several cases where the distinction between stakeholders and stakeholder
roles was considered confusing. For example, Last Mile Providers that operate access
networks based on copper, fiber or wireless technologies were often confused with Edge
ISPs, who provide Internet connectivity services to Users and Information Providers. This
distinction was mandated by regulation, under which the former (usually the incumbent
operators) are required to supply Edge ISPs with wholesale access to their copper, fiber or
wireless network infrastructure.


Figure 7: Initial SESERV Taxonomy for Internet Stakeholder Roles
Alissa Cooper, one of the invited speakers at the 2nd SESERV workshop titled “The
interplay of economics and technology for the Future Internet”, suggested splitting the
stakeholder role Regulators into Legislators and Regulators. The reason was that laws are
often introduced and enforced by different entities.
Another suggestion, received from Man-Sze Li during the FISE workshop “How Disruptive
Technologies Influence the FI Business Ecosystem” was renaming the Consumers
stakeholder role (referring to those who buy Internet connectivity, or other Internet
services, on behalf of many single end-users) to Customers.
Furthermore, GEYSERS project representatives at the workshop “Cloud Networking –
Technical and Business Challenges” suggested to broaden the Connectivity Providers role
by including Virtual ISPs. The rationale is that future virtualisation technologies will allow
the emergence of network operators, who will be able to apply their policies without the need
to deploy their own network infrastructure.
Similar additions were suggested during our interactions with the OPTIMIS and BonFIRE
projects. More specifically, it was suggested to expand the Cloud Operator role with
the following sub-roles: a) the IaaS hosts, who own the infrastructure being made available, and b)
the Virtual Cloud Operators / IaaS providers, who provide a richer interface to cloud users
(allowing added-value services to be offered). Today it is normal for the IaaS provider to
also host the cloud resources, but multi-hosting models are being developed in which at
least some hosts delegate the provision of customer-facing cloud management services to
a separate entity. Furthermore, it was suggested to add two contemporary Information
Provider roles: Platform as a Service Providers (PaaS) and Software as a Service

Providers (SaaS). The former provide an environment suitable for general developers to
build web applications without deep domain expertise of back-end server and front-end
client development or website administration⁵, while the latter offer software to enterprises
in a subscription-based over-the-Internet model.
Taking those suggestions into consideration, we updated the Future Internet stakeholder
taxonomy as shown in Figure 8 (changes are marked with a star).

Figure 8: Updated SESERV Taxonomy for Internet Stakeholder Roles
3.4 Tussles
Performing the initial tussle analysis of 16 research projects in D2.1 [4], SESERV identified
a number of tussles, which were further grouped into 7 tussle groups. Table 2 presents
those tussle groups, their association with the functionalities and, for each, a tussle of interest to several
research projects.
Performing the detailed tussle analysis with other research project representatives and
experts has resulted in 27 tussle instances. Figure 9 gives an overview of those tussles
and the associated Internet functionality. The major stakeholders of those functionalities
and their interests are described, and more information about the identified tussles is given
in the next section.

⁵ A good example of PaaS is Facebook – a platform where multiple applications can share resources and user
information, subject to tight controls.

Table 2: Tussle Groups and Related Functionalities in Taxonomy

| Tussle group | Related Functionality | Popular tussle |
|---|---|---|
| Network Security | Security | Stop DDoS attacks near the source ISP |
| Interconnection Agreements | Transmission | Imbalance traffic ratios using local content |
| Allocation of scarce resources | Traffic control, Execution | Capacity of a common link/frequency |
| Responsibility for agreement violation | Network security (monitoring), Cloud security | Effort in serving a customer’s request |
| Routing / provisioning Service Requests (selecting a provider to fulfill a customer request) | Naming and Addressing, Mobility, Cloud QoS | Brokers serve customer requests using different optimization criteria |
| Controlling content/service delivery (referring to possible anti-competitive tactics) | Naming and Addressing | Filter user requests for service (walled garden) |
| Controlling access to sensitive data | Network security (encryption, etc.), Cloud security | Reusing data of previous sessions |


Figure 9: Tussles Identified and their Respective Functionalities

4 Economic Priorities for Future Internet
SESERV collaborated with a broad set of Challenge 1 projects and members of the FISE
community, either directly or in the context of SESERV events, FIA sessions co-organized
by SESERV and other popular research workshops. As a result, the most popular
socio-economic challenges have been identified; this section presents a consolidated overview and
classification of Future Internet stakeholders, along with an analysis of how
such stakeholders interact by exploiting Future Internet technologies to advance their
economic interests and influence economic outcomes.
4.1 Main Stakeholders of Network and Cloud Functionalities
In this section we provide a list of major stakeholders and their interests, for the Internet
functionalities described in Section 3.1 and the Cloud ecosystem, in general.
4.1.1 Network
This section focuses on the interests of major stakeholders in the network marketplace.
These interests are discussed separately for each of the network functionalities; namely:
Transmission, Traffic Control, QoS, Mobility, Security and Naming/Addressing.
4.1.1.1 Transmission
ISPs want to achieve the right balance between traffic demands and supply costs. In the
current best-effort Internet, ISPs try to improve the quality of the services offered to their
customers by performing traffic engineering. This can be achieved, for example, when
ISPs interconnect at more than one place, by configuring BGP so that some traffic is routed
through a less congested path. Quite often, two neighbouring ISPs have different
optimization criteria when performing routing, leading to frequent advertisements of
new/withdrawn routes. These updates cause instability and not only increase the operating
costs of ISPs but can also degrade user experience.
Similarly, one of the major goals of Edge-ISPs offering mobile services, where recent
developments in network management applications allow the flexible utilisation of
resources, is to configure the coverage range of access points for offering good signal
quality and service reliability at a reasonable cost. It should be noted here that such ISPs
are usually organised into multiple administrative domains; for example a separate
department for wireless and wireline services, or even for 3G and 4G networks. The
reason is that new technologies have to coexist with older ones before the critical mass of
users has been achieved and their management is distributed. In this case, conflicts of
interest can appear amongst different departments even though these try to accomplish
the same high-level management goal, e.g., reducing operation expenditures by a certain
factor.
Transit ISPs worry about their costs and sign bilateral peering agreements, which allow
them to exchange traffic without compensating each other. This minimizes their traffic-
related costs, but the fact that no service differentiation can take place (see QoS Section
4.1.3) turns transit services into a commodity and increases the competition.
Furthermore, Edge ISPs are also interested in keeping their transit costs as low as
possible and they can exercise several strategies in order to do so. These strategies
decrease demand for transit services and, of course, pose a significant risk to the business

model of Transit ISPs. For example, in the third focus group of the Athens Workshop the
case of “donut peering” was mentioned, where traffic between the customers of two
Edge ISPs is exchanged free of charge. To do so, they configure the BGP (Border
Gateway Protocol) inter-domain routing protocol so that peering links are preferred over
transit links. We should mention that such “peering” agreements rarely take place between
an Edge and a Transit ISP.
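In practice, the preference for peering over transit is typically realised by assigning a higher BGP LOCAL_PREF to routes learned over peering sessions, since LOCAL_PREF is evaluated before AS-path length in BGP best-path selection. The following is a minimal sketch of that decision logic; the route records, preference values and AS numbers are invented for illustration and do not come from any real configuration:

```python
# Toy BGP-style best-path selection: higher LOCAL_PREF wins first,
# shorter AS path breaks ties. All values below are made up.

def best_path(routes):
    """Pick the preferred route for a prefix, BGP-style."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

routes_to_prefix = [
    # Learned from a transit provider: default preference.
    {"via": "transit", "local_pref": 100, "as_path": [3356, 64500]},
    # Learned from a settlement-free peer: bumped preference, so the
    # peering link is used even though the AS path is no shorter.
    {"via": "peer", "local_pref": 200, "as_path": [64499, 64500]},
]

chosen = best_path(routes_to_prefix)
print(chosen["via"])  # the peering route is selected
```

Because LOCAL_PREF dominates the decision, the Edge ISP steers all traffic for which a peering route exists away from its paid transit links, which is exactly the cost-saving behaviour described above.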
At the same time, as was mentioned in the final SESERV Workshop, many Gaming
Providers (members of the Information Providers stakeholder role) want to offer gamers a
neutral playing field regardless of the location of the gaming server. To do so,
they employ a bonus-malus system for balancing users’ QoE (Quality of Experience).
After benchmarking the network response times (one-way delay) of each gamer, they route
the traffic of high-delay users over faster paths than those used for low-delay users.
Interestingly, this behaviour actually cancels out the traffic engineering efforts of
ISPs and causes even more instability in the Internet routing “sub-system”. Similar
behaviours can arise from other instances of the Information Providers stakeholder role.
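One way to picture such a bonus-malus scheme is the following simplified sketch; the target delay, path latencies and player measurements are invented for illustration, and the real systems discussed at the workshop were not described in this detail:

```python
# Even out total delay (player access delay + chosen path delay) by
# giving a faster path to players with high access delay. Numbers
# below are purely illustrative.

FAST_PATH_MS = 10    # premium, low-latency route
NORMAL_PATH_MS = 40  # default route

def assign_paths(access_delays_ms, target_ms=60):
    """Give the fast path to players whose access delay alone would
    push them past the target end-to-end delay on the normal path."""
    plan = {}
    for player, delay in access_delays_ms.items():
        plan[player] = "fast" if delay + NORMAL_PATH_MS > target_ms else "normal"
    return plan

players = {"alice": 12, "bob": 35, "carol": 55}
print(assign_paths(players))
# alice stays on the normal path; bob and carol get the fast path
```

The point of the sketch is that the provider's assignment depends only on its own QoE target, not on the ISPs' traffic engineering, which is why the two optimisation loops can work against each other.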
End Users are interested in getting uninterrupted network service and the QoE they are
accustomed to, or they have paid for. More experienced users ask for the ability to
configure their device based on their own preferences. For example, in low activity periods
they may desire to save battery in their mobile phone by selecting an energy-efficient radio
interface (like GPRS instead of UMTS). Furthermore, End Users may be willing to act as
connectivity providers themselves, but their decision to do so can depend on sociological
aspects of their character (e.g., altruism), the legal implications or cost constraints (e.g.,
battery exhaustion, connectivity data plan, etc.). Another interest of an economic nature is the
fee charged by an Edge ISP for connectivity service, especially when it is
considered unjustifiably high (as is the case with roaming users).
Regulators are interested in a highly competitive market, where prices are not distorted
(especially if too high). If necessary, they intervene by imposing ex-ante or ex-post
mandates (e.g., functional separation of wholesale and retail units for easier auditing of
possible anti-competitive tactics).
4.1.1.2 Traffic Control
Users are interested in getting good QoE that – depending on application – can be
translated into several low-level requirements (e.g., high availability, high throughput, low
delay, low jitter, etc.).
In the case of wireless access networks, a mobile user’s throughput is directly related to their
distance from the access point and other properties of their location, such as interference
from nearby electronic devices. This means that users serviced by the same base station
can influence each other. Edge ISPs try to deal with such performance issues by
configuring the cell radius as a way of performing load balancing (see more in Section
4.1.1.4 about network mobility).
The experience of fixed users, on the other hand, depends on the presence of bottleneck
links on the path used by their traffic. Such users can be further decomposed into heavy
users, who are interested in maximizing the total exchanged bandwidth over a long time
period (e.g. a day), and interactive users who actually utilize their Internet connection for
short time-scales. This means that the former users care about average throughput
maximization, while the latter have a high valuation for instant throughput maximization.
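As a toy illustration of this distinction (all numbers invented), compare a heavy user who downloads steadily all day with an interactive user who is active only in short bursts:

```python
# Heavy vs interactive user: same order of daily volume, very
# different relationship to instantaneous rate. Numbers invented.

DAY_S = 24 * 3600

heavy_rate_mbps = 2  # sustained, all day long
heavy_volume_gb = heavy_rate_mbps * DAY_S / 8 / 1000  # 21.6 GB

burst_rate_mbps = 50      # rate the interactive user sees while active
active_s = 30 * 60        # only 30 minutes of activity per day
interactive_volume_gb = burst_rate_mbps * active_s / 8 / 1000  # 11.25 GB
interactive_avg_mbps = burst_rate_mbps * active_s / DAY_S      # ~1.04 Mbit/s

# The heavy user's satisfaction tracks the 21.6 GB daily total; the
# interactive user's experience is governed by the 50 Mbit/s burst
# rate, even though their 24-hour average is only about 1 Mbit/s.
print(round(heavy_volume_gb, 1),
      round(interactive_volume_gb, 2),
      round(interactive_avg_mbps, 2))  # 21.6 11.25 1.04
```

This is why throttling policies aimed at long-term volume mainly hit heavy users, while congestion at short time-scales mainly hurts interactive ones.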

The interest of Edge ISPs is to balance user satisfaction with the cost of operating
the network. Given that continuous network upgrades are costly for network infrastructure
owners, Edge ISPs have introduced middleboxes for performing traffic control (such as
bandwidth throttling) when they find it necessary, e.g., for users of popular file-sharing
applications who have exceeded a certain threshold.
Regulators and legislators, as previously mentioned, are interested in
obtaining/preserving a highly competitive Internet marketplace. Interestingly, based
on market data collected by Alissa Cooper and presented in the Athens Workshop⁶,
competition in the UK’s retail market for Internet connectivity was not effective in driving the
majority of Edge ISPs to abandon such strategies. Given the high publicity of the network
neutrality debate over the last years, it may be the case that the majority of users are
interactive ones and were not affected by the traffic control policies of their ISP.
To make things even more complicated, not all researchers share the same opinion in the
network neutrality debate. Prof. Robin Mason presented a simple economic model⁷ in the
Athens Workshop where, depending on the exact value of a single model parameter, both
sides can be shown to be beneficial to society.
Of course, regulators and legislators do not ignore the impact their decisions would have
on Information Providers, who fear that the absence of network neutrality will harm creativity in
the Future Internet. Their rationale is that Edge ISPs will be in a position to perform price
discrimination based on the type of services offered, which will pose significant obstacles
for new Information Providers.
4.1.1.3 Network QoS
It is not unusual for ISPs to operate a multi-vendor and multi-technology infrastructure. In
that case, several problems need to be tackled in the context of service deployment, so as
to roll out new services and/or accommodate new traffic. Thus, even though autonomic
network management can bring cost-savings to operators, keeping operating expenditures
(OPEX) as low as possible can still be very challenging. Besides, ISPs need to have the
ability to control, manage and intervene in the operations by having the necessary
information in a timely-manner in order to deal with exceptions, change policies and/or
impose new constraints.
Furthermore, two Edge Internet Service Providers that compete at the regional market
could find it beneficial to cooperate. For example, they could charge higher prices for (or
lower their transit costs when) providing premium-quality connectivity services between
their customers. The problem is that such incentives would be hardly maintained for long
time periods due to potential business stealing effects. This is especially true because
ISPs are rarely symmetric in terms of customers, content/service providers attached,
network size, etc. Furthermore, as discussed in the Athens workshop, large ISPs are
sensitive to their market reputation and try to identify which technical parameters third
parties (such as the press) or technically savvy users are monitoring, in order to improve
their scores. In addition, all types of ISP have the incentive not to announce sensitive
information such as network topology and dimensioning (including the backup paths).
Furthermore, they tend to keep failover capacity low to avoid unused and therefore
unbilled capacity.

⁶ http://www.seserv.org/athens-ws-1/webcasts#cooper
⁷ http://www.seserv.org/athens-ws-1/webcasts#mason
Information Providers such as Content Providers have the incentive to differentiate
themselves in terms of quality, prices, etc., in order to attract more customers. When QoS
is important, such providers usually find it beneficial to deploy caches nearer to
customers, but they rarely find it cost-effective to select more than one ISP for hosting
their infrastructure during the same time period (or to buy services from a Content
Delivery Network). Furthermore, one of the findings of the Athens workshop focus groups
was that they are afraid of being locked in to a particular ISP, which means that they prefer
to go to neutral infrastructure providers (such as Network Exchange Points).
A recent Eurobarometer survey [16] reported that 14% of ISP customers would be
interested in paying more for a faster Internet connection, and most of them would be
willing to pay up to 15% more. User representatives at the Athens workshop appeared to
be interested in premium-quality connectivity services for certain types of applications
(e.g., gaming), but one of their fears is whether they will get what they have paid for.
Furthermore, users would switch to another ISP offering better quality for the same price;
thus, quality matters even for customers willing to pay only for best-effort connectivity
services.
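A back-of-the-envelope reading of the survey figures above illustrates how limited the revenue upside is. The assumption that every interested subscriber pays the full premium (with all other revenue unchanged) is ours; the result is an upper bound, not a forecast.

```python
# Upper bound on the retail revenue uplift implied by the survey:
# 14% of customers willing to pay up to 15% more.
def premium_revenue_uplift(share_interested: float, max_premium: float) -> float:
    """Relative revenue increase if every interested customer
    pays the full premium (all other revenue unchanged)."""
    return share_interested * max_premium

uplift = premium_revenue_uplift(0.14, 0.15)
print(f"Maximum relative revenue uplift: {uplift:.1%}")  # 2.1%
```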
Regulators are interested in achieving competition in the regional market for Internet
services, especially by encouraging ISPs to “climb the ladder of investment”.
Furthermore, as was mentioned in the Athens workshop, they seek to identify anti-
competitive tactics by providers with significant market power.
4.1.1.4 Mobility
End-Users want to have a seamless Internet connection regardless of their location and
moving speed (e.g., when travelling on a highway). In one of the focus groups held at the
Athens workshop, a representative of the Users stakeholder group explicitly mentioned
his desire for mobility of his fixed Internet connection profile when visiting other places.
Primitive roaming capabilities are becoming increasingly popular in several European
countries, but no QoS differentiation is provided yet. For example, BT [8] in the UK and
Forthnet in Greece allow their broadband subscribers to connect to their WiFi networks in
several locations. If handing over to another base station is followed by a change in
access technology (e.g., when moving from a 3G network to a GPRS one), this can affect
the user experience by a) decreasing the throughput and b) decreasing battery life. This
means that an End-User may have strict preferences about the access technology to be
used.
Edge ISPs operating a mobile network are interested in providing seamless connectivity
and balancing network performance with cost. This involves good placement of base
stations and appropriate configuration, so that not only are coverage holes minimized,
but overlaps also exist, allowing the provider to perform load balancing in case of a
sudden increase in users.
Regulators are interested in guaranteeing that Universal Service obligations are met, for
example coverage of at least X% of the population.
4.1.1.5 Network Security
Both Edge and Transit ISPs find peering links cost-effective (mainly due to lower transit
costs). But deploying and operating peering links involves capital and operating
expenditures even though no financial transactions usually take place between peers.

[8] http://www.productsandservices.bt.com/consumerProducts/displayTopic.do?topicId=34239
Furthermore, if the ratio of outbound versus inbound traffic (between the two peers) is
unbalanced, it can lead to renegotiation of the interconnection agreement. Thus, ISPs
have an incentive to use traffic monitoring tools to check whether each traffic ratio is
balanced or not.
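The check described above can be sketched as follows; the 2:1 tolerance used here is an assumption for illustration, since the actual threshold is set in each peering agreement.

```python
# Minimal sketch of a peering traffic-ratio check.
def ratio_balanced(outbound_gb: float, inbound_gb: float,
                   max_ratio: float = 2.0) -> bool:
    """True if the outbound/inbound traffic ratio stays within the
    tolerated range (max_ratio is an assumed contractual limit)."""
    if outbound_gb <= 0 or inbound_gb <= 0:
        return False
    ratio = outbound_gb / inbound_gb
    return 1.0 / max_ratio <= ratio <= max_ratio

ratio_balanced(900, 500)   # 1.8:1 -> within a 2:1 tolerance
ratio_balanced(1500, 400)  # 3.75:1 -> breached; renegotiation risk
```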
The fear of imbalanced traffic ratios can negatively affect incentives for deploying cache
servers of popular content and making it available to all peers. Thus, ISPs acting as
content distributors (a development supported by recent research efforts on
Information-centric networking) want to be able to perform admission control on requests
for content delivery.
All types of ISPs have an incentive not to announce sensitive information such as network
topology and dimensioning (including backup paths). This has significant implications
when ISPs must collaborate and monitor their compliance with the agreed contract terms.
In Focus Group 3 of the Athens Workshop, titled ‘Interconnection agreements and
monitoring’ [9], for example, it was mentioned that ISPs have difficulties in agreeing on
which traffic properties will be monitored. We should note that the consequences are
more severe for the Edge ISPs: if no adequate monitoring is in place to identify the ISP
responsible for a QoS violation, then the penalty for the violation will be assigned to the
Edge ISP servicing the complaining customer.
Monitoring technologies can also be useful to Edge ISPs for tracking user conformance
with the Terms of Service. For example, Edge ISPs are expected to fight against the
introduction of technologies for sharing a user’s Internet connection. This is the reason
why the ULOOP project has surveyed the existing legal landscape regarding connection
sharing in several European countries.
At the same time, Users and Content Providers have no means to back up their claims
when they experience degraded quality. This issue was raised during the Athens
Workshop, where the ‘User’ representative was unsure whether he would actually get the
premium experience he had paid for. Even though, for fear of losing the customer, a
Source ISP has the incentive to admit its fault, problems are expected when the
disruption is rooted at a Transit or Destination ISP.
Information Providers, such as trusted third parties or websites benchmarking ISPs’
performance, are willing to bring transparency into the market. As discussed during the
3rd Focus Group in Athens, technically savvy users promote competition by periodically
announcing their findings on websites accessible to potential customers.
Policy makers, such as regulators, are interested in Internet accounting for the
development of policy decisions (e.g., with respect to privacy concerns) as well as the
enactment of policies (e.g., with respect to data retention and lawful interception). As
explained in more detail in Section 8, this can pose significant uncertainty and burden on
ISPs, who are interested, for operational and strategic managerial reasons, in a number
of accounting applications. Furthermore, different departments of the same ISP may have
conflicting interests, since what a network operator wants to account for (managerial
dimension) may be in contrast with what it is able to account for (technical and economic
dimensions).
End Users place a high value on privacy, anonymity and protection from spam, which
deteriorates their experience. Regarding privacy, they worry that other parties can
identify personal and/or sensitive information based on their online behaviour. Even when

[9] http://www.seserv.org/athens-ws-1/focus-groups/fg3-report
a user trusts that a particular information service provider will not reuse such information
for other purposes, the increasing complexity of Internet value networks means that
several new intermediaries can be present without the user’s consent. This trend is
evidenced by the proliferation of cloud services for cost-effective data processing and
storage, and of ad-hoc wireless networks [10] (and similar collaborative Future Internet
technologies) for low-carbon-footprint networking. Furthermore, End-Users would be
more willing to participate in collaborative technologies, such as relaying other people’s
packets towards their destination, if they were compensated for doing so through a
reciprocal scheme. Since user devices usually have short-range low-power
communication interfaces, such as WiFi and Bluetooth, as well as long-haul
energy-hungry communication interfaces, such as 3G, it may be more energy efficient to
relay data via a path of low-power hops than via one long-haul transmission.
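The energy trade-off just described can be made concrete with a small sketch; both per-MB energy figures below are assumptions chosen only to show the shape of the trade-off, not measurements.

```python
# Illustrative comparison of multi-hop short-range relaying versus
# one direct long-haul (e.g., 3G) transmission.
E_LONGHAUL_J = 2.0    # assumed energy per MB over a 3G link
E_SHORTRANGE_J = 0.3  # assumed energy per MB per WiFi/Bluetooth hop

def multihop_cheaper(hops: int) -> bool:
    """True if `hops` short-range relays use less total energy than
    one direct long-haul transmission, under the assumed costs."""
    return hops * E_SHORTRANGE_J < E_LONGHAUL_J

multihop_cheaper(3)  # 3 * 0.3 = 0.9 J < 2.0 J -> relaying pays off
multihop_cheaper(8)  # 8 * 0.3 = 2.4 J > 2.0 J -> direct 3G is cheaper
```

Under these assumed costs, relaying remains attractive only while the hop count stays below the break-even point, which is one reason reciprocal compensation schemes matter.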
4.1.1.6 Naming/Addressing
The Content Owner’s / Information Service Provider’s interest is to make his content
available to as many end-users as possible, as soon as the content is updated, and at a
good level of quality. Furthermore, a Content Owner wants to authenticate and authorize
access to its content/service, making sure that access is restricted to those having the
right to do so. In one of the scenarios discussed in the Athens workshop, in particular the
case where a regional CDN is controlled and owned by the Edge ISP, the Content Owner
is concerned about his content being banned by the Edge ISP, unless they have
established an agreement. Therefore, the Content Owner is highly interested in who [11]
resolves the name to a network address, and in whether and what kind of agreements he
has with that entity; the Content Owner itself would be interested in taking this role (by
setting up DNS servers, or their equivalent in the Information-Centric Networking
paradigm).
The Edge ISP has a strong incentive to deploy such a regional CDN (Content Delivery
Network): in this way he enters the content delivery market and increases his revenues,
since he will not only sell connectivity to his customers, but also access to popular
content. Furthermore, the deployment of a localized CDN is beneficial for him because
serving requests from local caches reduces his transit costs. In particular, he would
configure the local DNS (Domain Name Service) servers so that his customers are
served from servers residing in his domain or in a peering domain. When multiple sources
can serve a request, two ISPs (either Edge or Transit) can have different preferences for
the one to be used, due to cost concerns, performance attributes, regulatory constraints,
or other local policies.
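The Edge ISP’s cost preference when several sources can serve a request can be sketched as a simple ranking: local caches first, then peering domains, with transit sources as the costly last resort. The server names, relationship labels and cost weights below are hypothetical.

```python
# Relative transit-cost preference of an Edge ISP (assumed weights).
COST = {"local": 0, "peer": 1, "transit": 2}

def rank_sources(candidates):
    """Order (server, relationship) pairs by the ISP's cost preference."""
    return sorted(candidates, key=lambda c: COST[c[1]])

ranked = rank_sources([("cdn.example.net", "transit"),
                       ("cache1.isp.example", "local"),
                       ("cache.peer-isp.example", "peer")])
# the local cache comes first; the transit source is the last resort
```

A resolver configured this way realises the behaviour described above: customers are steered towards servers in the ISP’s own domain or a peering domain whenever such a copy exists.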
Transit ISPs are interested in carrying traffic between Edge ISPs and tend to be against
new naming/addressing schemes that support users in searching for content sources. If
Information-centric networking were adopted, it is expected that Transit ISPs would be
forced to evolve into providers offering directory services between islands of local CDNs,
i.e., the interconnectivity provider would sell not only connectivity but also access to
content. Supporters of this opinion in the Athens workshop claim that Edge ISPs will need
to provide extended coverage if they want to compete with large CDNs such as Akamai.
The reason is that Edge ISPs would very likely depend on the Transit ISPs if they need to
provide their customers with access to all the available content items.

[10] Eavesdroppers can monitor and extract information from all traffic forwarded towards the ultimate destination.
[11] The actor who controls the name resolution is able to restrict or even determine the available options to others (users, other ISPs, etc.).
The CDN provider is interested in increasing the content volume that he delivers to End-
Users in order to maximize his revenue. The big question regarding CDN providers is
whether and why they would jump on the Information-centric networking bandwagon.
Why, for instance, would they advertise information given to them by Content Owners to
competitors, instead of improving their proprietary technology and becoming more
attractive?
End Users place a high value on accessing the content and services of their choice.
Some of them are expected to worry about ISPs’ ability to restrict their experience by
resolving a name/URL to the IP address of servers run by their ISP (or by refusing to
serve the URLs of competing providers).
4.1.2 Cloud
This section focuses on the interests of major stakeholders in the cloud marketplace.
These interests are grouped based on the cloud functionalities presented in Section 3.2;
namely: Virtualization, Execution, QoS and Security.
4.1.2.1 Virtualization
Cloud operators are interested in meeting customer demand and expectations with the
lowest possible capital investments and operational expenditures. These interests are
common to providers in general, but economies of scale and infrastructure virtualization
can give cloud operators a significant advantage when dimensioning their resources
(e.g., compute, storage and networking). However, virtualization requires mechanisms
that allow customers to configure and manage the resources they will use, and to upload
workloads and data. Thus, another important driver of both the provider’s costs and
customer satisfaction is the set of available virtual machine (VM) image configurations. A
wide range of such images increases the customer base and provides more flexibility to
customers, at the expense of increased complexity and thus operational costs.
The main reasons for Cloud customers to build a business case and move into the Cloud
are economic ones. Information providers, large businesses and scientific institutions
(e.g., those running resource-intensive experiments) can benefit from lower investments
in IT infrastructure. They are also interested in quicker time to market and in further
operational benefits, such as flexibility and agility in accessing multi-site and
heterogeneous cloud resources (e.g., computation, storage and networking).
There are several factors that make the decision of whether to add Cloud Operators to
the value chain a difficult task. For example, existing Information providers have to
integrate their information systems with cloud services and educate their employees.
Moreover, potential customers may require a specific software configuration (efficiency,
adaptability). Furthermore, as Javier Salcedo from Arsys mentioned at the Brussels
SESERV Workshop, the head of the IT department may find his influence in the company
diminished as the number of supervised employees is reduced [12].
The above barriers can be seen as advantages for new Information providers, who have
the ability to start from scratch and follow current best practices. But even in those cases,
several issues remain, such as lock-in to a single Cloud Operator. Cloud customers have
an interest in reusing the configuration of their infrastructure when deciding to hop from
one Cloud Operator to another. This brings increased competition to the market and
minimizes service disruptions (efficiency, portability).

[12] http://www.slideshare.net/ictseserv/javier-salcedo-cloud-computing-seserv-se-workshop-june-2012
Brokers can help match supply and demand for cloud resources by providing useful
information and added-value services to both sides. For example, they may provide
directory services based on customer preferences, including aspects like cost,
trustworthiness and eco-efficiency. Brokers are interested in improving the cloud
customer experience by providing a feature-rich and easy-to-use interface, which brings
benefits to infrastructure hosts by means of increased total workload. However, when
brokers interoperate with more than one infrastructure host, they rely heavily on the
existence of standardized interfaces in order to keep the complexity (and consequently
the costs) as low as possible. Such interfaces allow Brokers to move workloads and data
on demand between different Cloud operators. But, as mentioned at the 8th cluster
meeting in Brussels [13], an established Cloud Operator has limited incentives to make
switching among different providers easy, for example by adopting standardised
technologies.
Regulators are interested in achieving competition in the market for cloud services.
4.1.2.2 Execution
In the IaaS cloud delivery model, the execution functionality mostly involves decisions
within each individual machine. Infrastructure hosts are interested in maximum
utilisation of their resources and in fair usage across customers.
Cloud customers, on the other hand, are interested in getting the service that providers
have promised. For example, performance can be lower than expected if the
infrastructure cannot isolate the execution environments of different customers using the
same physical machine. A usual problem in multi-party value networks like cloud
computing is the principal-agent (PA) problem [14]: due to information asymmetry, it is
difficult for the customer to monitor the results of the provider’s decisions.
4.1.2.3 Cloud QoS
Some Cloud customers would highly value the ability to control where computations take
place. This is considered important for customers with special functional requirements,
for example those demanding exclusive access to some resources for integrity of results
and minimal side effects from other tasks. Consequently, conflicts may arise when
customers ask for exclusive access to machines. Such a tussle between two types of
customers is described in detail in Section A.7.
Cloud operators and Brokers offering QoS-aware services, such as selection of a
particular machine or reservations in the future, face increased costs due to more
complex management systems and the need for more spare resources. Thus, asking for
premium prices can be justified.
4.1.2.4 Cloud Security
Cloud customers may need access to monitoring data regarding the performance of the
application and of the underlying infrastructure (observability of system properties), in
order to make informed future decisions. Such information can be critical when the

[13] Yuri Demchenko, Sergi Figuerola, “Interoperability in Provisioning Cloud based Infrastructure Services (on-demand)”, 6 October 2011, Brussels, available online at http://ec.europa.eu/information_society/events/cf/fnc8/item-display.cfm?id=7440
[14] When one party (the principal) delegates a task to another party (the agent), a principal-agent relationship is established.
Cloud customer is part of a longer value chain/network (providing services to other
providers or retail customers based on SLAs), or when demand for IT infrastructure is
very dynamic. There may even be cases where cloud customers have special
requirements regarding the time horizon of monitoring data, in other words, for how long
the data will remain available. Furthermore, some Cloud customers would highly value
the ability to control ‘who’ has access to their data.
Legislators are also interested in the location of the data being processed. The reason
is that different countries pose different restrictions on the way data should be treated
(e.g., for how long data must remain available after a transaction has finished).
Cloud Operators consider the public release of monitoring information about their IT
infrastructure, or of business decisions/policies, a huge security risk. They fear that if a
customer could observe physical machine operation, it would be feasible for a competitor
to pretend to be a customer and infer the operator’s/broker’s resource-allocation policies.
In this way the competitor could compete more effectively. Furthermore, infrastructure
hosts will also need to avoid becoming liable for illegal uses of the software, including
licence violations.
4.2 Main Tussles in Networking and Cloud Functionalities
This section summarizes the tussles and their spillovers between functionalities, based
on the detailed tussle analysis of the 11 research projects that can be found in
Appendix A. A spillover refers to a situation where a particular functionality impacts a
second one. Although this impact can be positive, it is important to investigate the
presence of negative effects. Such negative effects usually occur when the assumptions
made during the design phase of a technology implementing a particular functionality no
longer hold, allowing a tussle of a different functionality to reach an unfair outcome. They
can also occur when a technology that is used during a tussle is repurposed and used in
a way that imbalances a different tussle of another functionality (which may involve
different stakeholders).
Figure 10 presents a cartography of the main tussles in the Networking and Cloud
computing research areas, the functionalities they refer to, and their relationships.
Networking functionalities are represented by rectangles, Cloud functionalities by
rounded rectangles. Functionalities have different colours and include the related tussles
(yellow oval shapes). Furthermore, the main tussle stakeholders appear inside each oval
shape, and tussle spillovers are shown with red, dotted arrows.
Please note that not all cloud-related functionalities appear in Figure 10. Due to limited
resources, SESERV could apply the detailed tussle analysis for the OPTIMIS and
BonFIRE projects only to those tussles that were found most interesting to those projects.
As described in Sections 4.1.2.1 and 4.1.2.2, there are several conflicts of interest
between stakeholders of the virtualization and execution functionalities, so several
further tussles could be studied in the future.

Figure 10: A Cartography of Tussles
The following tables provide a short description of the tussles that have been identified,
grouped by the functionality that they refer to. More information about the tussles and
their evolution can be found in Appendix A.
Table 3: Short Description of Tussles Related to the Transmission Network Functionality

- Traffic Engineering 1 (ETICS, A.1.2). Stakeholders: ISP-1 vs. ISP-2 (applies to both Edge and Transit ISPs). Description: In the absence of incentive-compatible technologies for offering QoS across multiple domains, each ISP performs traffic engineering to optimize its own network usage. This selfish behaviour creates instability in the Internet. Spillovers: incoming from Control Interconnection Agreements (ETICS, A.1.2).
- Traffic Engineering 2 (PURSUIT, A.4.1). Stakeholders: Edge ISP vs. Broker, Transit ISP. Description: The Broker’s decisions (about the content provider to serve an end-user’s request) increase the interconnection fees that the Edge ISP has to pay to its Transit ISP. Spillovers: incoming from Spam Subscriptions (PURSUIT, A.4.1).
- Traffic Engineering 3 (PURSUIT, A.4.3). Stakeholders: ISP-1 vs. ISP-2 (applies to both Edge and Transit ISPs). Description: Under existing peering interconnection agreement policies, ISP1, having deployed a cache serving not only its own subscribers but those of ISP2 as well, may be asked to pay for the imbalanced traffic ratio. Spillovers: incoming from Control Cache Access (Security functionality).
- Connection Sharing (ULOOP, A.5.3). Stakeholders: Users, Technology Maker vs. Regulator vs. ISPs. Description: Technology allowing users to relay roaming users’ traffic towards its destination can have a negative net benefit for ISPs, who may start enforcing the Terms of Service of users’ contracts and ask the regulator to ban such technologies. Spillovers: incoming from Monitor User Traffic (Security functionality).
- Domain Optimization (UNIVERSELF, A.2). Stakeholders: Domain A, User vs. ISP vs. Domain B. Description: Two domains A and B of the same ISP aim at reducing operating expenditures by a certain factor in order to meet management goals. If, however, decisions by a single domain are taken independently of the state of other domains, this can lead to poor customer experience, instability and unmet management goals. Spillovers: outgoing to Bandwidth Sharing (Traffic Control functionality).
- Traffic Forwarding (ULOOP, A.5.1.6). Stakeholders: Selfish users vs. Altruistic users. Description: A selfish user may not forward other users’ traffic (in order to save energy) even though altruistic users forward the former’s traffic. Spillovers: incoming from and outgoing to Reciprocal Traffic Forwarding (Security functionality).
- Private Traffic Forwarding (C2POWER, A.6.1.4). Stakeholders: Relay users vs. Senders. Description: Relay users can eavesdrop on the sender’s communication. Spillovers: incoming from Traffic Encryption (Security functionality); outgoing to Handover (Mobility functionality).

Table 4: Short Description of Tussles Related to the Naming/Addressing Network Functionality

- Spam Subscriptions (PURSUIT, A.4.1). Stakeholders: ISP’s broker, Content Owner vs. 3rd-party Broker vs. User. Description: The ISP’s broker, responsible for matching subscribers and publishers, issues a false subscription for an information item it wants to advertise to Users, receiving some kind of revenue for this ‘advertisement’ from the publisher (content owner). Spillovers: outgoing to Traffic Engineering 2 (Transmission functionality).
- Control Server Selection (PURSUIT, A.4.2). Stakeholders: Edge ISP vs. Broker vs. Transit ISP. Description: In the Information-centric Networking paradigm, brokers have control over the server that will be used for fulfilling an end-user request. If the IBP’s broker is selected by the user, then it will neglect servers found at an Edge ISP’s peers (which would be the most preferred for the Edge ISP in order to save transit costs). Spillovers: none.
- Control Server Advertisements 1 (SAIL, A.3.3). Stakeholders: Edge ISP1 vs. Edge ISP2. Description: In the Information-centric Networking paradigm, an Edge ISP may need to control the set of content items that can be advertised to (and thus will be reachable from) a peered Edge ISP’s customers. Spillovers: none.
- Control Server Advertisements 2 (SAIL, A.3.4). Stakeholders: Legacy CDN providers vs. Edge ISPs deploying ICN technology. Description: In the Information-centric Networking paradigm, an Edge ISP will compete directly with legacy CDN providers. However, there can be cases where both have the incentive to collaborate, but they would need to control the set of content items that can be advertised. Spillovers: none.

Table 5: Short Description of the Tussle Related to the Traffic Control Network Functionality

- Bandwidth Sharing (UNIVERSELF, A.2). Stakeholders: Existing Users vs. Handed-over Users. Description: New users cause interference to existing users, leading to poor cell performance (the available spectrum is the scarce resource). Spillovers: incoming from Domain Optimization (Transmission functionality).

Table 6: Short Description of Tussles Related to the QoS Network Functionality

- Control Interconnection Agreements (ETICS, A.1.2). Stakeholders: Small ISPs vs. Regulator vs. Large ISPs. Description: A large ISP wants to have control over the major properties of premium interconnection services so that smaller ISPs do not gain an advantage when competing for retail customers (since the large ISP will have increased CAPEX and OPEX in order to upgrade its network). Spillovers: outgoing to Traffic Engineering 1 (Transmission functionality).
- Control Reserved Capacity (ETICS, A.1.3). Stakeholders: Source (Edge) ISP, Destination (Edge) ISP vs. Transit ISP. Description: ISPs have the incentive to keep backup paths under-dimensioned so that return on investment is maximised. But if no adequate monitoring is in place to identify the ISP who caused the SLA violation, then the penalty would be assigned to the service originator, which is advantageous for Transit ISPs (since they are rarely originators). Spillovers: incoming from Identify Probing Packets (Security functionality); incoming from Balance Trust and Trustworthiness (Security functionality).
- Control Content Freshness (SAIL, A.3.2). Stakeholders: Content Owner vs. Edge ISP. Description: The Edge ISP is reluctant to regularly update content items in local caches, to avoid an increase of interconnection costs. Spillovers: none.

Table 7: Short Description of Tussles Related to the Network Security Functionality

- Identify Probing Packets (ETICS, A.1.3). Stakeholders: Source (Edge) ISP, Destination (Edge) ISP vs. Broker vs. Transit ISP. Description: ISPs would have the incentive to circumvent an SLA monitoring mechanism based on probing packets. If they could predict the probing packets, they would forward them preferentially and rarely pay penalties. Spillovers: outgoing to Control Reserved Capacity (QoS functionality).
- Balance Trust and Trustworthiness (ETICS, A.1.3). Stakeholders: Edge ISPs vs. Information Provider vs. Users, Content Providers. Description: Without an end-to-end monitoring mechanism, Users and Content Providers cannot back up their claims of SLA violations. But Information Providers can announce comparison results and bring some transparency to the market. Spillovers: outgoing to Control Reserved Capacity (QoS functionality).
- Control Cache Access (PURSUIT, A.4.3). Stakeholders: Edge ISP1 vs. Edge ISP2. Description: An imbalance of traffic on the peering link leads to admission control. Spillovers: outgoing to Traffic Engineering 3 (Transmission functionality).
- Control Content Access (SAIL, A.3.1). Stakeholders: Edge ISP vs. Content Owner. Description: An Edge ISP having cached a particular content item can bypass the AAA of the Content Owner. Spillovers: none.
- Reciprocal Traffic Forwarding (ULOOP, A.5.1.6). Stakeholders: Selfish Users vs. Technology Maker. Description: The Technology Maker has to implement economics-aware security mechanisms (which can be costly) in order to incentivise users to relay traffic from other users. Spillovers: incoming from and outgoing to Traffic Forwarding (Transmission functionality).
- Identify User Type (ULOOP, A.5.3). Stakeholders: Edge ISPs vs. Users (resellers). Description: The ISP has the incentive to identify whether a sender is a roaming user or not. Spillovers: none.
- Monitor User Traffic (ULOOP, A.5.3). Stakeholders: Edge ISPs vs. Users (resellers). Description: Edge ISPs would like to lower their costs by offloading traffic to wireless mesh networks, but without losing significant revenues from roaming users. Spillovers: outgoing to Connection Sharing (Transmission functionality).
- Traffic Encryption (C2POWER, A.6.1.4). Stakeholders: Senders vs. Receivers/Gateways. Description: Senders want encryption, but this creates extra cost for receivers and for the wireless mesh network’s gateways to the Internet, which must decrypt packets. Spillovers: outgoing to Private Traffic Forwarding (Transmission functionality); outgoing to Handover (Mobility functionality).

Table 8: Short Description of the Tussle Related to the Mobility Network Functionality

- Handover (C2POWER, A.6.1.4). Stakeholders: Sender vs. Technology Maker. Description: If the Traffic Encryption mechanism is performed between the sender and the Gateway, it poses challenges when handing over to another base station. Spillovers: incoming from Private Traffic Forwarding (Transmission functionality); incoming from Traffic Encryption (Security functionality).

D2.2 Final Report on Economic FI Coordination Seventh Framework CSA No. 258138
Public

Page 36 of 161 Version 2.0
© Copyright 2012, the Members of the SESERV Consortium

Table 9: Short Description of Tussles Related to the Cloud QoS Functionality

Name (Project, Section) | Stakeholders | Short Description | Spillovers
Reserve computing resources (BonFIRE, A.8) | Critical Experimenter vs. Broker, IaaS host vs. Non-Critical Experimenter | Experimenters of critical systems and those of non-critical systems contend for computing resources. Using tiered pricing and allowing advanced requirements to be specified can lead to an equilibrium. | Incoming spillover from Hide sensitive information (cloud-computing security functionality)
Select computing resources (OPTIMIS, A.7) | User vs. Broker | The User and the Broker can have conflicting interests in site (or even physical machine) selection. If the User is empowered and can select the site/machine, then the Broker could react by adapting the prices, leading to a system equilibrium. | -

Table 10: Short Description of the Tussle Related to the Cloud Security Functionality

Name (Project, Section) | Stakeholders | Short Description | Spillovers
Hide sensitive information (BonFIRE, A.8) | Broker, IaaS host vs. Experimenter | Experimenters need enough information to verify that the requested resources have been used, but the Broker and the IaaS host fear that this information will be used by competitors. Allowing Brokers and IaaS hosts to define several aggregation levels of the available monitoring information (perhaps together with an extra charge), and Experimenters to select the one that suits their needs, would lead to an equilibrium. | Outgoing spillover to Reserve computing resources (cloud-computing QoS functionality)
4.3 A Consolidated View of Stakeholders and Tussles
This section combines the findings of Sections 4.1 and 4.2 and Appendix A in order to identify critical functionalities, or even critical tussles. The criticality level can be estimated by looking at graph-theoretic metrics. More specifically, by associating tussles with nodes and spillovers with edges, we can calculate the indegree and outdegree of different functionalities, search for loops/cycles of spillovers, etc. For example, functionalities with a high outdegree could be considered high-priority ones, because they can have negative effects on multiple functionalities. These findings are then used for making recommendations to policy makers and technology makers on the introduction of Future Internet technologies.
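This degree-counting view can be sketched in a few lines of code. The edge list below is an illustrative subset, containing only the spillovers explicitly enumerated in this section (four from Security to Transmission, two to QoS, one to Mobility, one back from Transmission to Security, and one from Cloud Security to Cloud QoS), so indegrees other than Security's are partial:

```python
from collections import Counter

# Spillovers identified in the tussle analysis. Illustrative subset:
# only the edges explicitly enumerated in Figure 11 and footnote 15.
spillovers = (
    [("Network Security", "Network Transmission")] * 4
    + [("Network Security", "Network QoS")] * 2
    + [("Network Security", "Network Mobility")]
    + [("Network Transmission", "Network Security")]
    + [("Cloud Security", "Cloud QoS")]
)

outdegree = Counter(src for src, _ in spillovers)
indegree = Counter(dst for _, dst in spillovers)

# High-outdegree functionalities are high-priority: their missing
# features destabilise tussles in many other functionalities.
priority = sorted(outdegree, key=outdegree.get, reverse=True)

def has_cycle(edges):
    """Detect a loop of spillovers via depth-first search."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, ()))
    return any(visit(node, set()) for node in graph)

print(priority[0], outdegree[priority[0]])  # Network Security 7
print(has_cycle(spillovers))                # True: Security <-> Transmission
```

With a complete edge list, the same counters reproduce Table 11 exactly; a detected cycle signals functionalities whose tussles feed back into each other.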
Figure 11, as a consolidated version of Figure 10, provides a high-level view of the negative externalities between networking functionalities. These relationships focus only on the negative aspects; our purpose is to identify missing complementary technologies. A dotted red arrow represents one or more spillovers from functionality A to functionality B. For example, there are four arrows pointing up from Security to Transmission, since four tussles in the latter functionality were found to be triggered by tussles related to the Security functionality. These spillovers have been identified during the detailed tussle analysis of the 8 research projects studied in detail. Each one represents a scenario where the absence of some features in functionality A makes functionality B unstable. More specifically:
1. The ‘Control Cache Access’ tussle (described in Appendix A.4.3) could not reach a stable outcome with the available technologies, causing a spillover to the ‘Traffic Engineering 3’ tussle.
2. The ‘Reciprocal Traffic Forwarding’ tussle (described in Appendix A.5.1.6) could not reach a stable outcome with the available technologies, causing a spillover to the ‘Traffic Forwarding’ tussle.
3. The ‘Monitor User Traffic’ tussle (described in Appendix A.5.3) could not reach a stable outcome with the available technologies, causing a spillover to the ‘Connection Sharing’ tussle.
4. The ‘Traffic Encryption’ tussle (described in Appendix A.6.1.4) could not reach a stable outcome with the available technologies, causing a spillover to the ‘Private Traffic Forwarding’ tussle.
Similarly, one arrow is pointing down from the Transmission to the Security functionality
because one tussle in the Transmission functionality seems to have a negative impact on
Security functionality.

Figure 11: A High-level View of the Spillovers Amongst Functionalities
In other words, if a technology a* implementing a set of missing features of functionality A were designed in accordance with the ‘Design for Tussle’ principle, then it would help the stakeholders of the other functionality B, to which a tussle spillover from A applies, to reach a fair and stable outcome.
Note that, given the limited resources of the SESERV action, this analysis is not meant to be exhaustive. Although the selected projects and their case studies cover a wide range of Future Internet research activities, it is expected that tussles (and relationships/spillovers between them) exist which do not appear in this figure. Furthermore, there are probably technologies inside a particular functionality that would lead to a stable outcome but were not identified during our interactions with the projects or the focus groups with participants from the wider Future Internet community. We believe, however, that the trends identified are more broadly valid, even though the sample of tussles is limited.
Table 11: Aggregates of Outdegree and Indegree per Functionality

Functionality | Outdegree | Indegree
Network Security | 7 | 1
(Network) Transmission | 3 | 6
Network QoS | 1 | 2
(Network) Mobility | 0 | 2
(Network) Naming/Addressing | 1 | 0
(Network) Traffic Control | 0 | 1
Cloud Security | 1 | 0
Cloud QoS | 0 | 1
(Cloud) Execution | 0 | 0
(Cloud) Virtualization | 0 | 0

Examining Figure 11, we can calculate the aggregate number of outgoing and incoming spillovers per network and cloud functionality. Table 11 shows that some network functionalities have a high number of either originating or terminating spillovers. For example, the ‘Security’ functionality has an outdegree of 7, which can be considered high relative to the number of projects surveyed and tussles identified, and seems to pose significant restrictions on other functionalities¹⁵. This means that tussles in other functionalities could be resolved more smoothly if a number of Security-related mechanisms were in place.
On the other hand, the ‘Transmission’ functionality (and especially routing) seems to suffer the most from the lack of features in other functionalities. This finding is not surprising: Transmission includes fundamental atomic functionalities for providing QoS across ISPs, such as forwarding and routing. It is very unlikely that the existing, unstable situation will change in the future unless security-related mechanisms, such as monitoring, are introduced that:
1. are implemented in a way that meets the economic requirements of ISPs regarding that particular functionality, and
2. balance trust and trustworthiness across different ISPs.
Otherwise, ISPs will keep using technologies for providing end-to-end QoS even though they were not designed for that purpose. The best-known technology used for traffic engineering is the BGP routing protocol, even though it was designed for a single service class [40].

¹⁵ There are four outgoing spillovers to the Transmission functionality, two to the QoS functionality and one to the Mobility functionality.


Table 12: Number of Distinct Tussles in Which Each Stakeholder Appears, per Network Functionality

Stakeholder | Transmission (7 tussles) | QoS (3 tussles) | Traffic Control (1 tussle) | Mobility (1 tussle) | Naming/Addressing (4 tussles) | Security (8 tussles)
Edge-ISP | 5 | 3 | 1 | 0 | 3 | 6
Transit-ISP | 1 | 1 | 0 | 0 | 1 | 1
User | 5 | 0 | 1 | 1 | 1 | 5
Information Provider | 2 | 0 | 0 | 0 | 2 | 2
Infrastructure Provider | 0 | 0 | 0 | 0 | 0 | 1
Regulator | 1 | 0 | 0 | 0 | 0 | 2
Content Owner | 0 | 1 | 0 | 0 | 1 | 1
Technology Maker | 1 | 0 | 0 | 1 | 0 | 1

Table 12 provides, for each networking functionality, the number of tussles in which each major stakeholder role appears. We can see that ISPs, Users and Information Providers appear in the identified tussles more frequently than other stakeholders, especially Infrastructure Providers, Content Owners and Technology Makers.
Note that stakeholder roles where two (or more) instances participate in the same tussle are counted only once. Similarly, the stakeholders appearing in a tussle refer only to those roles that are directly involved. For example, even though an Edge-ISP offering connectivity services to Users is frequently the retail arm of an Infrastructure Provider (e.g., last-mile providers can act as Edge-ISPs), the latter do not appear in the table whenever Edge-ISPs do. Furthermore, some stakeholder roles are top-level ones (such as Information Providers and Infrastructure Providers), others are represented by a single second-level stakeholder role, while Connectivity Providers are decomposed into Edge-ISPs and Transit-ISPs. Stakeholders may not appear in some functionalities because they were not reacting to the particular tussles that have been analysed. For example, End-Users are interested in QoS, but they were not directly involved in the particular tussles we have focused on.
Similarly, Table 13 gives, for each cloud-related functionality, the number of tussles in which each stakeholder appears. As expected, Infrastructure Providers (such as cloud hosts) and special Information Providers (brokers), together with Users of cloud services, are the most popular stakeholders. Note that several stakeholder roles, most notably Information Providers such as ASPs (Application Service Providers) and Technology Makers, could be considered as potential users of cloud functionalities.
Based on the analysis of the wide (and sometimes conflicting) interests of the Future Internet stakeholders, as well as the resulting tussles and consequences that have been identified, Section 4.4 provides an overview of the lessons learnt in the form of recommendations.

Table 13: Number of Distinct Tussles in Which Each Stakeholder Appears, per Cloud Functionality

Stakeholder | Virtualization (0 tussles) | Execution (0 tussles) | Cloud QoS (2 tussles) | Cloud Security (1 tussle)
ISP | - | - | 0 | 0
User | - | - | 2 | 1
Information Provider | - | - | 2 | 1
Infrastructure Provider | - | - | 1 | 1
Policy Maker | - | - | 0 | 0
Content Owner | - | - | 0 | 0
Technology Maker | - | - | 0 | 0
4.4 Lessons Learnt
This section provides a set of 6 recommendations to research projects, providers and
policy makers for successfully redesigning and configuring Future Internet technologies.
These are:
1. Technology makers should understand major stakeholders' interests.
2. Technology makers should allow all actors to express their choices.
3. Technology makers should explore consequences and dependencies on
complementary technologies.
4. Technology makers and Providers should align conflicting interests through
incentive mechanisms.
5. Technology makers should increase transparency.
6. Policy makers should encourage knowledge exchange and joint commitments.
Recommendation 1: Technology makers should understand major stakeholders'
interests
One of the consequences of Internet commercialisation is the emergence of conflicting interests amongst its stakeholders. This phenomenon, which is not restricted to the Internet, was recognized by researchers long ago, who argue that identifying and studying the properties of the most important stakeholders is necessary. Section 3.3 provides a classification of Future Internet stakeholders that can be used as a starting point during the identification phase. When identifying the important stakeholders it usually helps to think of particular instances, but in order to study their interests more accurately it is important to think in terms of stakeholder roles. The idea is that the same entity can perform several tasks (or make several decisions) during a particular session, and these actions may be interrelated. For example, a mobile operator bundling connectivity and communication services in a pay-as-you-go plan would be interested in blocking network access to third-party VoIP services, whereas a provider of landline-only communications would accept calls from every operator. Section 4.1 provides an overview of the interests of major stakeholders in the Internet, broken down into several network and cloud functionalities.
The SESERV coordination action, based on the fact that the Internet is an environment of constant and fast change, argued for the need to study stakeholders’ interests along the time dimension. This goal can be achieved by applying the SESERV tussle analysis methodology, which examines how self-interested stakeholders would react when interacting with other rational stakeholders¹⁶. An example of this is the OPTIMIS project, where the tussle methodology showed that the long-term effects of the project’s technology would slowly alter the equilibrium, favouring one party over another. We should note that tussle analysis can be applied in association with other complementary approaches, such as the MACTOR methodology [32], [33], for identifying feasible future value networks¹⁷. Appendix A provides a detailed analysis of the way the interests of several stakeholders are expected to evolve over time for the case studies of 8 research projects.
The tussle analysis method introduced in this section is of interest to European research projects, since one of the key findings of the SESERV Oxford workshop (cf. [5] for details) shows that “many of the projects interviewed focused solely on direct controlling parties, those providing the funding, regulators and the consortium partners themselves. This means that some relevant considerations are missed, not least in considering the specific impact of the technology on those who will use or be affected by it.” The tussle analysis method constitutes a suitable tool for research projects to do exactly that, namely to assess socio-economic dimensions in a structured manner, considering all stakeholders of relevance to the technology a project develops or studies. Selecting the features of a technology in a more holistic way, by taking into account the interests of the major stakeholders, would lead to more attractive outcomes and increase the chances of that technology being adopted in the long term.
In order to get a representative idea of stakeholders’ interests it is important to interact with
actual representatives of the stakeholders themselves and ensure that they are motivated
to express their genuine thoughts. SESERV has been interacting with Internet
stakeholders during its lifetime in several events and organised a number of focus groups,
engaging participants into a dialogue. More details about the methodology followed when
organising and running these events can be found in D1.5 [15], while the actual
discussions have been reflected in the current report and D3.2 [34] through the tussle
analysis of related projects and the identification of the important social Future Internet
issues.
Recommendation 2: Technology makers should allow all actors to express their
choices
Clark et al. [27] suggested that Internet technologies should follow the “Design for Tussle” goal and be designed to allow variation in outcome. The rationale was that the Internet is a rather unpredictable system, and it is very difficult to assess whether a particular outcome will remain desirable in the future. A technology, such as an Internet communication protocol or an Internet-based application, compatible with the “Design for choice” principle should lead to a stable outcome by allowing all involved stakeholders to express their interests and affect the outcome.

¹⁶ Rational stakeholders, in the economics literature, are entities who always act in ways that maximize their net benefit. Irrational stakeholders, on the other hand, have less predictable behaviour, which can be attributed to several factors (e.g., altruism, imperfect assessment of the present situation). The latter stakeholder type is obviously a more realistic assumption when human entities are involved.
¹⁷ The details of combining the tussle analysis and MACTOR methodologies with UBM (Unified Business Modeling) will be explored by members of the SESERV and UNIVERSELF projects after the end of the SESERV project’s lifetime.
The “Design for choice” principle provides guidance in designing protocols that allow for
variation in outcome. This is related to Kelly Johnson’s KISS principle (“Keep It Simple,
Stupid”), which suggests that Internet technologies should not make any technical
assumptions or be optimized for a particular service or context. The reason is that these
assumptions may not be valid in the near future, posing obstacles to the smooth evolution
of the Internet.
Useful properties are:
• “Exposure of list of choices”, suggesting that the stakeholders involved must be given the opportunity to express multiple alternative choices, which the other party should also consider.
• “Exchange of valuation”, suggesting that the stakeholders involved should communicate their preferences with regard to the available set of choices (for instance, by ranking them in descending order).
• “Exposure of choice’s impact”, suggesting that the stakeholders involved should appreciate the effects of their choices on others.
• “Visibility of choices made”, suggesting that both the agent and the principal of an action must allow the inference of which of the available choices has been selected.
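A toy sketch may make these properties concrete. The minimal negotiate function and the service-class names below are illustrative assumptions, not part of any SESERV artefact:

```python
# A toy negotiation exercising three of the properties above: the
# provider exposes a list of choices, the customer communicates its
# valuation as a ranking (best first), and the outcome is visible to
# both parties. Service-class names are made up for illustration.

def negotiate(offered_choices, customer_ranking):
    for preferred in customer_ranking:    # "Exchange of valuation"
        if preferred in offered_choices:  # "Exposure of list of choices"
            return preferred              # "Visibility of choices made"
    return None  # no overlap: the tussle remains unresolved

offers = ["best-effort", "low-latency", "bulk"]
ranking = ["low-latency", "best-effort"]
print(negotiate(offers, ranking))  # low-latency
```

The point is not the trivial matching logic, but that each side can express and observe choices, so the outcome can vary as interests change.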
Note that the “Design for choice” principle is directly related to FIArch’s “Exchange of Information between End-Points” seed for new design principles, described in more detail in Section 7.2.1. The main difference is that the latter focuses on dealing with information asymmetry issues, and thus does not cover the “Visibility of choices made” property.
Recommendation 3: Technology makers should explore consequences and
dependencies on complementary technologies
The second principle implementing the ‘Design for Tussle’ goal suggests avoiding
spillovers to other functionalities and helps in identifying whether tussle spillovers can
appear. A protocol designer can check the following two conditions:
• “Stakeholder separation”, or whether the choices of one stakeholder group have
negative side effects on stakeholders of another functionality.
• “Functional separation”, or whether the absence of a particular feature in a set of technologies prompts users to repurpose a technology from another functionality. In this case, the technology maker should consider implementing the missing functionality in a complementary technology.
Recommendation 4: Technology makers and Providers should align conflicting
interests through incentive mechanisms
There are several cases where stakeholders have conflicting interests. In the case of a scarce Internet resource, for example IP addresses or bandwidth on a common network link, a number of stakeholders may compete to receive a sufficient share of the resource. Similarly, in a value chain where a principal delegates control to an agent for performing a task, these entities may have different preferences over the possible outcomes.
These situations are not limited to the Internet and have traditionally been dealt with through economic mechanisms. Such market mechanisms provide the opportunity to (i) align resource consumption with utility in case of contention for resources, and (ii) align costs and benefits in case of distributed control. The absence of suitable economic mechanisms can cause market failures; for example, if only flat pricing schemes were available, contention for resources would lead to the ‘tragedy of the commons’ [17].
The idea here is for technology makers to provide all involved stakeholders with access to the necessary information, as well as the necessary functionality, so that each participant can take an informed course of action and tussles can be resolved in a more predictable way. This would allow Connectivity/Information/Infrastructure Providers to announce multiple service plans and available configurations for meeting customer demand and covering their costs. For example, providing customers with the ability to control advanced service features (such as the exact network path to be used) increases the provider’s cost and should be charged at a higher rate. This gives providers greater flexibility in fine-tuning their offerings, and customers greater flexibility in selecting the service plan (covering service attributes and contract details such as the pricing scheme) that suits their usage profile.
There are several types of economic mechanisms that a technology maker could choose
for aligning the conflicting interests that have been identified. Such incentive structures
include charging schemes, rewarding systems and reciprocal mechanisms. The first
mechanism suggests that when a choice has negative impact on some other party, the
party making the choice should contribute to the cost. Similarly, the second mechanism
suggests that when a choice has a positive impact on other parties, the party making the
choice should also receive part of the benefits. On the other hand, reciprocal mechanisms
are suitable for relationships created for the mutual benefit between peers, e.g., transit
cost avoidance in the case of peering interconnection agreements.
The purpose of each charging mechanism should be to signal the current price at which a supplier is willing to provide a potentially scarce resource or a costly service feature, so that consumers can select product offerings and adjust their consumption according to the value the usage brings them and the cost it incurs. An unconditional flat rate is one extreme case: it is very popular with end users, but gives inappropriate incentives, since users are charged independently of their consumption. Congestion charging is another family of mechanisms, one that makes someone responsible for their actions’ impact (negative externality) on the system.
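The contrast between the two extremes can be sketched in a few lines; the monthly fee and per-GB congestion price below are made-up parameters for illustration:

```python
# Two extreme charging mechanisms for the same usage record.

def flat_rate_charge(sessions, monthly_fee=30.0):
    """Flat rate: the charge is independent of actual consumption."""
    return monthly_fee

def congestion_charge(sessions, price_per_congested_gb=0.5):
    """Charge only traffic sent while the shared link was congested,
    i.e. price the negative externality imposed on other users."""
    return sum(gb * price_per_congested_gb
               for gb, congested in sessions if congested)

usage = [(10, False), (2, True), (5, True)]  # (volume in GB, link congested?)
print(flat_rate_charge(usage))   # 30.0
print(congestion_charge(usage))  # 3.5
```

Under the flat rate, the heavy off-peak user and the peak-hour user pay the same; the congestion charge bills only the 7 GB sent while the link was congested.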
The main problem with usage-based pricing schemes for charging users is that they increase providers’ costs and customers’ uncertainty, and thus are not always easy to adopt. An ISP, for example, may be reluctant to apply such a scheme for fear that revenues from the users willing to pay for a less congested network (at the clearing price) will not cover the extra costs. Studies have shown that end users prefer simpler prices. For example, Odlyzko noticed in [35] that there are repeating patterns in the usage histories of several communication technologies, including telephone networks and the Internet, showing that sophisticated pricing schemes are difficult to deploy widely¹⁸. On the other hand, the application level has readily embraced payment for usage through the cloud model. It should be remarked that this was partly due to much lower barriers for new entrants (cf. Amazon’s dramatic entry into cloud provision) and to a simple utility model of payment according to units of resource used. Similarly, advanced charging schemes have also been proposed for the ISPs’ interconnection market¹⁹. For example, Falk von Bornstaedt, at the Athens workshop, described the “Sending Party Network Pays” principle as an enabler for a QoS-aware Internet²⁰. The reason is that it provides ISPs with the right incentives when they receive a packet marked as high-priority.

¹⁸ However, using special software and/or hybrid charging schemes could mitigate users’ reluctance. For example, flat pricing could be employed for all users, with an extra charge for the congestion that they cause. Users could then limit the congestion charges by configuring their software to react appropriately in case of congestion. This method of pricing might reduce traffic and free some scarce resources at peak hours.
¹⁹ For an extensive overview of existing and proposed interconnection models, the interested reader is referred to [39].
Reward systems provide incentives either for exceptional behaviour or for encouraging adoption. Rewards can be in real money, in virtual currencies, or in kind from the provider. The problem with financial compensation is that it increases the cost to the provider and thus cannot be applied systematically. For example, in the case of the ULOOP project it was found that covering part of the cost of relaying another user’s packet would help the system be adopted. But the economics literature suggests that no incentive-compatible mechanism can be produced without relying on subsidies from external sources²¹. Similarly, several implementation issues hinder offering virtual currencies. On the other hand, mechanisms that rely on providing rewards in kind (for example, preferential treatment) are much easier to implement.
Reputation systems bring memory to the system by keeping the history of participants’ interactions, so that the trustworthiness of entities can be estimated. The idea is that entities in collaborative environments will have an incentive to cooperate if they know that it will affect their future ability to participate in the system. Even though reputation systems are not trouble-free, such methods can deter the majority of misbehaving users. An interesting example of how Internet stakeholders can react in the absence of a technology that leads to a fair tussle outcome is the CIDR report²², which provides weekly information about the ISPs who act selfishly and do not follow good practices²³. Making this information available to mailing lists (or even workshops) where ISPs discuss operational issues and concerns (such as the NANOG²⁴ mailing list) acts as an incentive mechanism.
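A minimal reputation-system sketch along these lines may help; the scoring rule, threshold, and peer names below are illustrative assumptions, not a description of any deployed system:

```python
from collections import defaultdict

class ReputationSystem:
    """Minimal sketch: trustworthiness estimated as the fraction of
    recorded interactions in which a peer behaved cooperatively."""

    def __init__(self):
        self.history = defaultdict(list)  # peer -> list of True/False outcomes

    def record(self, peer, cooperated):
        self.history[peer].append(cooperated)

    def trust(self, peer, prior=0.5):
        outcomes = self.history[peer]
        if not outcomes:
            return prior  # no memory yet: fall back to a neutral prior
        return sum(outcomes) / len(outcomes)

    def may_participate(self, peer, threshold=0.6):
        # Peers whose estimated trustworthiness falls below the
        # threshold lose the ability to participate in the system.
        return self.trust(peer) >= threshold

rs = ReputationSystem()
for outcome in (True, True, False, True):
    rs.record("isp-a", outcome)
print(rs.trust("isp-a"))            # 0.75
print(rs.may_participate("isp-a"))  # True
```

The memory is exactly what creates the incentive: a single defection lowers the score and, repeated, eventually costs the peer its participation rights.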
In general, such incentive mechanisms can make the set of technologies that follow the ‘Design for Tussle’ paradigm more attractive to their intended users (such as end users, ISPs, etc.) and boost their adoption. We should note that the remaining technologies will probably still be available, but all users can then access the information necessary for making decisions that are aligned with their interests (and thus are not expected to act irrationally). This is related to FIArch’s “Sustain the Investment” seed for new design principles, which is discussed in Section 7.2.2; the idea is that incentive mechanisms leading to favourable outcomes for all stakeholders will be necessary.

²⁰ http://www.seserv.org/athens-ws-1/webcasts#bornstaedt
²¹ Myerson and Satterthwaite showed that no incentive-compatible mechanism (one in which an agent’s dominant strategy is to report its private information truthfully) can simultaneously achieve perfect efficiency (maximizing total agent value), budget balance (no external subsidies are needed), and individual rationality (no agent pays more than its valuation for the goods it receives) [36].
²² http://www.cidr-report.org/as2.0/
²³ Edge ISPs have an incentive not to conform to the CIDR (Classless Inter-Domain Routing) standard, announcing disaggregated BGP routes in order to perform traffic engineering. This behavior increases the size of the routing tables and results in increased costs for the Transit ISPs, who must upgrade their routers.
²⁴ North American Network Operators’ Group.

Recommendation 5: Technology makers should increase transparency
Based on the consolidated results²⁵ of the detailed tussle analysis performed for 8 research projects, we conclude that the Network Security functionality creates many spillovers to other networking functionalities, especially the Transmission one. The reason traces back to the original design goals of the ARPAnet, the precursor of the Internet. These are:
1. Interconnection of Existing Networks: develop an effective technique for the multiplexed utilization of existing interconnected networks, rather than imposing a new unified global network that could become obsolete later.
2. Survivability: the communicating entities should be able to continue without having to re-establish or reset the high-level state of their conversation.
3. Support for Multiple Communication Service Types: the Internet should support, at the transport layer, a variety of applications distinguished by differing requirements for such things as bandwidth, latency and reliability.
4. Support for a Variety of Physical Networks: incorporate and utilize a wide variety of network technologies.
5. Distributed Management: Internet resources should be able to be operated and managed by distributed stakeholders.
6. Cost Effectiveness: desire for efficient use of network resources.
7. Simple Host Attachment: desire for keeping the complexity of host protocol stacks low, so that deployment is not hindered.
8. Resource Accountability: desire for monitoring the usage of network resources.
As Clark [38] mentions:
“This set of goals might seem to be nothing more than a checklist of all the desirable
network features. It is important to understand that these goals are in order of importance,
and an entirely different network architecture would result if the order were changed. For
example, since this network was designed to operate in a military context, which implied
the possibility of a hostile environment, survivability was put as a first goal, and
accountability as a last goal. [...] While the architects of the Internet were mindful of
accountability, the problem received very little attention during the early stages of the
design, and is only now being considered. An architecture primarily for commercial
deployment would clearly place these goals at the opposite end of the list.”
This gives significant evidence that, in order for the Future Internet to successfully provide advanced services, technologies that bring more transparency and trust are necessary. Thus, technology makers (such as research projects) should be given incentives to deal with security issues (in the broader sense) in a more systematic way, without making hard assumptions about the stakeholders’ behavior or creating spillovers to other functionalities. Focusing on the actual user requirements and providing enough flexibility will increase the adoption chances of a particular technology.
A nice example of a promising technology for making senders accountable for the
congestion they cause is the Congestion Exposure (ConEx) protocol26. It requires a sender
to inform the network about the congestion that each packet is expected to cause;
otherwise the packet will be dropped with high probability before reaching its destination.

25 Figure 11 and Table 11 show that ‘Security’ functionality causes a high number of spillovers to other functionalities.
26 http://datatracker.ietf.org/wg/conex/
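The congestion-exposure idea can be illustrated with a toy auditor. The sketch below is a simplification under assumed semantics (the class name, credit bookkeeping and deterministic drop rule are invented for illustration, not taken from the ConEx drafts):

```python
# Hypothetical sketch of ConEx-style auditing: a sender declares the
# congestion each packet is expected to cause; an auditor compares the
# declared credit against congestion actually observed and stops
# forwarding packets from senders whose declarations fall short.

class ConexAuditor:
    def __init__(self):
        self.declared = {}   # sender -> congestion credit declared so far
        self.observed = {}   # sender -> congestion marks actually seen

    def on_packet(self, sender, declared_credit, was_congestion_marked):
        """Returns True if the packet may be forwarded."""
        self.declared[sender] = self.declared.get(sender, 0) + declared_credit
        if was_congestion_marked:
            self.observed[sender] = self.observed.get(sender, 0) + 1
        # Drop (with high probability in the real protocol; here
        # deterministically) when a sender has caused more congestion
        # than it declared.
        return self.declared[sender] >= self.observed.get(sender, 0)

auditor = ConexAuditor()
assert auditor.on_packet("A", declared_credit=1, was_congestion_marked=True)
assert not auditor.on_packet("A", declared_credit=0, was_congestion_marked=True)
```

The point of the sketch is only the incentive structure: honest declarations keep traffic flowing, while under-declaring congestion leads to drops.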
Furthermore, the reputation systems that have been previously described are security
mechanisms that provide feedback loops for all participants and balance trust and
trustworthiness. Similarly, monitoring mechanisms can help in verifying that an entity has
considered the suggestions of the other party or parties participating in the session,
following the ‘design for choice’ principle. Thus, assuming that enough competition exists,
monitoring can also act as an incentive mechanism for providers to follow the “Exposure of
list of choices” recommendation (which is part of the “Design for Choice” design principle).
Recommendation 6: Policy makers should encourage knowledge exchange and
joint commitments
Confirming the importance of engaging all stakeholders in the process of designing
Future Internet architectures, as was stated in Dewandre’s keynote speech at the Oxford
SESERV Workshop27, policy makers should encourage the creation of multi-disciplinary
teams. Technologists collaborating with social researchers, economists and policy experts
can exchange valuable information, resulting in technologies with greater chances of
succeeding. A recent success story is the collaboration of a biologist with a computer
scientist, which revealed that the behavior of harvester ants as they search for food mirrors
the TCP protocol that controls traffic on the Internet28. Quoting Prof. Prabhakar:
"Ants have discovered an algorithm that we know well, and they've been doing it for
millions of years […] had this discovery been made in the 1970s, before TCP was written,
harvester ants very well could have influenced the design of the Internet."
Given that the absence of a common vocabulary increases the time it takes for the
participants to achieve a common understanding of the main challenges and agree on
possible countermeasures, participants should be patient and open-minded.
Another important aspect is the exploitation of synergies amongst research projects, as
well as between projects and the industry. Given that the Internet is a complex ecosystem
of technologies and socio-economic interests of various stakeholders, a more systematic
approach is needed in order to have a positive effect on the Future Internet architecture.
The main reason is that there are limited synergies; new technologies are proposed and
evaluated, but these rarely become standardised or extended by other initiatives. We
welcome further extensions to technologies that have been proposed by finished projects,
as in the case of OPTIMIS, which further developed WSAG4J, a framework
for managing SLAs that came from the SmartLM project29. Furthermore, there are usually
limited interfaces to, or documentation on, complementary technologies. A
standalone technology is less likely to deal with stakeholders’ interests successfully,
making it less attractive to those that could adopt it. For example, monitoring technologies
are considered important for providing QoS-aware network services across ISPs.
Another important aspect of collaborations is joint commitment to the success of a
partnership. The need for several technologies in order to achieve the necessary
functionality means that more than one stakeholder will have to move forward at the same
time (each one adopting a subset of those technologies). Large Integrated Projects and
recent Public Private Partnership (PPP) initiatives are promising directions towards the
goal of positive impact on the Internet.

27 http://www.seserv.org/panel/conferences-webcasts#dewandre
28 Stanford biologist and computer scientist discover the 'anternet', available online at http://engineering.stanford.edu/news/stanford-biologist-computer-scientist-discover-anternet
29 SmartLM - Grid-friendly software licensing for location independent application execution, http://www.smartlm.eu/
5 Survey of Technologies by Challenge 1 Research
Projects
In this section we present a survey of technologies proposed by a carefully selected set of
11 Challenge 1 research projects and categorize them using the taxonomy of Internet
functionalities described in Sections 3.1 and 3.2; this work was carried out entirely in the
2nd year of SESERV. A state of the art of the different approaches followed can be useful to
the broader research community for identifying new research avenues when dealing with
existing or future technology bottlenecks. It can also be used for examining whether the
research community already focuses on the key functionalities identified in Section 4.4
(namely Recommendation 5), or whether more guidance is necessary.
The Challenge 1 projects that have been selected for the survey of technologies cover
several thematic areas and include the following (in alphabetical order):
• BONFIRE: Building service test beds on FIRE.
• C2POWER: Cognitive Radio and Cooperative Strategies for POWER saving in
multi-standard wireless devices.
• ENVISION: Enriched Network-aware Video Services over Internet Overlay
Networks.
• ETICS: Economics and technologies for inter-carrier services.
• GEYSERS: Generalised Architecture for Dynamic Infrastructure Services.
• MEDIEVAL: MultimEDia transport for mobIlE Video AppLications.
• OPTIMIS: Optimized Infrastructure Services.
• PURSUIT: Publish Subscribe Internet Technology.
• SAIL: Scalable and Adaptive Internet solutions.
• ULOOP: User-centric Wireless Local Loop Project.
• UNIVERSELF: Realizing autonomics for Future Networks.
The following tables (Table 14 through Table 19) provide a short description of
technologies proposed by 9 selected research projects in the area of future networks.
Each of those tables is dedicated to one of the networking functionalities.
Table 14: Survey of Technologies Related to Transmission Functionality as Proposed by
Selected Network Research Projects
Functionality: Transmission
ETICS The ETICS IC routing protocol and the ETICS SLA offer protocol for automating the
discovery of ASQ (Assured Service Quality) goods towards a destination. The ETICS IC
signalling protocol for establishing premium, end-to-end connectivity services.
SAIL Mechanisms for multi-path transmission are proposed and converged access networks
are considered.
PURSUIT Topology Managers can perform source routing using the LIPSIN protocol. Forwarding
Nodes and Relay Nodes are responsible for forwarding packets.
MEDIEVAL The Wireless Access subsystem addresses the transmission of video under the
heterogeneous wireless technologies
ENVISION A multicast controller explores the possibility for multicast services in order to enhance
the delivery of adaptable content to multiple users.
ULOOP Automatic discovery and selection of devices that are willing to relay data.
C2POWER Automatic discovery and selection of devices that are willing to relay data with the
overall goal of saving energy.
UNIVERSELF Several mechanisms for dynamically adjusting the coverage of several types of Base
Stations (namely 3G, 4G and femtocells) so that gaps in the intended area are
reduced and at the same time load balancing can be achieved. Furthermore,
UNIVERSELF suggests a technology for identifying Base Station outages and
assisting an ISP in taking reactive measures.
GEYSERS The GEYSERS project suggests a middleware called LICL, which is responsible for
the creation, maintenance and advertisement of optical virtual resources, as well as
their combination into virtual infrastructures.


Table 15: Survey of Technologies Related to Traffic Control Functionality as Proposed by
Selected Network Research Projects
Functionality: Traffic control
ETICS -
GEYSERS -
UNIVERSELF UNIVERSELF has proposed a mechanism for allocating the available wireless
resources to users based on each user’s class, requested bit rate and channel state of
each subcarrier.
SAIL -
PURSUIT TCP-friendly protocols are under investigation.
MEDIEVAL -
ENVISION -
ULOOP -
C2POWER -

Table 16: Survey of Technologies Related to QoS Functionality as Proposed by Selected
Network Research Projects
Functionality: QoS
ETICS PCN (Pre-Congestion Notification) allows routers to mark user packets before links
become congested and to provide this information to edge routers (ingress/egress) for
performing QoS functions such as flow admission control or even flow termination.
GEYSERS The GEYSERS project suggests the NCP+ layer, which selects an end-to-end path
connecting two particular end-points, or finds the IT resources and the associated
network path that meet the service requirements in an optimal way.
UNIVERSELF UNIVERSELF proposes an energy-aware Traffic Engineering mechanism for QoS,
which splits traffic among the available paths at the backhaul and/or core network of
an ISP. Furthermore, it proposes a load balancing scheme allowing end users
connected to the Internet through a congested base station to hand over to a less
utilized gateway (e.g., WiFi, 3G, 4G).
SAIL SAIL employs efficient management and control of content, cache placement (in
hierarchical structure), and replacement algorithms to provide QoS [37]
PURSUIT Topology Managers select paths based on QoS metrics (available using an OSPF-like
protocol)
MEDIEVAL NEGOCODE (Network Guided Optimization of Content Delivery) part of the CDN
component offers on-line network guided selection of content locations from which to
download video.
XLO (Cross-Layer Optimization module) aims at computing the traffic engineering
technique so as to solve an optimization problem. QoE requirements such as video
sensitivity and other objective parameters such as packet loss and video rate are
communicated to the XLO to compute new engineering solution.
CDNNC (CDN Node Control) part of the CDN component is responsible for load
balancing within a group of CDN nodes. TE (Traffic Engineering module) executes
engineering techniques to handle problematic traffic flows.
ENVISION Map Service allows a client to retrieve the network map from a CINA (Collaboration
Interface between Networks and Applications) server illustrating the network from the
CINA server viewpoint. Additionally, a cost map can be retrieved by the client depicting
routing costs between the defined network regions. Various network metrics can be
specified by the client as cost from the server.
On the other hand, a FootPrint Map, including information by clients, can be delivered
by the Application to the Network to illustrate the distribution of the Application end-
points over a Network Map.
ULOOP Client-Based QoS Path Selection.
C2POWER Negotiations also include QoS parameter to allow selecting a relay node that can
provide the designated QoS.

Table 17: Survey of Technologies Related to Mobility Functionality as Proposed by
Selected Network Research Projects
Functionality: Mobility
ETICS -
GEYSERS -
UNIVERSELF UNIVERSELF develops prediction models of user mobility for accurately predicting
future loads and triggering load balancing in time.
SAIL Mobility is natively and seamlessly supported in the context of the Global Information
Network (GIN), since content items can be cached in the new user’s cell.
PURSUIT Mobility is seamlessly supported based on proxies that handle subscriptions on behalf
of Mobile Nodes (MNs) and buffer data that correspond to matched subscriptions
when an MN moves to another proxy.
MEDIEVAL Mobility subsystem coordinates with the CDN component to achieve handover
optimization and optimal cell selection.
ENVISION -
ULOOP Several mechanisms to allow for high mobility (such as handover to other users/base
stations).
C2POWER C2POWER clusters are built by considering mobility aspects of nodes (stationary
nodes are preferred to highly mobile ones) and mechanisms for energy efficient
handovers are proposed.

Table 18: Survey of Technologies Related to Security Functionality as Proposed by
Selected Network Research Projects
Functionality: Security
ETICS 4 candidate ETICS network monitoring schemes for collecting the necessary
information when services are being delivered
GEYSERS LICL supports the dynamic and consistent monitoring of the physical layer and the
association of the right security and access control policies. NCP+ offers monitoring of
performance at the end-to-end path, which is currently limited to a single domain only.
UNIVERSELF -
SAIL The Content Access Management component performs AAA; the NRS component is
responsible for naming security, which means providing cryptographic strength binding
between an object name, and the form of the object returned by the ICN in response to
a request, by either use of Public Key Infrastructure (PKI) or DNS-Based
Authentication of Named Entities (DANE).
PURSUIT Rendezvous Nodes perform AAA, Topology Managers hide critical information using
local Bloom-filters
MEDIEVAL -
ENVISION CINA interface provides authentication of both CINA servers (to be trusted by the
overlay applications) and CINA clients (to be trusted for their information provided to
the network, to determine their access privileges, to share resources and provide
personalized services, to perform billing) employing PKI infrastructures, and/or the TLS
protocol.
A monitoring process (implemented in Python) is provided by the CINA interface,
employing the SNMP (Simple Network Management Protocol) protocol for information
in the equipment's MIB, and CLI commands for other information (e.g., neighbors, delay
between routers, etc.).
ULOOP Several AAA and cooperative mechanisms to increase security.
C2POWER -

Table 19: Survey of Technologies Related to Naming/Addressing Functionality as
Proposed by Selected Network Research Projects
Functionality: Naming/ Addressing
ETICS -
GEYSERS -
UNIVERSELF -
SAIL Named Information (NI) naming scheme for interpreting hashed names. Addressing is
performed by NRS, using either Multi-level Distributed Hash Table (MDHT) or a DNS-
based system.
PURSUIT Naming of information objects is based on pairs of RId and SId. Addressing is
performed by Rendezvous Nodes
MEDIEVAL The Video Service Provisioning function, part of the Video Services Control, is
responsible for storing metadata related to the content files and relating them to the actual
location where content is stored. Additionally, it is responsible for matching content to
other services.
ENVISION Naming of CINA servers is performed by U-NAPTR/DDDS (URI-Enabled Name
Authority PoinTeR/Dynamic Delegation Discovery Service) unique strings, in the form
of a DNS name. Clients need to use the U-NAPTR [24] specification to obtain a URI
for the applicable CINA service.
Addressing (U-NAPTR resolution) is performed through 3 different options: i) user
input, ii) a DHCP-like option (remote configuration of the client), iii) reverse DNS lookup
on the server's IP.
ULOOP -
C2POWER -

Table 20 provides a summary of the two cloud-related technologies that have been
surveyed (OPTIMIS and BonFIRE) grouped by the relevant cloud functionalities.
Table 20: Survey of Technologies Proposed by Selected Cloud Research Projects
Functionality: Virtualization
OPTIMIS: In OPTIMIS all the contemplated and use case IaaS providers have their own
virtualization solution and hardware. The project is working to ensure compatibility with all
major virtualization software. OPTIMIS uses OVF (Open Virtualization Format, an XML
format to describe services) on top of XEN, VMWARE and QCOW2 hypervisors; the latter
in turn works on top of XEN or KVM.
BonFIRE: The BonFIRE broker provides users the choice of six different providers.
Experimenters can select which site to run virtual machines from, and the same
experiment can run with VMs from multiple sites. BonFIRE offers Elasticity as a service,
which allows compute resources to be dynamically increased and decreased according to
rules.

Functionality: Execution
OPTIMIS: OPTIMIS develops interfaces allowing cloud users to specify and manage the
images as per a commercial cloud infrastructure.
BonFIRE: Each cloud site operates its own execution solution and services compliant with
BonFIRE’s OCCI specification. Infrastructure is offered over an Open Cloud Computing
Interface (OCCI). A user can access the OCCI interfaces programmatically, and there are
various language bindings to do this, currently the Ruby-based Restfully and a bespoke
JSON domain-specific language. Furthermore, BonFIRE allows images with multiple
configurations to exist.

Functionality: QoS
OPTIMIS: The OPTIMIS broker is set up for user self-service in order for users to access
third-party resources. OPTIMIS has developed its own programming model for managing
requests, although users can use alternatives. The OPTIMIS broker allows clients to
connect to and receive provisioning from multiple IaaS providers. This can be on a cloud-
bursting, multicloud or federated cloud basis. A central theme of OPTIMIS is to give the
user control over the choice of provider according to the TREC parameters. WSAG4J, a
standards-based Java framework, is used for negotiation and management of Service
Level Agreements.
BonFIRE: BonFIRE offers on-demand provisioning of resources for part of its permanent
capacity through APIs (Application Programming Interfaces), while additional large-scale
compute capacity is available on request. In particular, a web-based portal allows
definition of an experiment in terms of the computing, storage and networking resources
and the configuration between them. Each experiment is allocated resources either on a
best-effort or exclusive basis.

Functionality: Security
OPTIMIS: The OPTIMIS broker retains some monitoring over whether the OPTIMIS-
brokered SLA is respected by both parties, and this data feeds into the trust algorithms.
However, accounting is carried out between the IaaS provider and client using the
WSAG4J framework.
BonFIRE: No cost accountability is currently supported, although per-experiment cost is
being investigated. Deep monitoring of infrastructure and applications is possible by
configuring the Zabbix monitoring software.

The next subsections provide a short overview of the technologies examined by the 11
research projects surveyed. The goal is to describe the approach followed in each case
and to allow comparisons to be performed. Furthermore, these descriptions give examples
of technologies for each functionality, helping in understanding the taxonomy adopted.
5.1 ETICS
The major goal of the ETICS architecture is to automate the discovery and establishment
of Inter-carrier assured service quality connectivity services.
The ETICS IC routing protocol [10] and the ETICS SLA offer protocol [10] help in
automating the discovery of suitable Assured-Service Quality (ASQ) goods and thus
belong to the transmission30 functionality of the taxonomy used in this deliverable. The
former refers to the generic protocol rules and mechanisms by which topology information
is exchanged between NSPs. More specifically, this protocol helps in the
announcement/discovery of border nodes (local/remote), interconnection points
(local/remote) and interconnection service capabilities (bandwidth, etc.). The latter refers to
the protocol rules and mechanisms by which connectivity offers are exchanged in both
the pre-computed and the on-demand models.
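As a rough illustration of what such discovery enables, the sketch below (all names and fields hypothetical, not the actual ETICS message formats) filters disseminated ASQ offers against requested QoS bounds and picks the cheapest match:

```python
# Toy selection over disseminated ASQ offers: an NSP collects offers
# from neighbours, keeps those meeting the requested QoS, and picks
# the cheapest. All field names and values are invented for
# illustration only.

from dataclasses import dataclass

@dataclass
class AsqOffer:
    destination: str
    max_latency_ms: float
    bandwidth_mbps: float
    price: float

def select_offer(offers, destination, latency_ms, bandwidth_mbps):
    """Return the cheapest offer meeting the QoS bounds, or None."""
    candidates = [o for o in offers
                  if o.destination == destination
                  and o.max_latency_ms <= latency_ms
                  and o.bandwidth_mbps >= bandwidth_mbps]
    return min(candidates, key=lambda o: o.price) if candidates else None

offers = [AsqOffer("AS65001", 30.0, 100.0, 12.0),
          AsqOffer("AS65001", 80.0, 100.0, 5.0)]
best = select_offer(offers, "AS65001", latency_ms=50.0, bandwidth_mbps=50.0)
assert best.price == 12.0  # only the 30 ms offer meets the latency bound
```

The analogy to BGP is deliberate: instead of best-effort prefixes, the items being disseminated and ranked are QoS-assured goods with attached SLA details.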
The ETICS IC signalling protocol [10] is used for the establishment of inter-carrier assured
service quality connectivity services, which corresponds to the QoS functionality. This
protocol is responsible for the exchange of service requests between the different NSPs,
covering both phases of negotiation/ordering and activation (including the stitching of
multiple such services). Furthermore, ETICS contributes to the standardization of Pre-
Congestion Notification (PCN), a QoS protocol for enabling admission control at the
borders of an ISP’s domain. Routers mark user packets before links become congested
and provide this information to border routers (ingress/egress). The purpose is to protect
the quality-of-service of established inelastic flows within a DiffServ domain when
congestion already exists, or is about to happen.

30 These two protocols actually perform a functionality similar to BGP, the routing protocol of today’s Internet, but instead of disseminating Best-Effort routing prefixes to their neighbours, ISPs disseminate ASQ goods and associated SLA details (e.g., prices, duration). Thus, these protocols are part of the Transmission functionality.
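The admission-control behaviour described above can be sketched as follows; the threshold and function are illustrative assumptions, not values from the PCN standards:

```python
# Minimal sketch of PCN-style admission control: interior routers mark
# packets once a link approaches its configured rate, and the ingress
# stops admitting new flows when the marked fraction of traffic seen
# for a given egress exceeds a threshold. The 5% threshold is an
# invented example value.

def admit_new_flow(marked_packets, total_packets, threshold=0.05):
    """Ingress decision: admit only while pre-congestion marking is rare."""
    if total_packets == 0:
        return True          # no traffic observed yet, nothing pre-congested
    return marked_packets / total_packets < threshold

assert admit_new_flow(marked_packets=2, total_packets=100)       # 2% marked
assert not admit_new_flow(marked_packets=10, total_packets=100)  # 10% marked
```

Existing flows are untouched by this decision; only the admission of new flows is gated, which is what protects established inelastic flows.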
Several approaches to the ETICS network monitoring technology, which is part of the
Security functionality, are under investigation in [10]. Besides “OAM monitoring”, which is
based on existing OAM (Operations, Administration, and Maintenance) standards (e.g.,
Ethernet Loopback) that are suitable only for intra-domain settings, three other
technologies are under study: “centralised monitoring”, which assumes the presence of a
trusted operator, and two distributed schemes, namely “autonomous monitoring”, relying
on the coordinated sampling of packets to be monitored combined with the ability for
hierarchical access to these data, and “active flow based monitoring”, which is based on
active flow technology allowing network devices to be controlled programmatically.
5.2 GEYSERS
GEYSERS proposes an architecture capable of offering end-to-end service delivery by
combining and virtualizing Information Technology and network resources, which can be
owned by multiple Infrastructure Providers. These Infrastructure providers advertise their
owned physical resources, allowing brokers (called Virtual Infrastructure Providers) to
lease them and subsequently offer them as virtual resources. It should be noted that virtual
resources could be either aggregates, or partitions of physical network and IT resources.
The full set of available virtualized resources constitutes a virtual resource pool, which
allows Virtual Infrastructure Operators to find resources when a request from a Service
Consumer arrives (or to reserve resources for later usage).
GEYSERS follows a layered approach for achieving virtualisation and supporting SLAs for
IT and optical network resources, either independently or in conjunction. Central to the
GEYSERS architecture are the enhanced Network Control Plane (NCP), the Logical
Infrastructure Composition Layer (LICL) and the Service Middleware Layer (SML). Each of
the three layers is responsible for implementing multiple functionalities in the full end-to-
end service delivery, while the fourth one (Physical Infrastructure) is taken for granted.
LICL is mainly responsible for performing the transmission functionality. This involves the
creation, maintenance and advertisement of optical virtual resources, as well as their
combination into virtual infrastructures. These logical infrastructures form end-to-end paths
between end points by performing fiber or lambda switching.
NCP+ offers functionalities for the setup, modification and tear-down of QoS-aware
transport network services. It can select (and possibly reserve) an end-to-end path
connecting two particular end-points, or find the IT resources and the associated network
path that meet the service requirements provided by the SML layer in an optimal way. In
the latter case, this is achieved by running routing algorithms able to take into account
constraints on both the network (e.g., required bandwidth) and the IT resources (e.g.,
required computing power, memory or storage).
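A minimal sketch of such constrained path selection, assuming an invented topology and considering only the bandwidth constraint (the real NCP+ algorithms also weigh IT-resource constraints):

```python
# Toy constrained routing: prune links below the required bandwidth,
# then breadth-first search the remaining graph for a path. IT
# constraints would prune candidate endpoints the same way. Topology
# and requirements are invented for illustration.

from collections import deque

def find_path(links, src, dst, min_bw):
    """links: list of (node_a, node_b, bandwidth); returns a path or None."""
    adj = {}
    for a, b, bw in links:
        if bw >= min_bw:              # network constraint: enough bandwidth
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

links = [("A", "B", 10), ("B", "C", 1), ("B", "D", 10), ("D", "C", 10)]
assert find_path(links, "A", "C", min_bw=5) == ["A", "B", "D", "C"]
```

The thin B-C link is pruned, so the search detours via D; with joint network and IT constraints the same pruning idea applies to both resource types.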
Furthermore, security is supported at several layers. LICL supports the dynamic and
consistent monitoring of the physical layer and the association of the right security and
access control policies. This is important for guaranteeing isolation between the partitioned
virtual resources. Finally, NCP+ offers monitoring of performance at the end-to-end path,
which is currently limited to a single domain only.
5.3 UNIVERSELF
UNIVERSELF has proposed a transmission configuration mechanism for capacity and
radio resource optimization while maintaining continuous coverage service. The strategy is
to activate and deactivate Base Stations in a coordinated way. Each Base Station (BS)
periodically collects and provides to the Central Manager data related to the topology and
its operational status. Then, the Domain Manager proceeds to the identification of any
coverage optimization opportunities, e.g., (de)activation of low-utilised Base Stations
without deteriorating QoE of existing users.
Similarly, a distributed mechanism is proposed for dynamically adjusting the coverage of a
group of femtocells so that the intended area is served well (gaps are reduced and at the
same time load balancing is achieved). The algorithm works by configuring transmission
parameters (such as power level) after having collected statistics pertaining to load,
mobility events, and pilot power over a user-selected time-period. A similar mechanism is
proposed for 4G networks and more specifically for LTE eNodeB stations.
A mechanism is proposed for determining appropriate relay locations in multihop relay-
assisted cellular networks, which are widely accepted to provide more uniform data rates
to users who are scattered over a cell, and to save the transmit power of mobile
terminals in the uplink. Specifically, the aim is to achieve a good trade-off between
transmission reliability (strong signal) and traffic demand requirements (relay link
capacity).
Furthermore, the UNIVERSELF project suggests a technology for helping an ISP to react
in case of sudden Base Station outages. A central manager collects information from Base
stations and user terminals in order to select which of the neighbouring base stations
should be advised to increase the transmission power (by generating as little additional
interference as possible).
UNIVERSELF has proposed a mechanism for allocating the available wireless resources
to users. The proposed traffic control mechanism allows ISPs to define a set of traffic
classes, reserve bandwidth for each class based on its policy, and then allocate the
bandwidth based on each user’s class, requested bit rate and the channel state of each
subcarrier. The idea is that users asking for the same bandwidth can be allocated different
amounts of resources, based on the signal quality.
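A toy version of such class- and channel-aware allocation might look as follows; the weighting scheme is an illustrative assumption, not the UNIVERSELF algorithm:

```python
# Illustrative sketch: allocate bandwidth in proportion to each user's
# class weight and channel quality, so users requesting the same bit
# rate may receive different amounts of resources depending on signal
# conditions. Weights and values are invented.

def allocate(capacity, users):
    """users: list of (name, class_weight, channel_quality) tuples."""
    weights = {name: w * q for name, w, q in users}
    total = sum(weights.values())
    return {name: capacity * w / total for name, w in weights.items()}

# A "gold" user with double the class weight gets double the share
# under identical channel conditions.
shares = allocate(90.0, [("gold", 2.0, 1.0), ("silver", 1.0, 1.0)])
assert shares["gold"] == 60.0 and shares["silver"] == 30.0
```

In the project's mechanism the per-class reservations and per-subcarrier channel states make this considerably richer; the sketch only shows why equal requests need not yield equal allocations.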
UNIVERSELF proposes an energy-aware Traffic Engineering mechanism for QoS, which
splits traffic among the available paths at the backhaul and/or core network of an ISP [41].
The aim is, for each pair of ingress and egress core routers, to route the demanded traffic
across the available paths by identifying the optimal set of deactivated links. Special care
is taken so that the utilisation of active links is kept balanced, thus avoiding instabilities
and unnecessary energy consumption. More specifically, the TE problem is formulated as
a multi-commodity flow problem, which is NP-complete, in order to find near-optimal flow
patterns for a given set of requests, considering a network topology. Bandwidth requests
are characterized as feasible or infeasible with regard to capacity constraints along
network links. Infeasible requests are rejected, while the algorithm attempts to minimize
overall network congestion and maximize the potential for traffic growth.
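The feasibility check and rejection of infeasible requests can be sketched as a greedy toy (the actual mechanism solves a multi-commodity flow optimization; this only illustrates the capacity test):

```python
# Toy feasibility test: a request is feasible only if every link on
# its path retains enough residual capacity; infeasible requests are
# rejected and the rest are provisioned greedily. Topology, paths and
# demands are invented for illustration.

def provision(capacity, requests):
    """capacity: {(a, b): Mbps}; requests: list of (path, demand)."""
    accepted = []
    for path, demand in requests:
        links = list(zip(path, path[1:]))
        if all(capacity[l] >= demand for l in links):   # feasible?
            for l in links:
                capacity[l] -= demand                   # reserve capacity
            accepted.append((path, demand))
        # infeasible requests are simply rejected
    return accepted

cap = {("A", "B"): 10, ("B", "C"): 10}
ok = provision(cap, [(["A", "B", "C"], 8), (["A", "B", "C"], 8)])
assert len(ok) == 1 and cap[("A", "B")] == 2  # second request infeasible
```

A real multi-commodity flow solver would instead split demands across paths and optimize globally; the greedy version only shows where the feasible/infeasible boundary comes from.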
UNIVERSELF proposes a load balancing scheme allowing the network and applications to
cooperate. The QoS mechanism currently focuses on the network level, but content
migration will also be explored at later stages of the project. The idea is that end users
connected to the Internet through a congested base station would hand over to a less
utilized gateway (e.g., WiFi, 3G, 4G).
UNIVERSELF develops prediction models of user mobility for accurately predicting future
loads and triggering load balancing in time.
5.4 SAIL
SAIL aims to integrate state of the art technologies, such as multi-path transmission,
converged access networks, cloud computing and information-centric networking for
moving towards the Future Internet. In this section we will focus on one particular aspect of
SAIL, called Network of Information (NetInf), which aims to improve application support via
an information-centric paradigm.
SAIL employs efficient management and control of content, cache placement (in
hierarchical structure), and replacement algorithms to provide QoS [37].
SAIL considers two types of NRS (Name Resolution Service), one DNS-based and one
based on Distributed Hash Tables (DHTs) for performing the Name resolution. The DNS-
based scheme could make use of DNS SRV records to allow a requesting node to find a
service that can further resolve the name or return the object. Furthermore, it has
proposed Multi-level Distributed Hash Table (MDHT) [12], a distributed Name Resolution
Service for ICN (Information-Centric Networking) architectures that resolves an information
ID (name) into a set of locators (network addresses) where the particular information
object can be found. The ID-locator bindings as well as any related metadata are stored in
multiple, interconnected DHT systems, which are arranged in a nested, hierarchical way,
building a DHT tree structure. The NRS is equivalent to PURSUIT’s Rendezvous Network
(RENE), but a significant difference is that requests are either served instantly or discarded
if they cannot be matched to a publication (while missed requests in PURSUIT can be
handled later, when an information object has become available).
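The hierarchical lookup idea can be sketched as follows, with DHT levels modelled as plain dictionaries (an illustrative simplification of MDHT):

```python
# Simplified sketch of multi-level name resolution in the spirit of
# MDHT (levels and API invented): a lookup tries the lowest-level DHT
# first and escalates through the hierarchy, so popular local content
# resolves without touching the global level.

def resolve(levels, name):
    """levels: list of dicts ordered local -> global; returns locators."""
    for level in levels:
        if name in level:
            return level[name]
    return None   # unmatched request is simply dropped (unlike PURSUIT's RENE)

local = {"ni://ex/obj1": ["10.0.0.5"]}
global_ = {"ni://ex/obj1": ["10.0.0.5"], "ni://ex/obj2": ["198.51.100.7"]}
assert resolve([local, global_], "ni://ex/obj1") == ["10.0.0.5"]
assert resolve([local, global_], "ni://ex/obj2") == ["198.51.100.7"]
assert resolve([local, global_], "ni://ex/obj3") is None
```

Each "level" would in reality be a distributed hash table spanning many nodes; the escalation order is what gives the nested, tree-like structure described above.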
The Named Information (NI) [11] is a naming scheme, based on hash-based URIs, for
identifying resources in a location-independent way, as is the case in information-centric
networking. It suggests a way to interpret those hash strings so that entities other than the
creator of that URI can use the hash function output, as well. For example, comparing a
presented resource against the known URI is considered important in information-centric
networking.
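The core property, that any node can check a presented object against its name, can be sketched with a hash-based name; the URI layout below is a simplification for illustration, not the exact NI syntax:

```python
# Sketch of name-data integrity checking in the spirit of the NI
# scheme: the name carries a hash of the object, so any node (not just
# the creator) can verify a presented resource against its name.

import base64
import hashlib

def ni_name(content: bytes) -> str:
    """Derive a simplified hash-based name for a content object."""
    digest = hashlib.sha256(content).digest()
    return "ni:///sha-256;" + base64.urlsafe_b64encode(digest).decode().rstrip("=")

def verify(name: str, content: bytes) -> bool:
    """Check a presented resource against the known name."""
    return ni_name(content) == name

name = ni_name(b"hello")
assert verify(name, b"hello")
assert not verify(name, b"tampered")
```

This is exactly the location-independence argument: the name commits to the content itself, so it stays valid no matter which cache or host serves the object.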
The Content Access Management component performs AAA; the NRS component is
responsible for naming security, which means providing cryptographic strength binding
between an object name, and the form of the object returned by the ICN in response to a
request, by either use of Public Key Infrastructure (PKI) or DNS-Based Authentication of
Named Entities (DANE).
5.5 PURSUIT
The PURSUIT project follows the paradigm of information-centric networking, where as a
clean slate approach for the Future Internet, nothing – not even IP – is taken for granted.
According to this new paradigm, the network becomes aware of the information being
transmitted.
Two types of identifiers are used in PURSUIT for naming information objects (called RIds) and attributes, or scopes, for those objects (called SIds). Scopes, each one being a set of
meta-data, are treated as information items themselves and can therefore be placed in a
scope itself, allowing for building complex (directed acyclic) graphs of information. Both of
them are “flat” (the name structure carries no information about other aspects, such as routing) and statistically unique within the scope in which they are placed. For example, different
pieces of information with the same RId can be published under different scopes.
Moreover, the same piece of information may be published under two different scopes [7].
Addressing in PURSUIT is achieved by associating a particular information object
(belonging to a specific scope) to the network identifier of a suitable publisher.
Rendezvous nodes perform the task of matching requests to published items and, if the
targeted item is not locally known, more than one may have to collaborate. The set of
those nodes is called Rendezvous Network, or RENE [7].
Routing (Transmission) in PURSUIT, i.e., finding a network path between a pair of publishers and subscribers (in the simplest case), is performed by Topology Managers. If
multiple subscribers request a specific information object, a multicast tree will be created in
order to deliver the publication. Such entities collect network information (new forwarding
nodes, new links, current conditions, etc.) by receiving Link State Announcement (LSA)
messages of traditional protocols for topology discovery, like OSPF. These routing
decisions indicate the exact path to be used by following the LIPSIN protocol [9]. This
solution uses Bloom Filters for encoding the delivery tree into a constant-length forwarding
identifier. Furthermore, the Bloom filter-based Relay Architecture (BRA) suggests that each domain’s Topology Manager can use nested Bloom filters (used only locally) in order to remain flexible and hide sensitive information [7].
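The in-packet Bloom filter idea can be made concrete with a small sketch. The filter length, the number of bits per link ID, and the topology are illustrative assumptions, not LIPSIN's actual parameters.

```python
import random

FILTER_BITS = 256   # length of the in-packet forwarding identifier (assumed)

def make_link_id(bits_set=5, rng=random):
    """A link ID is a sparse bit mask with a few randomly chosen bits set."""
    lid = 0
    for pos in rng.sample(range(FILTER_BITS), bits_set):
        lid |= 1 << pos
    return lid

def encode_tree(link_ids):
    """The delivery tree is encoded by OR-ing the IDs of all its links."""
    fid = 0
    for lid in link_ids:
        fid |= lid
    return fid

def should_forward(fid, link_id):
    """Forward on a link iff its ID is fully contained in the packet filter.
    Bloom filters admit false positives, so occasional extra copies occur."""
    return fid & link_id == link_id
```

With this encoding the forwarding identifier stays constant-length regardless of tree size, at the cost of a small false-positive rate that grows with the number of encoded links.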
Forwarding (Transmission) can be performed by two entities, either the Forwarding
Nodes or the Relay Nodes. The former simply follow the instructions provided by Topology
Managers, while the latter have the capability to change the forwarding information in the
data packet’s header in order to comply with the “local” Bloom filter [7].
QoS is under the responsibility of Topology Managers, e.g., when multiple paths are
available the selection is based on QoS metrics [7].
Security features of PURSUIT architectures, such as authentication, authorization and
accounting are performed by the Rendezvous Network. In addition, Topology Managers
are responsible for hiding critical network information by using the Bloom filter-based
Relay Architecture [7].
Traffic control is under investigation by using TCP-friendly protocols [7].
Mobility is seamlessly supported based on proxies that handle subscriptions on behalf of
Mobile Nodes (MNs) and buffer data that correspond to matched subscriptions when an
MN moves to another proxy. Two mobility approaches are in accordance with the PURSUIT architecture: a proactive one and a reactive one.
5.6 MEDIEVAL
MEDIEVAL (MultiMEDia transport for mobIlE Video AppLications) aims at designing a
video transport architecture suitable for commercial deployment by mobile network
operators. The key objectives of the project include the following:
• Support for network mechanisms optimally customized to the specific needs of
video services. This is provided by means of the specification of an interface
between video services and underlying network mechanisms that allows video
services to optimally customize the network behaviour, thereby improving user
experience.
• Enhanced wireless access to optimize video performance by exploiting the features
of each available wireless technology in coordination with the video service
requirements.
• Design of a novel dynamic mobility architecture for next generation mobile
networks, adequate to video traffic.
• Optimization of the video delivery by means of Quality of Experience driven network
mechanisms, including Content Delivery Networks techniques adapted for the
mobile environment.
• Support for broadcast and multicast video services, including Video on Demand and
Personal Broadcasting, by introducing multicast mechanisms at different layers of
the protocol stack.
The MEDIEVAL architecture defines four subsystems, namely: the Wireless Access
Subsystem [19]; the Mobility Subsystem [20]; the Transport Optimization Subsystem [21];
and the Video Services Subsystem [18].
The Wireless Access subsystem addresses the transmission under the heterogeneous
wireless technologies and provides to the upper layers a technology independent, video-
aware set of functions to steer IP connectivity.
The Mobility subsystem coordinates with the CDN component of the Transport
Optimization subsystem [21] to achieve handover optimization and optimal cell selection.
It provides IP mobility procedures taking into account distributed and dynamic mobility
management, IP flow mobility and traffic offload, aiming to cope with the requirements
imposed by the sharp video traffic increase.
The Transport Optimization Subsystem achieves QoS by providing the measurements
tools and the necessary intelligence to perform video traffic engineering across the
wireless, access and core network of the mobile operator. Furthermore, the XLO (Cross-
Layer Optimization module) component [21] aims at computing traffic engineering metrics
so as to solve an optimization problem. QoE requirements such as video sensitivity, together with objective parameters such as packet loss and video rate, are communicated by the QoE-driven optimization (a subcomponent of the XLO) to the traffic engineering (TE) module to compute a new engineering solution. The subsystem also includes CDN management optimized for the mobile
environment. In particular, the NEGOCODE (Network Guided Optimization of Content
Delivery) component [21] offers on-line network guided selection of content locations from
which to download video. CDNNC (CDN Node Control) is another subcomponent of the CDN component (of the Transport Optimization subsystem [21]) and is responsible for load balancing within a group of CDN nodes.
Finally the Video Services Subsystem leverages the novel interface with the network
elements to derive the necessary information to best adapt service delivery to the end
user. More specifically, the Video Service Provisioning function [18] is responsible for storing metadata related to the content files, relating them to the actual location where the content is stored. Additionally, it is responsible for matching content to other services (addressing).
5.7 ENVISION
ENVISION (Co-optimization of overlay applications and underlying networks)
proposes a cross-layer approach, where the problem of supporting demanding services is
solved cooperatively by service providers, ISPs, users and the applications themselves, in
order to deliver both content-aware networks and network-aware applications.
The ENVISION cross-layer approach is built upon three pillars:
• Intelligent overlay applications are optimized for true end-to-end performance at a
global scale according to the actual capabilities of multiple underlying ISPs.
• Network resources are dynamically mobilized to where they are most needed.
• The access to the content, and the way it is distributed, are adapted on-the-fly to what the network is able to deliver.
The main component of ENVISION's architecture is CINA (Cooperation Interface between
Networks and Applications) [22]. CINA creates a new opportunity for over-the-top
application providers to request special services from network providers. This can
generate new revenue streams for network providers from content distribution and allow
for joint collaboration in the optimization of the network and application logic.
The SIS (Server Information Service) or CINA server [23] offers information to the client
about the services the server supports, as well as network cost types (e.g. routing cost,
latency, etc.). A Multicast controller [23] is integrated in the CINA server to provide
multicast transmission services to enhance the delivery of adaptable content to multiple
users.
A Map Service [23] integrated in the CINA interface allows a client to retrieve the network
map from a CINA server illustrating the network from the CINA server viewpoint.
Additionally, a cost map can be retrieved by the client depicting routing costs between the
defined network regions. The client can specify various network metrics as the cost measure obtained from the server for making QoS-related decisions. On the other hand, a FootPrint Map [23], including information provided by clients, can be delivered by the application to the network to illustrate the distribution of the application end-points over a network map.
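As a hedged illustration of how an application might consume such a cost map, the sketch below picks the cheapest candidate region from which to serve a client; the region identifiers and cost values are invented for the example.

```python
# A CINA/ALTO-style cost map: routing cost from one network region to another.
# All region names ("pid1", ...) and cost values are illustrative assumptions.
COST_MAP = {
    ("pid1", "pid2"): 5,
    ("pid1", "pid3"): 12,
    ("pid3", "pid2"): 3,
}

def cheapest_source(client_region, candidate_regions, cost_map):
    """Choose the candidate region with the lowest cost towards the client."""
    return min(candidate_regions,
               key=lambda src: cost_map.get((src, client_region), float("inf")))

print(cheapest_source("pid2", ["pid1", "pid3"], COST_MAP))  # → pid3
```

The point of exposing the map through the interface, rather than raw topology, is that the network can aggregate regions and costs at whatever granularity it is willing to reveal.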
The CINA interface provides authentication of both CINA servers (to be trusted by the
overlay applications) and CINA clients (to be trusted for their information provided to the
network, to determine their access privileges, to share resources and provide personalized
services, to perform billing). This security-related functionality is offered by employing PKI
[25] infrastructures, and/or the TLS protocol [26]. The Monitoring process [23]
(implemented in Python) is provided by the CINA interface employing the SNMP protocol
for information in the equipment's MIB, and CLI commands for other information (e.g., neighbors, delay between routers, etc.).
Naming of CINA servers is performed by U-NAPTR/DDDS (URI-Enabled Name Authority
PoinTeR/Dynamic Delegation Discovery Service) [24] unique strings, in the form of a DNS
name. Clients need to use the U-NAPTR specification to obtain a URI for the applicable
CINA service. Addressing (U-NAPTR resolution) is performed through three different options: i) user input, ii) a DHCP-like option (remote configuration of the client), and iii) reverse DNS lookup on the server's IP address.
5.8 C2POWER
C2POWER aims to increase energy efficiency in wireless ad-hoc networks by relaying
data via a path of low-power hops than via one long-haul transmission. In order to initiate
collaboration, relays send bids to “data sources” of how much compensation they want to
forward the source's data. This compensation is calculated based on the energy a relay has to invest to forward the data. Each source estimates the amount of energy it would cost to send the data directly to an access point and then decides whether relaying is worthwhile, taking into account the best (cheapest) bid. The relay selection process may be repeated several times: a relay whose bid was rejected may improve its bid, while a relay whose bid was accepted may raise its price. In this way, offers converge.
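The source's decision rule just described can be sketched as follows; the cost model, units and values are illustrative assumptions rather than C2POWER's actual protocol.

```python
def choose_relay(direct_energy_cost, bids):
    """Pick the cheapest relay bid, or None if sending directly is cheaper.

    bids: dict mapping relay id -> compensation asked for forwarding,
    expressed in the same (assumed) energy-equivalent units as the
    source's direct transmission cost.
    """
    if not bids:
        return None
    relay, price = min(bids.items(), key=lambda kv: kv[1])
    return relay if price < direct_energy_cost else None

print(choose_relay(10.0, {"r1": 7.0, "r2": 12.0}))  # → r1
print(choose_relay(5.0, {"r1": 7.0}))               # → None
```

Rerunning this selection after rejected relays lower their asks (and the accepted relay raises its price) is what drives the convergence of offers mentioned above.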
Transmission is realized beginning with an auction of forwarding services. As mentioned above, each node that has to send data selects the relay that offers the best bid. In this way a forwarding topology is generated along which data flows. It is important to mention that C2POWER also develops energy-efficient protocols and routing schemes for cooperative networks, so that not only the "greedy" next best hop may be chosen, but also data paths that are even more efficient with respect to overall energy consumption. It is important to stress that C2POWER is also researching mechanisms to enable transparent relaying, i.e., when the data of one user is relayed by another, it looks to the access point as if this data were transmitted directly by the former. However, at the time this deliverable is released, C2POWER's progress on such mechanisms is still at too early a stage to allow for a reasonable inclusion in this survey.
Traffic control in C2POWER is inherent to the auctioning of relaying efforts. A node will only send bids that, if accepted, will generate a profit for it. Similarly, a node will only accept a bid if it will generate a profit for it. Since congestion results in dropped packets and therefore ultimately decreases the overall energy efficiency of the network, and thereby overall profit, these mechanisms tend to avoid congestion, although it might still occur if a node's assumptions are wrong. Nonetheless, nodes are able to learn from such wrong assumptions and adjust their bids, or be more careful in the selection of relays, respectively. Therefore, traffic control is achieved alongside the common goal of energy savings.
QoS functionality is implemented by C2POWER through a QoS parameter in the bids that are sent, i.e., not only the price of forwarding activities is under negotiation but the corresponding QoS as well. QoS is measured in terms of potential bit errors and throughput. It is important to note that the project considers not only the potential change in QoS for the source when relaying data instead of sending it directly, but also the degradation in QoS that may be caused to the relaying node when the sending of its own data is compromised by the need to relay data as well.
Mobility functionality is addressed by the formation of clusters. Although highly mobile nodes can also participate in C2POWER networks, less mobile nodes are generally preferred as communication partners. To form a cluster, nodes send beacons to neighboring nodes that may already belong to a cluster. To favour less mobile nodes, a newly joining node is only accepted into a cluster when a certain number (depending on the designated stability of the cluster) of its beacons is received consecutively, as this is an indicator that it is likely not moving frequently or fast. Furthermore, once a node has joined the cluster, its reachability is checked periodically by means of these beacons, so that it can be dropped from the cluster if necessary. Note that energy-efficient handovers are also a core research area of C2POWER.
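The beacon-based admission rule can be sketched as below; the threshold value and the bookkeeping are assumptions made for illustration.

```python
class Cluster:
    """Accept a node only after `threshold` consecutive beacons are heard,
    an indicator that the node is not moving frequently or fast."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.streak = {}            # candidate node -> consecutive beacons heard
        self.members = set()

    def beacon(self, node):
        self.streak[node] = self.streak.get(node, 0) + 1
        if self.streak[node] >= self.threshold:
            self.members.add(node)

    def beacon_missed(self, node):
        self.streak[node] = 0       # streak broken: node is likely mobile
        self.members.discard(node)  # periodic check failed: drop if member
```

A higher threshold yields a more stable cluster at the price of slower admission, which mirrors the "designated stability" trade-off described above.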
Security functionality is central to the C2POWER technology, as C2POWER tries to
achieve energy savings by collaboratively relaying data. Since the technology is to be
applied in ad-hoc networks, there is likely no trust relationship between the owners of
different network devices. Therefore, in order to prevent eavesdropping attacks, traffic has
to be encrypted either End-to-End or directly on the C2Power layer. For more details see
Appendix A.6.
Naming/Addressing is not particularly addressed by C2POWER. Although there have to be mechanisms to identify nodes in order to enable transitivity of services, i.e., to realize the necessary reputation systems and handle auctions, such mechanisms are not explicitly addressed by C2POWER.
5.9 ULOOP
The ULOOP project follows an evolutionary approach for the Future Internet, suggesting
that overlapping Wi-Fi access networks, operated mostly by end-users, could form a
“wireless local-loop” that complements or in some cases substitutes the ISPs’
infrastructure. The idea to allow communication between users in such ULOOP network
directly, instead of being routed through the backbone internet, is central. Further, by
relaying of data, coverage of an access point is extended. The probably most important
aspect of the functionality of ULOOP is the sharing of connectivity provided by an access
point, that is, users belonging to the same “ULOOP community” may use Internet
connectivity provided by any of these users.
Transmission is addressed in several forms by ULOOP. First, ULOOP allows for the relaying of data between users to expand network coverage. Second, ULOOP allows users to share their Internet access (with users in the same ULOOP community), thereby granting access even to users that would not have Internet access without the ULOOP technology. Third, ULOOP seeks to offer the possibility of easily connecting directly through a ULOOP network, without having to use the backbone.
QoS functionality on a per-user basis is foreseen to be realized by using AAA mechanisms. However, at the time this deliverable is written, no further information is available.
Mobility functionality is one of the key concerns of ULOOP, and several approaches are therefore deployed to increase user mobility. As mentioned above, the relaying of data is used to extend network coverage and thereby increase user mobility. The direct exchange of data envisaged by ULOOP will even enable data exchange between devices without any access point, that is to say, location-independently. To increase mobility even further, efficient handover techniques are also investigated by ULOOP.
Security functionality is central to the ULOOP technology, as ULOOP enables the relaying of data, which is always prone to eavesdropping attacks: since the technology is to be applied in ad-hoc networks, there is likely no trust relationship between the owners of different network devices. Security in ULOOP is enabled by AAA mechanisms.
Furthermore, collaborative monitoring is applied to (i) detect abnormal/malicious behaviors, (ii) propagate information about security holes, (iii) isolate both the attacker and the devices under attack (as these could unwillingly relay these attacks), (iv) trigger countermeasures, and (v) hand over end-users from broken network termination points.
Naming/Addressing is not particularly addressed by ULOOP. Although there have to be mechanisms to identify nodes in order to enable transitivity of services, i.e., to realize the necessary reputation systems and form ULOOP communities, such mechanisms are not explicitly addressed by ULOOP.
5.10 OPTIMIS
OPTIMIS aims at optimizing IaaS cloud services by producing an architectural framework
and a development toolkit. The optimization covers the full cloud service lifecycle (service
construction, cloud deployment and operation). OPTIMIS gives service providers the
capability to easily orchestrate cloud services customized for the unique needs of their
applications and make intelligent deployment decisions based on their preference
regarding trust, risk, eco-efficiency and cost (TREC), as well as data protection
requirements. It also gives service providers the choice of developing once and deploying
services across all types of cloud environments – private, hybrid, federated or multi-clouds.
In OPTIMIS all the contemplated and use-case IaaS providers have their own virtualization solution and hardware. The project is working to ensure compatibility with all major virtualization software. OPTIMIS uses OVF (an XML format for describing services) on top of Xen, VMware and QCOW2; the latter, a disk image format, in turn works on top of Xen or KVM. The OPTIMIS broker allows users to take advantage of these cloud benefits according to the capabilities of the providing IaaS and through brokering cloud-bursting, federated and multi-cloud scenarios.
OPTIMIS allows cloud users to specify and manage the images as per a commercial cloud
infrastructure. In order to do so, OPTIMIS develops interfaces to several IaaS execution
platforms, such as Emotive, OpenStack, OpenNebula or VSphere.
QoS (akin to the Routing / provisioning network functionality) in OPTIMIS is related to the
selection of the right IaaS provider for a given user. A central theme of OPTIMIS is to give
the user control over the choice of provider according to the TREC parameters. This can
be on a cloud-bursting, multicloud or federated cloud basis. It is enabled by the OPTIMIS
Broker, which in addition to decision making components, also includes the necessary
components to permit workloads to be channeled into one or more IaaS providers’
infrastructures. In particular, OPTIMIS has developed its own programming model for managing requests, although users can use alternatives. The OPTIMIS broker was designed to overcome inadequacies in the brokerage process which favoured the broker over the user (namely by incorporating TREC); however, tussle analysis shows this may hamper the company behind the OPTIMIS broker when managing its business relations. The solution to this lies in the pricing model and financial incentives. Finally, SLAs are
managed by WSAG4J (WS-Agreement for Java), a Standards-based Java framework for
negotiation and management of Service Level Agreements in distributed systems. It is an
implementation of the OGF WS-Agreement standard.
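A minimal sketch of TREC-weighted provider selection is given below. The scoring formula, the normalized scales and the weights are assumptions made for illustration; OPTIMIS's actual decision-making components are considerably richer.

```python
def trec_score(provider, weights):
    """Trust and eco-efficiency count in a provider's favour; risk and cost
    count against it. All attributes are assumed normalized to [0, 1]."""
    return (weights["trust"] * provider["trust"]
            + weights["eco"] * provider["eco"]
            - weights["risk"] * provider["risk"]
            - weights["cost"] * provider["cost"])

def select_provider(providers, weights):
    """The broker deploys to the provider with the best user-weighted score."""
    return max(providers, key=lambda p: trec_score(p, weights))

# Hypothetical providers and user preferences.
providers = [
    {"name": "cloudA", "trust": 0.9, "eco": 0.5, "risk": 0.2, "cost": 0.6},
    {"name": "cloudB", "trust": 0.6, "eco": 0.8, "risk": 0.1, "cost": 0.3},
]
weights = {"trust": 1.0, "eco": 1.0, "risk": 1.0, "cost": 1.0}
print(select_provider(providers, weights)["name"])  # → cloudB
```

Varying the weights is how the user, rather than the broker, steers the deployment decision, which is the point of incorporating TREC into the brokerage process.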
The OPTIMIS broker retains some monitoring over whether the OPTIMIS-brokered SLA is
respected by both parties, and this data feeds into the trust/security algorithms. However,
accounting is carried out between the IaaS provider and client using the WSAG4J
framework.
5.11 BONFIRE
BonFIRE is developing a multi-site cloud facility for experimentation and testing of cloud
technologies. The facility aims to provide services that allow RTD teams to study the
cross-cutting effects of clouds and networks. The project is funded by the European
Commission as part of the Future Internet Research and Experimentation (FIRE) Unit.
Much has been written about the cloud computing model enabling on-demand networked
access to a shared pool of configurable computing resource that can be rapidly
provisioned, elastically scaled and released with minimal management effort or service
provider interaction. Cloud consumers utilize and pay for what they need, require no
upfront capital investment and benefit from reduced costs due to the efficiency gains of
providers. This all sounds fantastic, but helping businesses make the transition to cloud computing models involves significant technical, operational and legal challenges. Cloud is a disruptive technology; it changes the way applications are developed and operated.
BonFIRE targets the RTD (Research and Technology Development) phase of the
technology lifecycle where developers experiment with and test technology, they
investigate new ideas and perform verification and validation of technology prior to
production deployments.
Experimentation and testing of distributed systems is a complex endeavour. Computer
systems are made up of many interacting components whose behaviours exhibit
significant degrees of uncertainty. Predicting the behaviour of even the most basic
computer programme running on a single processor machine is a hard task considering
the interplay between processor architecture, memory, cache, etc. Layer on top of this
huge bodies of software providing middleware and applications, and then deploy on
infrastructure across distributed locations under different domains of control and you begin
to understand the challenge. In the fields of both scientific investigation and software engineering, methodologies have been developed to understand and validate the behaviour of systems. Each requires the definition of a system under test (software/experiment), instrumenting that system, and controlling sources of systematic and random errors. These requirements are the drivers for the key architectural principles of observability and control adopted by BonFIRE.
The BonFIRE broker enables virtualization by providing users with a choice of six different providers. Experimenters can select which site to run virtual machines from, and the same experiment can run with VMs from multiple sites. BonFIRE offers Elasticity as a Service, which allows compute resources to be dynamically increased and decreased according to rules.
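A rule of the kind driving Elasticity as a Service can be sketched as a simple threshold policy; the metric, thresholds and VM bounds below are invented for illustration, not BonFIRE's actual rule syntax.

```python
def desired_vms(current, cpu_load, low=0.2, high=0.8, min_vms=1, max_vms=10):
    """Scale out above the high-water mark, scale in below the low one."""
    if cpu_load > high:
        return min(current + 1, max_vms)
    if cpu_load < low:
        return max(current - 1, min_vms)
    return current

print(desired_vms(3, 0.9), desired_vms(3, 0.1), desired_vms(1, 0.1))  # → 4 2 1
```

In an experiment such a rule would be evaluated periodically against monitoring data, so the gap between the low and high thresholds is what prevents the VM count from oscillating.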
Each cloud site operates its own execution solution and services compliant with BonFIRE's OCCI specification. Infrastructure is offered over an Open Cloud Computing Interface (OCCI). A user can program against the OCCI interfaces, and there are various language bindings to do this, currently the Ruby-based Restfully and a bespoke JSON domain-specific language. Furthermore, BonFIRE allows images with multiple configurations to exist.
BonFIRE offers on-demand provisioning of resources for part of its permanent capacity through APIs, while additional large-scale compute capacity is available on request. In particular, a web-based portal allows the definition of an experiment in terms of computing, storage and networking resources and the configuration between them. Each experiment is allocated resources either on a best-effort or an exclusive basis (QoS support).
In terms of security, BonFIRE provides a complete monitoring infrastructure that allows applications, virtual machines, or the physical infrastructure on which they run to be monitored, either manually or programmatically. This infrastructure is based on the Zabbix monitoring software [31]. Furthermore, no cost accountability is currently supported, although per-experiment cost is being investigated.

[31] http://doc.bonfire-project.eu/R3/monitoring/bonfire_monitoring.html
5.12 Lessons Learnt
The tables below provide a summary of the analysis for the network-related projects
(Table 21) and the cloud-related projects (Table 22). Note that this survey is not intended
to be exhaustive and that technologies may be updated or withdrawn by the projects.
Table 21: Comparison of Functionalities Focused by Selected Network Research Projects

             Transmission  Traffic control  QoS  Mobility  Security  Naming/Addressing
ETICS             ✓                          ✓               ✓
GEYSERS           ✓                          ✓               ✓
UNIVERSELF        ✓              ✓           ✓      ✓
SAIL              ✓                          ✓      ✓        ✓              ✓
PURSUIT           ✓              ✓           ✓      ✓        ✓              ✓
MEDIEVAL          ✓                          ✓      ✓                       ✓
ENVISION          ✓                          ✓               ✓              ✓
ULOOP             ✓                          ✓      ✓        ✓
C2POWER           ✓              ✓           ✓      ✓

All projects propose technologies related to the transmission and QoS functionalities.
Technologies dealing with network security and mobility are suggested by more than half
of those surveyed (6/9). This gives evidence that the research community acknowledges
the need for more security in the Future Internet. However, it may be necessary to re-
examine the assumptions on the stakeholders’ needs and possible reactions. On the other
hand, Naming/Addressing components are suggested by fewer projects (4/9), although they include all projects dealing with advanced methods for content delivery, both those following the information-centric paradigm (SAIL, PURSUIT) and those based on overlay applications (MEDIEVAL, ENVISION). We also notice that new traffic control mechanisms (e.g., extensions to the TCP algorithm) are investigated by fewer projects (3/9).
Looking at Table 22 we see that OPTIMIS shares many similarities with BonFIRE. Both
projects develop technologies for all functionalities of the cloud taxonomy. The technology
underlying the two projects is also similar: multiple virtualised infrastructures delivered as a service, brokered by a central broker on behalf of users. However, the focus of the two projects is quite distinct: OPTIMIS is researching methods of brokering between clouds and working on top of multiple clouds, an emerging model in industry. BonFIRE is not carrying out research into cloud per se, but rather offering a facility where researchers and developers of cloud or software services can test and monitor their components.
Table 22: Comparison of Functionalities Focused by Selected Cloud Research Projects

           Virtualization  Execution  QoS  Security
OPTIMIS          ✓             ✓       ✓       ✓
BonFIRE          ✓             ✓       ✓       ✓
6 Standardization Activities in ITU
From the beginning of 2011 onwards, members of SESERV have been deeply engaged with the ITU-T, namely with Study Group 13 (SG13), Working Party 5 (WP5/13), Question 21 (Q21/13). SG13 [32] and Q21/13 [33] are concerned with standardization activities regarding Future Networks (FN). SG13 is, in addition, one of the Study Groups involved in the Next Generation Networks Global Standards Initiative (NGN-GSI) [34].
SESERV’s engagement in SG13 is driven content-wise primarily by the design for tussle
goal (as stated initially by Clark) and the accordingly developed tussle analysis method –
in other words, the engagement is content-wise driven by SESERV’s WP2. In
consequence, the engagement is backed resource-wise by UZH and AUEB. UZH is the
formal link from SESERV into the ITU-T as it has become Academic Member of the ITU-T
by 2012. In the first months, UZH has been in a comparable situation de facto, but
contributions and all interactions happened formally through the Swiss Administration, the
BAKOM/OFCOM. AUEB has been supporting the standardization activities at several ends
as well, in particular in the formal role of an invited external expert at the Q21/13
Rapporteur meeting in early 2012 and in supporting the drafting of a socio-economics
oriented Question Description for the next Study Period of the Study Group.
These explanations on the formal involvement of two SESERV partners in the ITU-T's Q21/13 relate to the ITU-T's restrictive publication rules with regard to meeting input/output and non-final Recommendation work, and to the impact of these publication rules on the section at hand. This section contains a publishable digest of ITU-T Q21/13-oriented coordination actions and the respective impact made. This section, hence, is available in the public version of the deliverable at hand. However, it includes several cross-references to ITU-T documents that saw contributions from SESERV members; this content may, at this point, only be regarded as project-internal, additionally targeting the reviewer team. In consequence, the affected documents in the Appendix are not available in the public version of D2.2.
The initial contact between SESERV and the ITU-T was established in February 2011 during the 7th Future Networks FP7 Concertation meeting [35] in Brussels, Belgium.
Alojz Hudobivnik (Assistant Rapporteur in Q21/13) gave a presentation on the (at the time draft, now final) ITU-T Recommendation Y.3001. Y.3001 reflects one of the primary documents out of Q21/13 in the ongoing Study Period. It determines 4 objectives and 12 design goals for FNs. The objective of “Social and economic awareness” and the design goal of “Economic incentives” drew the particular interest of the UZH representatives participating in the Concertation meeting. The discussion between Alojz Hudobivnik and the UZH representatives, which started during the Concertation meeting, resulted in Alojz Hudobivnik inviting UZH and SESERV to contribute to Q21/13 and to Y.3001 in particular.
This first contact has led to an intense exchange between SESERV and the ITU-T at
various ends. Figure 12 presents the timeline of those various interactions and the many
contributions of SESERV members to the ITU-T.
32 http://www.itu.int/ITU-T/studygroups/com13/index.asp
33 http://www.itu.int/ITU-T/studygroups/com13/sg13-q21.html
34 http://www.itu.int/en/ITU-T/gsi/ngn/Pages/default.aspx
35 http://ec.europa.eu/information_society/events/future_networks/concertation/programme/7th/index_en.htm
Seventh Framework CSA No. 258138 D2.2 Final Report on Economic FI Coordination
Public
Version 2.0 Page 65 of 161
© Copyright 2012, the Members of the SESERV Consortium
Figure 12: Timeline of SESERV Interactions with the ITU-T
The interactions between SESERV and the ITU-T over the past one and a half years may
be structured into phases according to the content item primarily worked on. The
relevant content items were the following:
• ITU-T Recommendation Y.3001: SESERV members were involved in the drafting
of Y.3001 when the draft Recommendation was close to finalization. Accordingly,
only smaller contributions were possible at the time. Nonetheless, a contribution
stemming from SESERV's interest in the design-for-tussle principle and the tussle
analysis method resulted in a strengthened representation of the tussle concept in
Y.3001 right before the draft Recommendation was finalized and approved (May 2011).
Section 6.1 describes the details of this phase (February to May 2011).
• ITU-T Draft Recommendation Y.FNsocioeconomic: Already in April 2011, the
SESERV-driven contribution had proposed to extend Y.3001 by methods that help
achieve the objectives and design goals the Recommendation determines; Y.3001
lists example technologies for achieving objectives and design goals, but no methods.
This need was acknowledged by Q21/13 members, but it could no longer be reflected in
Y.3001 due to the late stage of the process towards Recommendation finalization.
This became the major motivation to start a new working item in Q21/13, which found
expression in a new draft Recommendation, Y.FNsocioeconomic. The idea for a new
Recommendation emerged in Summer 2011, was discussed in several informal meetings,
was proposed to Q21/13 in October 2011 by a SESERV-driven contribution, and was
accepted in October 2011. UZH was nominated editor of the draft Recommendation and
provided a scope definition and a rough document structure right away. The main
approach of Y.FNsocioeconomic is tussle analysis as the meta-method for anticipating
a FN technology's potential for adoption during the technology design and
standardization phase. Section 6.2 details the way from the idea to the opening of a
new working item, while Section 6.3 explains how Y.FNsocioeconomic has been developed
content-wise thus far.
• Question Description for the next Study Period: The ongoing Study Period for
Q21/13 is going to be completed by the end of 2012. Out of a single Question,
Q21/13 has proposed three Questions for the next Study Period (2013-2016). What
a Question embraces in terms of scope and objectives is described in a so-called
Question Description. Since becoming engaged in Q21/13, SESERV members have been
involved multiple times in drafting one of the three Question Descriptions out of
Q21/13. This Question will focus on environmental and socio-economic awareness of
FNs. It will, thus, be the Question in which Y.FNsocioeconomic will be further
developed and finalized in 2013. Section 6.4.2 summarizes the state of this
interaction.
• Liaison Statements from SG13 to SG3: Study Group 3 [36] is concerned with tariff
and accounting principles, including related telecommunication economic and policy
issues. As Y.FNsocioeconomic shows economic dependencies despite being
technology-driven, SG13 (by way of Q21/13) has issued multiple liaison statements
towards SG3 in order to inform the economics-driven SG3 about the progress in
this working item and to give SG3 the opportunity to comment. Thus far,
Y.FNsocioeconomic has been noticed and acknowledged by SG3, while no
stronger interaction between the two Study Groups has emerged. Section 6.4.3
summarizes the state of liaising between SG13 and SG3.
36 http://www.itu.int/ITU-T/studygroups/com03/index.asp
• Outreach Activities: It is important to promote and explain a working item,
especially if it opens a new area of work, as is the case for Y.FNsocioeconomic.
The ITU-T is the technology standardization branch of the ITU, and socio-economics
is an emerging field not only in the European Future Internet research environment,
but also in the ITU-T. UZH has presented Y.FNsocioeconomic and tussle analysis on
three different occasions, as Figure 12 outlines. It was presented to the TTC
(Telecommunication Technology Committee), a Japanese regional standardization body,
in January 2012 on the invitation of the Rapporteur of Q21/13, Takashi Egawa. In
April 2012, Y.FNsocioeconomic and the tussle analysis method were presented and
discussed, for instance with regard to their application to the specific
circumstances and requirements of developing countries, in an ITU-T workshop
organized by Q15/13 (applying IMS and IMT in developing-country mobile telecom
networks) in Kampala, Uganda. In June 2012, outreach was made to the ISO, which
also has a working party on FNs: Y.FNsocioeconomic was presented at the joint
ITU-T / ISO workshop on "Future Networks Standardization".
6.1 Contribution to Y.3001
On February 10, 2011, SESERV's efforts in the ITU were initiated at the 7th Future
Networks FP7 Concertation Meeting. At that meeting, Alojz Hudobivnik (AH) (Iskratel)
presented the ITU's interests and activities in Future Networks. He stated that SG13
covers FNs including mobile and NGN (Next-Generation Networks) and that one of the
current design goals of FNs is "Economic incentives": FNs should provide sustainable
competition to the various participants in an ICT ecosystem by providing proper
economic incentives. Because of this shared interest (in economics) with SESERV, the
latter established contact with AH (and the ITU) after his talk. AH invited SESERV to
contribute to the ITU draft Recommendation Y.3001. Therefore, on April 26, 2011,
SESERV, by proxy of the University of Zurich and the Swiss Administration's ITU
delegation, submitted contribution 1137 entitled "Socio-economic Changes to Draft
Recommendation Y.3001" [37] to be discussed at the NGN-GSI meeting in Geneva,
Switzerland, in May 2011.
The NGN-GSI meeting mentioned took place on May 9-11, 2011, and collocated the SG13
and Question 21 (Q21) meetings. SESERV was represented during the final editorial
meeting by the Swiss Delegation, to which UZH had been appointed. Since ITU rules
require that any contributor to an SG and its work be an ITU member, the head of the
Swiss Delegation had accepted the membership of the SESERV coordinator UZH in the
national delegation for the purpose of contribution 1137.
Since the draft Recommendation for Y.3001 was already at a late stage, only editorial
changes were allowed, and consequently only some of the changes proposed by UZH were
accepted. In particular, due to concerns that tussle analysis does not draw a clear
line between the technical and the non-technical (i.e., business modelling, economic)
dimensions, the notion "tussle analysis" was not incorporated into the changes applied
to the new draft Recommendation for Y.3001. UZH's proposal for a new Appendix
"Methods for achieving the design goals" was also rejected. Nonetheless, it was agreed
that it is reasonable to open a document on such methods. Due to argumentative efforts
undertaken by SESERV members during the meeting, and because the term tussle is well
defined in the Oxford and Cambridge dictionaries (the insertion of a definition at such
a late stage of the document would not have been possible, so it was essential that the
term tussle was already "officially" defined), the term tussle was integrated into the
new proposal for Y.3001.

37 http://www.itu.int/md/T09-NGN.GSI-C-1137/en
On May 20, 2011, the new proposal for Y.3001, including the strengthened tussle
concept, was accepted in the final formal vote on changes to Y.3001 in the plenary
session of the NGN-GSI meeting. The input of the Swiss Delegation entitled
"Socio-economic Changes to Draft Recommendation Y.3001" was thus partially accepted
(as just discussed), so that the final formal version of Y.3001 reflects SESERV's
contribution with respect to the term tussle.
6.2 New Document Proposal (Y.FNsocioeconomic)
On June 23, 2011, UZH (on behalf of SESERV) visited the BAKOM/OFCOM representative
Leo Lehmann (LL) to discuss options for SESERV/UZH/Swiss Administration coordination
actions in ITU SG13, as well as possibilities for opening and editing a new
Recommendation focusing on methods to achieve the socio-economically driven design
goals and objectives for FNs (as pointed out in Y.3001). It was agreed that UZH would
send a contribution to the upcoming SG13 meeting discussing the need, the scope, the
overall structure, and the overall planning for a new standards document to be opened
in relation to Recommendation Y.3001. This contribution, sent on October 5, 2011,
argued in particular that Y.3001 lists 12 design goals but does not address
methodological aspects of how to achieve them; it was therefore proposed to focus the
new document on the methodological approach to the socio-economic design goals
("Economic incentives"; Section 8.7 in Y.3001) and objectives ("Social and economic
awareness"; Section 7.4 in Y.3001). Note that this argumentation continues the
discussion started when the Appendix proposed by SESERV for Y.3001 was rejected at
the ITU-T NGN-GSI meeting in May.
The document proposal was presented by UZH at the ITU-T SG13 meeting on October 11,
2011, and was accepted content-wise by a majority of the meeting participants.
Additionally, the Rapporteur of Q21, Takashi Egawa (TE), asked Martin Waldburger
(MW, UZH) in an unofficial conversation during that meeting for a Question Description
addressing socio-economics to be prepared by SESERV. Costas Kalogiros (CK) sent the
requested Question Description to TE the next day. As the SESERV document proposal had
been accepted content-wise, MW submitted a scope definition and table of contents for
the respective new document to the Question 21 mailing list on October 15, 2011. Two
days later the document proposal was discussed, adapted in its details by the
Rapporteur of Question 21, and accepted by SG13. This implied the opening of a new
ITU-T document entitled "Methods to Achieve Socio-Economic Design Goals and Objectives
for Future Networks" with the working title Y.FNsocioeconomic. However, due to concerns
that SG13 has limited expertise in socio-economics, a liaison statement from SG13 to
SG3 with the title "New work item to achieve socio-economic design goals for Future
Networks" was approved at the closing plenary of the ITU-T SG13 October meeting.
6.3 Preparation of the New Document (Y.FNsocioeconomic)
As discussed with LL prior to the presentation of the document proposal at the ITU-T
SG13 Meeting on October 11, 2011, UZH had invited the Swiss administration’s
representative in SG3, Raphael Scherrer, and LL to evaluate if the proposed document
better suits his Study Group (SG3 is concerned with economic issues, SG13 with
technical ones). On November 8, 2011, this meeting took place at UZH.
On January 17, 2012, UZH participated in the ITU-T SG3 meeting to observe and aid the
discussion of SG13's liaison statement "New work item to achieve socio-economic design
goals for Future Networks", which had been approved at the closing plenary of the
ITU-T SG13 October meeting. Based on a consultation with Raphael Scherrer, the Study
Group 3 delegate of the Swiss OFCOM, the decision was taken to observe Study Group 3's
reaction to the liaison statement rather than to interact directly.
As requested by the Rapporteur of Q21, MW presented the contents of
Y.FNsocioeconomic, in particular the tussle analysis, to the TTC (the Japanese
regional standardization body) on January 23, 2012. On February 8, 2012, the first
draft of "NGN-GSI – CONTRIBUTION 1339", which had been submitted by SESERV on
January 24, 2012, and which delivers contents for Y.FNsocioeconomic, was discussed at
the ITU-T SG13 Q21 meeting collocated with the NGN-GSI meeting. The draft was
accepted, but several formal changes were demanded. Furthermore, since
Y.FNsocioeconomic is deemed to be of potentially large impact, it was agreed that a
further liaison statement be prepared for SG3 (SG3's reply to the former liaison
statement was briefly discussed at a later point of this meeting). Plans for smaller
updates to the Question Description text submitted by CK on October 12, 2011 were
also discussed.
By February 13, 2012, the changes demanded on February 8 had been implemented and were
presented by MW to Q21 in a telephone conference. On February 16, 2012, a further such
editing loop (addressing editorial changes agreed on in the preceding call) was closed
by another telephone conference, and the edited contribution "NGN-GSI – CONTRIBUTION
1339" was accepted by Q21.
On February 17, 2012, TE proposed Y.FNsocioeconomic as a formal meeting output of
SG13. Furthermore, an updated version of CK's Question Description was approved as one
of five proposed Questions, and the second liaison statement was approved as well.
On May 25, 2012, Contribution 1415 was proposed to fill empty sections of, and to add
an appendix to, Y.FNsocioeconomic. The contribution was accepted on June 5, 2012, and
UZH was asked for an update of the socio-economics-oriented Question Description
following Q21/13 in the next Study Period, which was provided on the same day. The
Contribution as well as the Question Description were accepted in the SG13 plenary
meeting on June 15, 2012.
On June 11, 2012, UZH presented Y.FNsocioeconomic, and in particular the tussle
analysis methodology, at the joint ITU-T SG13 and ISO/IEC JTC 1/SC 6 workshop on
"Future Networks Standardization".
6.4 Summary and Assessment
The coordination activities with ITU-T's SG13, Q21/13 have been intense and diverse
since they were initiated in February 2011. As apparent from the activities outlined
above, SESERV has had an impact on ITU-T activities in three different areas:
contributions to Recommendations, a Question Description within Q21, and the
coordination of SG3 and SG13. The results of SESERV's activities within these three
fields are all highly positive. The work on Y.FNsocioeconomic is of key interest to
the FISE community, as this Recommendation embraces tussle analysis as the main
approach to assess a FN
technology's adoption potential during the design and standardization phase. While
Y.FNsocioeconomic is now content-complete and planned to be promoted, e.g., by an
article in the ITU News magazine, it still has to be finalized in 2013. The primary
focus towards 2013 will hence be on the further development and finalization of
Y.FNsocioeconomic. The involvement in drafting a Question Description dealing with
environmental and socio-economically driven aspects of FNs proves to be of strategic
interest to the FISE community as SG13 and Q21/13 move from the current Study Period
into the one starting in 2013. This new Question, with its dedicated scope in
socio-economics, will provide the suited environment for the successful finalization
of Y.FNsocioeconomic in 2013. Therefore, the efforts undertaken in delivering the
Question Description paid off twice: not only does this Question Description address
socio-economic issues and thereby create awareness for these topics, but it will also
result in a dedicated Question, which, in turn, can appropriately finalize
Y.FNsocioeconomic. In particular, Q21/13 has proposed three follow-on Questions for
the next Study Period, one of which is centred on environmental and socio-economic
awareness of FNs. In addition to these achievements, the liaison statements, which may
be considered a byproduct of SESERV's activities within the ITU, brought together
technically and socio-economically oriented Study Groups, demonstrating SESERV's
qualities in bringing together experts of these domains. Beyond the awareness created
through these activities, UZH's presentations of Y.FNsocioeconomic and tussle analysis
at three events (TTC, the ITU-T workshop in Kampala, and the joint ITU-T/ISO workshop)
had the same impact on experts around the globe.
It can be concluded that SESERV's interactions with the ITU-T were as intense as they
were productive. Not only was awareness of socio-economic issues in the Future
Internet created within the organization, but formal results were also achieved that
will have a sustainable impact. In particular, the formal agreements motivated within
the ITU-T will ensure that socio-economic issues are adequately represented, with the
key driving factor and overall goal being the finalization of Y.FNsocioeconomic.
6.4.1 Y.FNsocioeconomic
The contribution prepared for the ITU-T SG13 Q21 meeting in January 2012 was accepted,
and only a few smaller changes had to be implemented. The current version of
Y.FNsocioeconomic is now a formal meeting output. This means that:
• The development of this future Recommendation relied heavily on efforts and
contributions out of SESERV. Substantial support for this work is seen in Study
Group 13, Question 21, especially from the Rapporteur and two vice-chairs.
• Tussle analysis (as the main method presented in the document) now has a very
real chance to eventually become an ITU-T Recommendation.
6.4.2 Question Description
The Study Period for Question 21 and Study Group 13 is going to end next year.
Question 21 is going to propose five new Questions, one of which is termed
'Social and economic awareness properties of FNs' (Future Networks). We would like to
make the following observations:
• The respective Question covers input from SESERV.
• SESERV as well as FISE are both explicitly mentioned as bodies of interest to the
ITU to liaise with.
• Such a Question will be accepted only after SESERV's formal end, but the fact
alone that this Question Description is going to be proposed shows support within
Question 21 and documents SESERV's impact.
6.4.3 Liaison Statement
With the progress in Y.FNsocioeconomic, another follow-up liaison statement has been
sent to Study Group 3, which deals with economic questions. The liaison statement
informs SG3 about this work and asks for comments. It remains to be seen what a
response from Study Group 3 will look like, but this is at least a way, within common
ITU practice, to create awareness.
6.4.4 Next Steps
While Y.FNsocioeconomic is now content-wise complete, it still has to be finalized in
2013. The primary focus towards 2013 will hence be on the further development and
finalization of Y.FNsocioeconomic. The involvement in drafting a Question Description
dealing with environmental and socio-economically driven aspects of FNs proves to be
of strategic interest to the FISE community as SG13 and Q21/13 move from the current
Study Period into the one starting in 2013. This new Question, with its dedicated
scope in socio-economics, will provide the suited environment for the successful
finalization of Y.FNsocioeconomic in 2013.
7 Design Principles for the Future Internet Architecture
The FIArch (Future Internet Architecture) Group aims at stimulating and steering
coordinated research towards the Future Internet (FI). This section describes
SESERV's participation in FIArch activities over the last year, i.e., 2011-2012, and
especially its contribution to the identification and specification of the design
principles that will govern the FI architecture and protocols.
7.1 Motivation
The current Internet was originally designed to serve research purposes and for a limited
use. The research community believed then that connectivity was more valuable than any
application, and intelligence ought to be at the edges of the network rather than within the
network itself [29]. Towards this direction, the current architecture of the Internet is based
on a number of design principles that include: simplicity, modularity, scalability, the self-
describing datagram packet, the end-to-end argument, heterogeneity in technology, and
global addressing [38]. These design principles play a central role in the
architecture of the Internet, driving most engineering decisions at both the
conception and the operational level of communication systems.
However, the Internet has long since evolved from the original research-oriented
network of networks into a highly innovative and competitive marketplace for
applications, services, and content. Due to widespread access to the Internet (e.g.,
via mobile devices), the ever-growing number of broadband users worldwide, the lower
entry barriers for non-technical users to become content and service providers, and
trends like the Internet-of-Things (IoT), the success of cloud services, or the
dramatically increased popularity of overlay applications, new requirements have
emerged. These requirements can no longer be adequately addressed by the current
Internet design principles, which therefore need to be revisited and updated in order
to define the context and rules that will govern the FI architecture.
7.2 Contribution
Based on the FI objectives and limitations identified by the FIArch Group [30], and
taking into account contributions from researchers, scientists, engineers, etc., a new
document towards the specification of the design principles that will govern the FI
has been produced by the FIArch Group. SESERV's contribution to the Design Principles
document consists of identifying new objectives that take into account significant
socio-economic aspects of the FI, proposing one 'seed for a new design principle'
entitled "Exchange of Information between End-Points", and co-authoring, along with
Dimitri Papadimitriou, one more entitled "Sustain the Investment".
The phrase 'seed for a new design principle' refers to a concept or notion at the
inception of a well-formulated design principle. The term seed acknowledges that
i) formulating principles is a complex exercise, ii) research is still ongoing in
proving their value, utility, and impact (some of the analysis and exploitation of
research results may not be mature enough), and iii) the proposed seeds may not
flourish (out of many proposals, few will materialize). Two seeds for new design
principles are presented subsequently.
38 These design principles were adopted for achieving the design goals mentioned in Section 4.4 (Recommendation 5).
7.2.1 Exchange of Information Between End-points
As stated in Section 7.1, the Internet has evolved over the years from a
research-oriented network of networks into a playground for many different
stakeholders, such as Internet Service Providers (ISPs), Content Distribution Network
(CDN) providers, Content Owners (COs), end-users, etc. The set of stakeholders can be
divided either vertically into players, e.g., two ISPs, or horizontally into layers,
e.g., the Transport layer; all three terms will be used equivalently below.
As described in Section 4.1, these stakeholders try to optimize their own utilities
(or, more generally, benefits), e.g., ISPs to reduce inter-domain costs, CDNs to
improve content routing, users to benefit from different choices (e.g., of application
providers or ISPs, or of application parameters), each on the basis of the incomplete
information available to them. This so-called "Information Asymmetry" between
different stakeholders of the Internet often leads the ecosystem to suboptimal
performance; e.g., see [28]. Addressing the information asymmetry problem may allow
stakeholders to make alternative decisions that would lead them collectively to a
more beneficial state.
Furthermore, Clark et al. [27] proposed the “Design for Choice” principle that suggests that
Internet technologies should be designed so that they allow variation in outcome, rather
than imposing a particular outcome. The rationale behind this is that the Internet is a rather
unpredictable system and it is very difficult to assess whether a particular outcome will
remain desirable in the future.
In order to both enable the Design for Choice principle and address the Information
Asymmetry problem, we introduce the "Allow the Exchange of Information between
End-Points" principle, which suggests that different stakeholders should be able to
provide to others information on possible choices and their preferences. In this way,
stakeholders provided with this information are able to express their interests, to
coordinate their objectives, and to align their incentives, if these are indeed
compatible, as well as to appreciate what the effect of their choices on others will
be. Incentive compatibility between players applies when one player's selfish action
implies the improvement not only of his own objective but also of those of the other
players. This information diffusion can possibly lead to a so-called "all-win"
situation, whereby all existing players are better off, at least temporarily. In the
long term, if new stakeholders enter or exit the ecosystem, further actions are
anticipated by the remaining ones.
In practice, the application of the proposed principle implies the design and
deployment of more "open" systems and interfaces for the interaction and communication
between different stakeholders, anticipating also users' reactions in cases of
unsatisfactory quality of experience. Therefore, all stakeholders, including users,
will have the ability to react, by means of making new choices, in cases of
unsatisfactory benefit (for users: quality of experience or value for money).
The exchange of information between stakeholders implies a flow of information from one
stakeholder to another, and the “processing” by each stakeholder; therefore the
constituent capabilities of this principle include:
• The exposure of information to a stakeholder.
• The abstraction/aggregation of information to be exchanged.
• The collection of information by a stakeholder.
• The assessment of information by a stakeholder.
• The decision making.
The exposure of information addresses the Information Asymmetry problem, but it should
be restricted to the necessary level, so that no "sensitive" information is exposed
that could damage its original owner or producer. This is taken care of by the second
capability, which is very important and which also makes the information exchange
procedure efficient. The idea behind this capability is that critical details must be
hidden to avoid exposure to competitors, while required information should be
exchanged in a way that prevents its repurposing. The implementation of suitable
interfaces is required for this to be attained.
The remaining three capabilities are incentive compatible, since the stakeholder that
collects the information, assesses it, or makes a decision based on it will each time
have more (if not full) information available to optimize its own utility.
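As an illustration only, the constituent capabilities listed above can be sketched in a few lines of Python. All names (Stakeholder, Offer, the cost figures) are hypothetical and are not part of the FIArch document; the sketch merely shows how abstracted information could be exposed, collected, assessed, and acted upon without revealing sensitive details.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    metric: str      # e.g., an aggregated cost indicator
    value: float

class Stakeholder:
    def __init__(self, name, private_state):
        self.name = name
        self._private = private_state  # sensitive details, never exposed directly

    # Capabilities 1+2: exposure of *abstracted* information only.
    def expose(self):
        # Aggregate sensitive per-link costs into a single average,
        # hiding the critical details from competitors.
        costs = self._private["link_costs"]
        return Offer(self.name, "avg cost", sum(costs) / len(costs))

    # Capability 3: collection of information from other stakeholders.
    @staticmethod
    def collect(others):
        return [o.expose() for o in others]

    # Capabilities 4+5: assessment of the collected offers and decision making.
    @staticmethod
    def decide(offers):
        return min(offers, key=lambda o: o.value)  # pick the cheapest offer

isp_a = Stakeholder("ISP-A", {"link_costs": [4.0, 6.0]})
isp_b = Stakeholder("ISP-B", {"link_costs": [3.0, 3.5]})
best = Stakeholder.decide(Stakeholder.collect([isp_a, isp_b]))
print(best.provider)  # ISP-B exposes the lower average cost
```

Note how the abstraction step (averaging the per-link costs) is what allows the exchange at all: the deciding party receives enough information to optimize its choice, while the exposing party keeps its critical details hidden.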
Two open research questions remain to be further explored:
• How to ensure that the application of the principle does not partition the shared
common infrastructure into islands where certain information gets exchanged in a way
that is ultimately detrimental to the end-user (lock-in)?
• How to ensure fairness of gain among participants/stakeholders (how to prevent the
"rich get richer" effect), meaning that the exchange of information does not
progressively fall into the hands of a minority of highly connected hubs?
These issues need to be addressed by the use of more "open" systems and interfaces
that will allow creative solutions by small players to flourish (expressed as new,
revolutionary technologies), thus leading to a fairer distribution of gains among
them, despite their inherent differences with respect to their ability to invest.
7.2.2 Sustain the Investment
"Coopetition" is the result of competing, antagonistic actions due to conflicting
interests between parties that implicitly cooperate in technological terms, usually
resulting in negative global return; this technical term has an associated economic
term, "tussle", as introduced by D. Clark in [27]. Investigating candidate tussles in
possible Internet evolution scenarios is a way to understand what the socio-economic
incentives of the different stakeholders are [13].
One possible tool for this is to employ tussle analysis on possible Internet evolution
scenarios. Indeed, addressing the inevitable tussles "correctly" (by giving actors
adequate control to influence and negotiate outcomes where that makes sense from a
technology point of view, e.g., not per packet) should reduce the global negative
return. On the other hand, this does not mean that the Internet should be designed to
sustain conflicting interests or steer them towards unfair outcomes (i.e., not just be
Designed for Tussle), but instead be designed so as to lead to a global positive
return for all of its users (as individuals, but also as members of various
communities), the so-called "all-win" situation, and for society at large.
Instead, it is important that the Internet be designed to sustain brain investment,
innovation investment, and resource investment towards a global positive return. For
this purpose, it is fundamental to first recognize the capability of the Internet to
accommodate new applications and services over a commonly shared infrastructure (this
is attributed to the fact that the architecture was not designed with the idea of
privileging one class of actor over another). It is thus essential to keep the entry
barrier as low as possible and to structure the design of the Internet so as to allow
the involvement of various communities and people, e.g., by steering open application
development, but without impeding the genericity, evolvability, openness, and
accessibility design objectives. Over time, the Internet shall thus cultivate the
opportunity for new players to benefit from the infrastructure foundation without
sacrificing its global architectural objectives and design principles. Moreover, the
Internet architecture should be able to accommodate and sustain its actors' and
stakeholders' needs in terms of fundamental capabilities, e.g., forwarding and
processing capacity.
However, it is neither technically possible nor operationally feasible to homogenize user
satisfaction, utility functions, and individual interests across the entire Internet.
Nevertheless, it should be made possible for Internet communities (e.g., users, developers,
enterprises, operational communities) to reward (with positive feedback) architectural
modules/components (together with the interactions among them) that deliver positive
returns. In turn, this leads to positively weighted modules (i.e., strong modules, where the
"strength" of a module is a measure of its reward by the community) and progressively
deprecates modules/components with negative return (i.e., weak modules).
7.3 Summary and Next Steps
The outcome of this activity is included in a document entitled "Future Internet Design
Principles", which has been produced and published by FIArch, and is currently available
at http://ec.europa.eu/information_society/activities/foi/docs/fiarchdesignprinciples-v1.pdf.
Design principles have played, and will continue to play, a central role in the architecture
of the Internet, driving most of its engineering decisions at both the conception and the
operational level. This document therefore investigates their potential evolution (adaptation
and/or augmentation, which arguably already cover a significant part of that evolution).
Acknowledging that new principles are emerging, the document also explores a non-
exhaustive set of new seeds (including the two discussed in Sections 7.2.1 and 7.2.2) that
translate current architecture research work. Altogether, this investigation by the FIArch
group has led to the identification of the design principles expected to govern the
architecture of the Future Internet, if corroborated by further proof and experimental
evidence. Consequently, this work may serve as a starting point and comparison basis for
many research and development projects that target the Future Internet Architecture.
Currently, the FIArch Group focuses on research efforts towards the transformative
evolution of the Internet architecture; an evolution that cannot be addressed by capacity
and incremental infrastructure investment, nor by incremental and reactive improvement of
Internet protocols, but rather by iterative multi-disciplinary research cycles. For this
purpose, FIArch has initiated a task to systematically analyze the experimental results
these efforts have produced so far, determine which ones are progressively reaching a
certain level of maturity, and identify the missing pieces that remain to be realized in order
to propose a foundational baseline for the architectural evolution of the Internet.
Next steps of this FIArch new task include the preparation and publication of two new
documents:
• One document on the systematic analysis, evaluation and comparison of
technologies and architectures studied by research projects that support the
identified Design Principles.
• One document describing the evaluation, measurements (criteria/metrics) and
analysis grid/methodology itself.
To begin gathering input for the two documents from various research projects, a call for
position papers has been published, requesting researchers in the aforementioned areas
to contribute towards this gap analysis and to provide their contributions by September
3rd, 2012 to the FIArch mailing list (i.e., fiarch@future-internet.eu). The FIArch group will
evaluate all contributions, and the authors of selected ones will be invited to present their
ideas in a workshop to be hosted at the EU INFSO premises in Brussels on September
27th, 2012.
Drafts of the planned documents are expected to be released within Q4 2012, while the
finalization and publication of the two documents are expected in 2013.
Although SESERV will not be active during Q4 of 2012 and 2013, when these documents
are to be prepared, members of SESERV plan to keep participating in and contributing to
these activities.
8 Techno-Socio-Economic Challenges for High-Speed
Accounting
This section describes further SESERV coordination activities related to the paper on the
socio-economics of high-speed Internet accounting (HSA), jointly developed with a team
of European experts in the area. This section includes an excerpt of the paper, including
lessons learnt and overall conclusions drawn with respect to technical and economic
feasibility of high-speed Internet accounting from an ISP’s perspective.
8.1 Abstract
Traffic traversing high-speed links is of great interest to network operators, policy makers,
and users; the latter comprising individuals as well as service and content providers. With
the Internet becoming ubiquitous and a critical infrastructure in the service economy, the
demand for high-speed accounting increases for manifold reasons while feasibility
becomes more and more problematic due to bandwidth growing faster (Gilder’s law [47])
than computing power (Moore’s law [48]). This paper adopts the perspective of a network
operator and determines a position towards technical and economic feasibility of
accounting in high-speed networks. This position is determined with the help of an
assessment framework that has been developed to facilitate a structured discussion along
four socio-economic dimensions of relevance to a network operator: What a network
operator wants to account for (managerial dimension), what a network operator is able to
account for (technical and economic dimensions), and finally what a network operator is
either allowed to or has to account for (legal dimension).
Accordingly, and based on the relevant terminology outlined as well as an introduction
given into the assessment framework, technical feasibility is looked at first. This covers an
overview of current and emerging techniques for accounting in high-speed networks. For
each technique, boundaries are identified. The second part of the discussion then looks at
economic feasibility by documenting the state of debate among network operators and
other key stakeholders with respect to conflicting interests in high-speed accounting. This
includes a collection of arguments by stakeholder as well as an analysis of European
regulations of relevance. Finally, important lessons learnt with respect to both technical
and economic feasibility are presented and key conclusions are drawn.
8.2 Introduction and Motivation
An in-depth understanding of the traffic generated in and transported by the many
autonomous systems that build the Internet is critical to multiple stakeholders. Network
operators are interested for operational and strategic managerial reasons in a number of
accounting applications. Examples include, but are not limited to, packet capturing- or
flow-based reports and threshold alerts, for instance for Quality-of-Service monitoring,
intrusion detection, denial of service detection, or the accounting of resource and service
usage for cost optimization and charging purposes [44], [45]. Users, both in terms of
individuals as well as service and content providers, are affected by a working and efficient
accounting as they are interested in a reliable, secure, and available network for whose
use they want to be charged correctly, whereas personal data collection and profiling
activities shall be kept minimal. Policy makers are interested in Internet accounting for the
development of policy decisions, e.g., with respect to privacy concerns, as well as the
enactment of policies, e.g., with respect to data retention and legal interception.
While there is a clear demand for Internet accounting, the observed shift from 10 Gbit/s to
40 Gbit/s backbone link speed (and 100 Gbit/s in the future, cf. IEEE 802.3ba [46]) raises
considerable challenges for accounting on high-speed links. Technical feasibility of high-
speed accounting becomes problematic for network operators. Already at a link speed of
10 Gbit/s, the time left for handling a single minimum-size packet is well below 100 ns,
shrinking to a few nanoseconds at 100 Gbit/s. Moreover, the data volume collected has
the potential to grow extremely large. Technical feasibility is becoming more and more
challenging, since bandwidth is found to grow three times faster (cf. Gilder's law) than
computing power (cf. Moore's law). Furthermore, the increase of encrypted traffic renders
high-speed accounting very difficult.
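As a rough back-of-the-envelope illustration of this per-packet time budget (an editorial sketch, not part of the paper, assuming worst-case back-to-back minimum-size Ethernet frames), the following snippet computes the time available to process one packet at common backbone link speeds:

```python
# Illustrative calculation (editorial assumption): per-packet time budget on
# high-speed links for back-to-back minimum-size Ethernet frames.
# On the wire, each 64-byte frame also occupies 7 B preamble + 1 B start-of-frame
# delimiter + 12 B inter-frame gap, i.e., 84 B (672 bits) in total.

BITS_PER_MIN_FRAME = 84 * 8  # 672 bits per minimum-size frame on the wire

def time_budget_ns(link_speed_gbps: float) -> float:
    """Time available to process one minimum-size frame, in nanoseconds."""
    # bits divided by (Gbit/s) conveniently yields nanoseconds directly
    return BITS_PER_MIN_FRAME / link_speed_gbps

for speed in (10, 40, 100):
    print(f"{speed:>3} Gbit/s -> {time_budget_ns(speed):6.2f} ns per packet")
```

At 10 Gbit/s this yields about 67 ns per minimum-size packet, dropping to roughly 6.7 ns at 100 Gbit/s, which is why per-packet processing budgets shrink so dramatically with link speed.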
As many stakeholders shape the demand for Internet accounting – some of which are
increasing demand, while others determine limits for accounting – an assessment
framework along four socio-economic dimensions has been developed to facilitate a
structured discussion. The assessment framework reflects the perspective of a network
operator. This perspective was chosen, since a network operator is the stakeholder that
will implement accounting functionality in the respective communications infrastructure it
operates and manages. A network operator is also the stakeholder that is confronted with
its own as well as external demand for accounting. Consequently, questions of technical
and economic feasibility become central concerns to a network operator. The paper’s main
objective, thus, is to determine a position with respect to the technical and economic
feasibility of accounting in high-speed networks and on high-speed links.
With this objective in mind, a number of questions need to be addressed:
1. How can high-speed accounting be defined and delineated from related terms?
2. Which are the primary (existing or emerging) technical approaches to high-speed
accounting, and how scalable is each approach (cf. Gilder’s law)?
3. Which are the relevant stakeholders in high-speed accounting, what is the state of
debate (including relevant legislation) with respect to key stakeholder conflicts, and
what implications with respect to economic feasibility may emerge from this debate?
4. Considering those conflicts, the ongoing debate, and the available or currently
being worked on regulation, which are the key lessons learnt regarding technical
and economic feasibility of high-speed accounting?
These four questions relate to the approach developed for and adopted in the paper.
Initially, the basic terminology with the respective relevant background information is
defined and delineated (question 1). Terminology relates to the respective applicable
understanding of accounting and high-speed. In addition to terminology, an assessment
framework is introduced. The assessment framework has been developed as an
instrument to facilitate a structured discussion along four socio-economic dimensions of
relevance to a network operator: What a network operator wants to account for
(managerial dimension), what a network operator is able to account for (technical
dimension and economic dimension), and finally what a network operator is either allowed
to or has to account for (legal dimension). Terminology and assessment framework are
discussed in the paper’s Section 2.
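The four dimensions spanning the assessment framework can be summarized as a small lookup structure. This is an editorial sketch (names and question wording are illustrative, not taken from the paper), intended only to make the framework's structure explicit:

```python
# Editorial sketch of the paper's four-dimension assessment framework.
# Each socio-economic dimension maps to the guiding question a network
# operator must answer when assessing high-speed accounting.
ASSESSMENT_FRAMEWORK = {
    "managerial": "What does the network operator WANT to account for?",
    "technical":  "What is the network operator technically ABLE to account for?",
    "economic":   "What can the network operator economically afford to account for?",
    "legal":      "What is the network operator ALLOWED to, or REQUIRED to, account for?",
}

def guiding_question(dimension: str) -> str:
    """Return the guiding question for a given socio-economic dimension."""
    return ASSESSMENT_FRAMEWORK[dimension]

print(guiding_question("legal"))
```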
Out of those four socio-economic dimensions that span the assessment framework,
technical feasibility is looked at first (question 2) in the paper’s Section 3. This covers an
overview of current and emerging techniques for accounting in high-speed networks. Each
technique is characterized and technical boundaries – determining limits faced with the
respective technique when applied in a high-speed network – are identified. The second
part of the discussion then looks at economic feasibility (question 3) in the paper’s Section
4 by documenting the state of debate among network operators and other key
stakeholders with respect to conflicting interests in high-speed accounting. This includes a
collection of arguments by stakeholder as well as an analysis of supra-national and
national regulations of relevance. The set of three debates investigated in detail covers
data retention, legal interception, and usage-based charging. Finally, important lessons
learnt with respect to both technical and economic feasibility (question 4) are presented
and key conclusions are drawn.
8.3 Lessons Learnt
Driven by the analysis framework introduced in the paper, high-speed accounting has
been investigated from a (technical and economic) feasibility point of view and from the
perspective of an ISP. A number of key lessons have been learnt. While the detailed
assessment of technical and economic feasibility of high-speed accounting can be found
in the paper’s Sections 3 and 4, respectively, this section gives insight into the respective
lessons learnt. For each lesson, a dedicated recommendation is made.
8.3.1 Lessons from Comparing High-speed Accounting Approaches
The analysis of various approaches to high-speed accounting as summarized in the
paper’s Table 1 shows that high-speed accounting systems are possible. No strong
indicator implying another assessment on a general basis was found. Table 1 states for
the majority of approaches looked at that they scale with network speed. It also states that
for each of the three debates looked at a suited high-speed approach exists—NetFlow and
sFlow are even qualified as applicable approaches for all three uses.
As positive as this overall assessment may seem with respect to the technical feasibility of
high-speed accounting, it is of utmost importance to realize that the assessment holds in
principle. Scalability with network speed is given in principle. Applicability of the various
approaches to one or several uses is given in principle. Any assessment might look
different if made for a specific use case. Table 1 therefore means that technical feasibility
may be assumed, while the requirements or context of a specific application case may
impose such high challenges that the technical feasibility of an approach risks turning into
mere theory.
The important lesson learnt here is that the assessment of Table 1 in the paper determines
a starting point. ISPs should prepare for future (legal as well as managerial) demands
regarding high-speed accounting as those demands are expected to grow rather than
diminish from this point in time onwards. What ISPs really need for this purpose is a solid
basis for decision making. Table 1 can give an overall, general assessment. It reveals the
urgent necessity for ISPs to study technical feasibility in a case-based manner.
Recommendation: ISPs should carry out and promote research to study technical
feasibility, gains, and trade-offs of HSA approaches to a representative number of
specific application cases. Cases refer to varying uses of HSA and to varying
jurisdictions with different legal frameworks. HSA approaches refer to an emphasis on ex
ante promising approaches such as NetFlow and sFlow.
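As a hypothetical illustration of why sampling-based approaches such as sFlow scale with link speed, the following simulation (an editorial sketch, not taken from the paper; packet sizes and the sampling rate are made up) scales a 1-in-N packet sample back up to estimate total traffic volume:

```python
import random

def estimate_volume(packet_sizes, sampling_rate_n, seed=42):
    """Estimate total bytes from a 1-in-N random packet sample.

    Each packet is sampled with probability 1/N; the sum of sampled sizes is
    scaled by N to form an unbiased estimate of the total volume. Only the
    sampled packets need to be processed, which is what makes sampling scale.
    """
    rng = random.Random(seed)
    sampled = [size for size in packet_sizes
               if rng.randrange(sampling_rate_n) == 0]
    return sampling_rate_n * sum(sampled)

# Synthetic traffic: one million packets, sizes between 64 and 1500 bytes.
rng = random.Random(0)
packets = [rng.randint(64, 1500) for _ in range(1_000_000)]

true_total = sum(packets)
estimate = estimate_volume(packets, sampling_rate_n=1000)
print(f"true: {true_total} B, estimate: {estimate} B, "
      f"relative error: {abs(estimate - true_total) / true_total:.2%}")
```

With a 1-in-1000 sampling rate, only about a thousand packets are touched, yet the scaled estimate typically lands within a few percent of the true total, illustrating the accuracy/effort trade-off that makes sampling attractive on high-speed links.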
8.3.2 Lessons from the Data Retention Debate
The ongoing data retention debate among European Union and German institutions draws
a picture of immense problems for an ISP. It results in a situation of uncertainty. And it tells
a story of substantial investments into high-speed accounting solutions as well as the
related operations costs to be expected. The deadlock between supranational and
national law exposes ISPs to high risks. The outcome of this debate is so far unclear, and
its impact on the actual legal frame with which ISPs must comply is uncertain. It seems
rather unlikely that any investment would be compensated if the underlying requirement
were overruled at some point.
This debate is, however, nothing more than an exemplary case reflecting a fundamental
problem: uncertainty stems from the lack of a harmonized legal frame. It is not uncommon
for ISPs, especially larger ones, to span multiple jurisdictions with their infrastructure.
Different jurisdictions impose different, potentially even conflicting, requirements on high-
speed accounting. The European Union-wide regulatory framework determines a case of a
regionally harmonized legal frame (which is positive with respect to legal certainty, in
principle), but the studied case shows that the situation in practical terms is considerably
more complex.
Recommendation: Legislators, policy makers, and lobbying organizations are
recommended to work towards internationally harmonized legal frameworks for
HSA. Harmonization should at least be achieved on a regional level, i.e., the region of an
economic area. It will be crucial for their success that legal frameworks constitute binding
law in ratifying jurisdictions. It will be equally crucial that institutional and procedural
questions are addressed. The objective here should be to minimize disruptions and
phases of uncertainty by fast and effective dispute resolution.
8.3.3 Lessons from the Legal Interception Debate
The legal interception debate may be seen, to a certain extent, as a narrower case of the
data retention debate: structurally and in terms of outcome, not in the subject matter itself.
The legal interception debate is studied on a national level, moving away from conflicts
arising between supranational and national regulation. Nonetheless, the detailed
investigation of the relevant Swiss federal law and the respective enactment reveals,
again, a situation of uncertainty for ISPs. In this case, however, the uncertainty is technical
in nature: the enactment lacks important information for an ISP regarding the required
information quality, roles and responsibilities in the procedure, costs, and so on.
Despite obvious deficiencies in the actual implementation of the enactment, the overall
dual legal model (a longer-lasting, objective-oriented, and technology-independent law on
the one hand, and a more dynamically adapted, implementation-oriented, and technology-
dependent enactment on the other) is appreciated in general. This model accounts for the
different longevity of requirements on high-speed accounting and of the actual
implementation: while primary objectives and high-level requirements on high-speed
accounting are expected to stay valid for longer periods, network technology and usage
patterns evolve much more dynamically. This difference is captured better by a dual
law/enactment model. In addition, procedures for adaptations are typically less stringent
for an enactment than for a law.
Recommendation: Legislators, policy makers, and lobbying organizations are
recommended to work towards adoption of a dual legal model for HSA regulations,
combining instruments of a law and an enactment (as a by-law). The law should cover
longer term objectives and it should abstract away from specific technology. The
enactment should reflect all relevant implementation specifics, and it should foresee
regular updates to reflect advancements in technology. ISPs should be closely involved in
the enactment drafting and revision process, so that the enactment avoids uncertainty with
respect to technical implementation questions.
8.3.4 Lessons from the Usage-based Charging Debate
The studied case on usage-based charging is a call for more homogeneity and for better
coordination. This relates on the one hand to the stakeholder group of ISPs. On the other
hand, it relates to technical solutions to implement high-speed accounting. Regarding the
group of ISPs, the debate pinpoints a market disproportion and the heterogeneous
incentives adopted by smaller and larger ISPs, respectively. The expected financial burden
(even though cost estimations are rare and show varying numbers) clearly threatens the
existence of smaller ISPs. Economies of scale are to be expected for high-speed
accounting. Consequently, the implementation of high-speed accounting systems seems
more affordable for large ISPs, as cost calculations from different sources indicate, while
small ISPs have to rely on favorable public opinion.
With many small ISPs actually being re-sellers, and with the offered communications and
connectivity services being essentially standardized (converging to a smaller number of
protocols and technologies in use), there could be a common technology basis for high-
speed accounting in the networks of large and small ISPs alike. The use of common
interfaces, protocols, or data models might enable ISPs of different sizes to carry the
burden more efficiently.
Recommendation: ISPs should work towards common technology for implementing
HSA. In consideration of diverse and potentially diverging incentive sets, ISPs should
coordinate and agree on the respective interfaces, protocols, and/or data models to follow.
8.4 High-speed Accounting Conclusions
The overall objective of this paper is to determine, from an ISP’s perspective, a position
with respect to the economic and technical feasibility of accounting in high-speed networks
and on high-speed links. Driven by the set of four accordingly derived questions to be
answered in this paper, the following conclusions have been drawn.
8.4.1 Conclusions in Relation to Question 1
How can high-speed accounting be defined and delineated from related terms?
The applicable accounting notion was defined type-wise as technical accounting, more
specifically as Internet accounting, which may be configured with respect to the suited
level of information granularity and time-based resolution. Process-wise, the accounting
process was determined to depend on a preceding metering process and to feed any
subsequent process, such as charging. This differentiated notion of accounting led to the
more specific definition of high-speed accounting, defined as accounting in high-speed
networks and on high-speed links. While the understanding of high-speed is a dynamic
one, at this point any link at speeds of 10 Gbit/s or more is considered a high-speed link.
In conclusion, a definition of the relevant terminology was found, and it is important to note
that this definition was found in consideration of established terminology. The resulting
high-speed accounting definition is therefore in line with existing concepts, leaving them
intact, while determining the precise dedicated niche for high-speed accounting within the
established terminology structure.
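The process chain summarized above (metering precedes accounting, which in turn feeds subsequent processes such as charging) can be sketched as follows. All function names, the granularity option, and the tariff are hypothetical illustrations, not taken from the paper:

```python
from collections import defaultdict

# Editorial sketch of the metering -> accounting -> charging chain.
# Granularity here is per source IP; a time-based resolution would add
# a time-bucket component to the aggregation key.

def meter(packets):
    """Metering: observe raw packets as (src_ip, size_bytes) tuples."""
    for pkt in packets:
        yield pkt

def account(metered, granularity="per_source"):
    """Accounting: aggregate metered data at the configured granularity."""
    totals = defaultdict(int)
    for src_ip, size in metered:
        key = src_ip if granularity == "per_source" else "all"
        totals[key] += size
    return dict(totals)

def charge(accounted, price_per_gb=0.02):
    """Charging: turn accounted volumes into monetary amounts (hypothetical tariff)."""
    return {key: volume / 1e9 * price_per_gb for key, volume in accounted.items()}

traffic = [("10.0.0.1", 1500), ("10.0.0.2", 64), ("10.0.0.1", 1500)]
bills = charge(account(meter(traffic)))
print(bills)
```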
8.4.2 Conclusions in Relation to Question 2
Which are the primary (existing or emerging) technical approaches to high-speed
accounting, and how scalable is each approach (cf. Gilder’s law)?
The primary technical approaches to high-speed accounting were identified and
characterized along multiple dimensions. The vast majority of approaches were found to
be fully centralized or to include centralized elements. Distributed accounting exists,
although it may rather be considered an emerging than a standard approach to high-speed
accounting. Only a few approaches were found to apply sampling techniques. Similarly,
only a few approaches render user traffic visible. On the other hand, all but one approach
were found to scale with network speed, in principle. In conclusion, scalability is thus far
seen to be achievable with most high-speed accounting approaches. Nevertheless, with
increasing link speeds and increasing demand for high-speed accounting, the technical
challenges are expected to increase as well; eventually, scalability will no longer be given.
Moreover, real-world cases show that ISPs face a combination of technical and socio-
economic challenges, not just an isolated challenge along a single technical, societal, or
economic dimension.
8.4.3 Conclusions in Relation to Question 3
Which are the relevant stakeholders in high-speed accounting, what is the state of the
debate (including relevant legislation) with respect to key stakeholder conflicts, and what
implications with respect to economic feasibility may emerge from this debate?
As diverse as the three debates investigated in this paper are, for instance with respect to
the specific geographical and legal context in which each debate takes place, all debates
share commonalities when it comes to the set and type of stakeholders involved. Three
groups of stakeholders were concluded to be typically engaged in any debate. The first
group consists of players that may, in their individual roles, shape the legal frame for high-
speed accounting. This group includes legislators such as members of parliaments and
the political parties they belong to. It includes governmental bodies and administrations
(thus, the executive power in a state), which prepare matters for legislation and draft
enactments from laws. And it includes policy makers in a wider sense. The second and
third groups consist of players that are mainly affected by the legal frame for high-speed
accounting, namely network operators, which have to implement high-speed accounting,
and civil society, which ultimately constitutes the user base and, thus, the source of any
accounted traffic. The group of network operators includes ISPs of various sizes, telecom
providers, and cable providers. The group of civil society includes all sorts of
organizations, often in the form of associations, representing the interests of citizens.
The analysis of the three debates investigated and documented showed that none of these
groups is conflict-free in itself. For instance, different players in the group of network
operators might follow diverging interests. Consequently, and despite the three
stakeholder groups identified, the set of debates emerging in and around high-speed
accounting is diverse and complex in nature. Alliances might change from one debate to
another. For instance, ISPs were observed to form alliances based on their debate-specific
assessment of technical and (primarily) economic feasibility.
8.4.4 Conclusions in Relation to Question 4
Considering those conflicts, the ongoing debate, and the available or currently being
worked on regulation, which are the key lessons learnt regarding technical and economic
feasibility of high-speed accounting?
A number of key lessons learnt, and the respective recommendations, were determined.
Overall, the conclusions on technical and economic feasibility differ from each other.
Technical feasibility of high-speed accounting is given, in principle, for the time being and
in consideration of the applicable definitions. Significant technical challenges are obvious,
though, and the implementation of specific high-speed accounting solutions might not be
technically feasible for all ISPs everywhere. In general terms, however, even at link
speeds of 10 Gbit/s and multiples thereof, technical solutions can still be found to cope
with emerging requirements on storage and computation. In other words, Gilder’s law has
to be kept in mind and much innovation will be needed in the near future, but today the
technical boundary for high-speed accounting has not yet been hit.
Regarding feasibility of high-speed accounting from an economic point of view, however,
the situation looks considerably less favorable. The ongoing debates around high-speed
accounting, some of which are depicted in this paper, reflect the struggle that ISPs, large
and small, are confronted with already today. ISPs suffer from legal uncertainty as key
questions remain essentially unclear: What kind of information, and at which data
granularity, has to be stored (and for how long) and/or provided? At which time-wise
resolution shall or may data be accounted? Within which time frame is data to be returned
to the ordering party? Where is traffic data sufficient, and where is payload required as
well? And if so, is this fundamentally in line with basic civil rights? All of these questions,
and more, have an impact on economic feasibility; even more so, since they might find
different answers in different jurisdictions. There are valid concerns, especially of smaller
ISPs, that the burden imposed by external demand for high-speed accounting may
threaten their very existence. Empirically based numbers on investment and operating
costs are rare. On the other hand, cost estimations from different sources provide at least
a strong indicator that costs will be substantial. Finally, even large ISPs have been shown
to fail at accounting data in the intended quality. In other words, a vast number of
stakeholder statements collected, and a smaller number of indicators, imply that certain
boundaries with respect to economic feasibility are already being reached by some ISPs
today.
9 Summary and Conclusions
The SESERV Coordination Action has been coordinating research within the European
Future Internet community with the aim of increasing awareness of prevalent and
emerging socio-economic (SE) issues and of discussing the suitability of particular
technologies in addressing them. This report describes the outcomes of those activities
related to the economics of the Future Internet, mainly focusing on incentive-compatible
mechanisms for effective collaboration and on high-speed accounting.
Figure 13: The WP2 Framework
SESERV provided a framework which helps technology developers and policy makers to
understand the complex interplay of technology and economics in the Internet. This
framework is composed of a methodology for evaluating Internet technologies (called
“tussle analysis”) and a set of taxonomies. The latter include:
a) An extensive taxonomy of Internet functionalities as presented in Sections 3.1, 3.2,
which covers both aspects of how services are being hosted (cloud-related
functionalities) and their actual delivery (network-related functionalities).
b) A generic classification of Internet stakeholders into seven high-level stakeholder
roles, as documented in Section 3.3, where each one is further decomposed into
more detailed instances.
c) Four socio-economic dimensions of the influencing factors on the demand for high-
speed Internet accounting, presented in the paper on the socio-economics of high-
speed accounting (discussed in Section 8.2), providing an assessment framework
for respective technologies.
Collaborating with a wide range of Challenge 1 research projects and members of the
FISE community (e.g., participants at SESERV events, cluster meetings organized by the
EU, FIA events, ITU or FIArch meetings, and other workshops), the above taxonomies
have been extended to incorporate suggestions and were utilized in applying the SESERV
tussle analysis methodology. This methodology encourages and guides technology
developers in identifying the stakeholders of the technologies under
development/investigation, their interests and assessing whether these would be met with
a particular implementation of each such technology. The idea is that designing a
technology in a more holistic way, by taking into account the interests of major
stakeholders early in the process, would lead to more sustainable socio-economic
outcomes and increase the chances of that technology being adopted in the long-term.
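Purely as an illustration (the role names, interest labels, and the `unresolved_tussles` helper below are our own constructs, not part of the SESERV methodology itself), the first steps of a tussle analysis can be sketched as simple bookkeeping: list the stakeholders and their interests, then check which interests a candidate implementation leaves unmet.

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    role: str                                   # e.g. "Edge ISP"
    interests: set = field(default_factory=set)

@dataclass
class Implementation:
    name: str
    satisfied: set = field(default_factory=set)  # interests this design meets

def unresolved_tussles(stakeholders, impl):
    """Per stakeholder role, the interests a candidate implementation
    leaves unmet; each one marks a potential tussle."""
    return {s.role: s.interests - impl.satisfied
            for s in stakeholders if s.interests - impl.satisfied}

# Hypothetical example roles, interests, and design:
isp = Stakeholder("Edge ISP", {"cost recovery", "traffic control"})
cp = Stakeholder("Content Provider", {"delivery quality", "traffic control"})
design = Implementation("QoS path setup", {"delivery quality", "cost recovery"})
print(unresolved_tussles([isp, cp], design))
# both roles still contend over "traffic control"
```

Such a tabulation does not resolve a tussle, but it makes explicit which stakeholder would have an incentive to push back against a given design choice.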
The outcomes of these bilateral discussions, wider focus groups and meetings resulted in
a set of seven recommendations to research projects, providers and policy makers for
successfully redesigning and configuring Future Internet technologies. These are:
a) Technology makers should understand major stakeholders' interests:
Towards this objective, Section 4.1 provides an overview of the interests of major
stakeholders (including possible conflicts) for all Internet functionalities.
b) Technology makers should allow all actors to express their choices: Section
3.4 gives a list of generic economic challenges, grouped into classes, in order to
provide technology makers with guidance when looking for candidate tussles in which
the functionality provided by their technology may be involved, while Section 4.2
provides an extensive list of tussles in the Networking and Cloud computing
research areas. These tussles and tussle groups can help designers understand
how unsatisfied stakeholders could react in their case. Furthermore, Appendix A
documents in detail how the tussle analysis has been applied, and especially how to
find technologies that are compatible with the stakeholders’ interests
(or “designed for tussle”).
c) Technology makers should explore consequences and dependencies on
complementary technologies: Section 4.2 provides a cartography of the tussles
that have been identified, the functionalities that these tussles entail and their
relationships (spillovers). Furthermore, Appendix A documents how the tussle
analysis can be used for exploring consequences and dependencies on
complementary technologies.
d) Technology makers and Providers should align conflicting interests through
incentive mechanisms: Appendix A provides several examples where tussles
could have been dealt with effectively had the appropriate economic mechanisms been in
place. Furthermore, Section 7 provides two related seeds for Future Internet design
principles, which have been contributed by SESERV to the FIArch working group,
namely the "Exchange of Information between End-Points" and "Sustain the
Investment".

³⁹ For example, the workshop entitled “The interplay of economics and technology for the Future Internet” that was held in January 2012.
⁴⁰ This particular version of D2.2 is restricted to members of the SESERV consortium and the reviewers’ team due to applicable publication rules of the ITU-T with regard to meetings’ inputs/outputs as well as non-final Recommendation documents. The public version of D2.2 contains the results of the coordination activities along with a publishable digest of the interactions with ITU-T, while this restricted version at hand includes all contents from the public version together with the complete output of ITU interactions.

e) Technology makers should increase transparency: By examining the tussle
cartography, Section 4.3 provides useful insight into critical functionalities that are
missing and that have negative effects on other functionalities. In particular, it was found
that if a number of Security-related mechanisms (especially monitoring) were in
place, tussles in other functionalities would be resolved more
smoothly. Furthermore, Section 8 considers the managerial and technical feasibility
and other socio-economic challenges of high-speed Internet accounting in a world
of increasing volumes of real-time communication.
f) Policy makers should encourage knowledge exchange and joint
commitments: Performing the detailed tussle analysis led to suggestions of
candidate technologies that follow the “Design for Tussle” goal, though it was not
always feasible for the projects to include these in their architectures. There were several
valid reasons for this, such as a lack of expertise in other research projects and
groups on the economic issues on which SESERV focused, a lack of resources, and
the need to focus on the contracted workplan. Given the limited duration of the research
projects and the need for a systematic approach to dealing with the complex
socio-economic challenges, it is recommended that projects be encouraged to
announce shortcomings of their technology and dependencies on other
technologies in a way that makes possible their continuous evolution by other
entities. To this end, Section 5 provides a survey of technologies proposed by a
carefully selected set of 11 Challenge 1 research projects.
The joint work with a team of European experts in high-speed accounting resulted in the
following set of four recommendations, to be published in a whitepaper on high-speed
accounting:
a) ISPs should carry out and promote research to study technical feasibility, gains,
and trade-offs of high-speed accounting approaches to a representative
number of specific application cases.
b) Legislators, policy makers, and lobbying organizations are recommended to work
towards internationally harmonized legal frameworks for high-speed
accounting.
c) Legislators, policy makers, and lobbying organizations are recommended to work
towards adoption of a dual legal model for high-speed accounting
regulations, combining instruments of a law and an enactment (as a by-law).
d) ISPs should work towards common technology for implementing high-speed
accounting. In consideration of diverse and potentially diverging incentive sets,
ISPs should coordinate and agree on the respective interfaces, protocols, and/or
data models to follow.
Besides the legacy described above, several research projects benefited from the
interactions held during the SESERV action’s lifetime. For example, it was found that
“announcing the percentage of altruistic users does not incentivize selfish users to relay
traffic”. This result, together with the focus group discussions, confirmed the need for the
ULOOP project to design economic mechanisms. Recent results from economic theory
on the limitations of economic mechanisms have also been discussed with the
ULOOP project. Furthermore, the tussle analysis for the ETICS project revealed that the
technologies designed for managing QoS-aware paths between ISPs allow involved
parties to express their choices and at the same time reduce instability in the transmission
functionality; the project is currently exploring policies for dealing with the regulators’ concern that
ISPs would have no incentives for investing in the Best-Effort Internet. Similarly, the SAIL
project confirmed the need, identified by the SESERV tussle analysis, to allow Content
Owners to influence the update frequency of the content items stored in caches.
In conclusion, based on the feedback SESERV received from other project representatives
and members of the FISE community, as in the case of the Athens workshop, it is
apparent that SESERV managed to identify, discuss, and increase Future Internet
stakeholders’ awareness of key (socio-)economic issues related to FI technologies. It is
worth mentioning that the collaboration of SESERV members with some projects and
institutions will continue after the end of the SESERV project’s lifetime. For example,
the combination of the tussle analysis, MACTOR, and UBM methodologies will be explored
together with members of the UNIVERSELF project for identifying feasible future value
networks. Similarly, SESERV members will continue providing their expertise to the ITU
and the FIArch Group.


10 References
[1] N. Thi-Mai-Trang, “On The Way To a Theory for Network Architectures”, World Computer Congress,
Network of the Future Conference, Brisbane, Australia, September, 2010
[2] A. Bogliolo: User-Centric Wireless Networks – A case study for Tussle analysis. SESERV Workshop on
the Interplay of Economics and Technology, pp. 1-25, Athens, Greece, January 31, 2012.
[3] P. Demestichas, Y. Kritikou, D. Kavounas, A. Georgakopoulos: Opportunistic Networks and Cognitive
Management Systems for Efficient Application Provisioning in the Future Internet. SESERV Workshop
on the Interplay of Economics and Technology, pp. 1-21, Athens, Greece, January 31, 2012.
[4] The SESERV Coordination Action: First Report on Economic Future Internet Coordination Activities.
SESERV Deliverable D2.1, pp. 1-119, September 8, 2011.
[5] The SESERV Coordination Action: First Report on Social Future Internet Coordination Activities.
SESERV Deliverable D3.1, pp. 1-67, September 8, 2011.
[6] Y. Shoham, K. Leyton-Brown: Multiagent Systems - Algorithmic, Game-Theoretic, and Logical
Foundations. Cambridge University Press 2009: I-XX, 1-483
[7] The PURSUIT project: Deliverable D2.3, http://fp7pursuit.ipower.com/PursuitWeb/wp-
content/uploads/2011/12/INFSO-ICT-257217_PURSUIT_D2.3_Architecture_Definition_Components_
Descriptions_and_Requirements.pdf
[8] The SAIL project: Deliverable DA.7, http://www.sail-project.eu/wp-content/uploads/2011/08/SAIL_DA7-
Final-Version_public.pdf
[9] P. Jokela, A. Zahemszky, C. E. Rothenberg, S. Arianfar, and P. Nikander, LIPSIN: line speed
publish/subscribe inter-networking. In Proceedings of the ACM SIGCOMM 2009 conference on Data
communication (SIGCOMM '09). ACM, New York, NY, USA, 195-206.
[10] The ETICS research project: Revision of ETICS Architecture and Functional Entities, Deliverable D4.3,
2012, https://bscw.ict-etics.eu/pub/bscw.cgi/d37005/ETICS_D4.3_v1.0.pdf
[11] S. Farrell, D. Kutscher, C. Dannewitz, B. Ohlman, and P. Hallam-Baker, “The Named Information (ni)
URI Scheme: Core Syntax,” IETF, Internet-Draft – work in progress 00, October 2011.
[12] D’Ambrosio, Dannewitz, Karl, Vercellone, MDHT: Hierarchical Name Resolution Service for Information-
centric Networks, ACM SIGCOMM 2011 ICN Workshop, August 2011
[13] C. Kalogiros, C. Courcoubetis, G. D. Stamoulis, M. Boniface, E. T. Meyer, M. Waldburger, D. Field,
and B. Stiller, “The Future Internet,” ch. An approach to investigating socio-economic tussles arising
from building the Future Internet, pp. 145–159, Berlin, Heidelberg: Springer-Verlag, 2011.
[14] The SESERV Coordination Action: Second Year Report on Scientific Workshop, SESERV Deliverable
D1.4, draft version, February 2012.
[15] The SESERV Coordination Action: Methodology for SESERV 2nd Year Discussions. SESERV
Deliverable D1.5, February 2012.
[16] E-communications household survey, Special Eurobarometer 381, available online at
http://ec.europa.eu/public_opinion/archives/ebs/ebs_381_en.pdf
[17] Garrett Hardin, “The Tragedy of the Commons”, Science 13 December 1968: 162 (3859), 1243-1248.
[18] The MEDIEVAL project, D2.1: Requirements for video service control
[19] The MEDIEVAL project, D3.1: Concepts for Wireless Access in relation to cross-layer optimization
[20] The MEDIEVAL project, D4.2: IP Multicast Mobility Solutions for Video Services
[21] The MEDIEVAL project, D5.1: Transport Optimization: initial architecture
[22] The ENVISION project, D3.1, Initial Specification of the ENVISION Interface, Network Monitoring and
Network Optimisation Functions, February 2011
[23] The ENVISION project D3.2, Refined Specification of the ENVISION Interface, Network Monitoring and
Network Optimisation Functions, January 2012
[24] RFC4848, Domain-Based Application Service Location Using URIs and the Dynamic Delegation
Discovery Service (DDDS)
[25] RFC5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL)
Profile
[26] RFC5246, The Transport Layer Security (TLS) Protocol
[27] D.D. Clark, J. Wroclawski, K.R. Sollins, R. Braden, Tussle in Cyberspace: Defining Tomorrow’s Internet.
IEEE/ ACM Trans. Networking 13, 3, pp. 462-475, June 2005.
[28] Y. Liu, H. Zhang, W. Gong, D. Towsley, On the Interaction Between Overlay and Underlay Routing,
Proc. IEEE INFOCOM 2005
[29] RFC1958, B. Carpenter, “Architectural Principles of the Internet,” June 1996
[30] EC FIArch Group, “Fundamental Limitations of current Internet and the path to Future Internet,” March
2011.
[31] I. Papafili, S. Soursos, G. D. Stamoulis, A Novel Game-Theoretic Framework for Modeling Interactions
of ISPs Anticipating Users' Reactions, to be published in the Proceedings of the 6th International
Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2012), October 9–
12, 2012, Cargèse, France
[32] Godet,M.,“Actors’ moves and strategies: the MACTOR method,” Futures, 1991, pp.605-622
[33] Godet, M., “From Anticipation to action. A handbook of strategic prospective”, Unesco Publishing, 1994,
Paris, France
[34] The SESERV Coordination Action: Final Report on Social Future Internet Coordination Activities.
SESERV Deliverable D3.2, August 2012.
[35] A. Odlyzko, "The History of Communications and its Implications for the Internet" 2000. Available at
SSRN: http://ssrn.com/abstract=235284
[36] R. B. Myerson and M. A. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic
Theory, 29:265–281, 1983.
[37] The SAIL project: D.B.2: NetInf Content Delivery and Operations
[38] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun.
Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336
http://doi.acm.org/10.1145/52325.52336
[39] The ETICS research project: Business and legal framework for Network Interconnection, Deliverable
D3.1, 2010, available online at https://bscw.ict-
etics.eu/pub/bscw.cgi/d18622/D3.1%20Business%20and%20legal%20framework%20for%20Network
%20Interconnection%20%28v1%29.pdf
[40] Quoitin, B.; Pelsser, C.; Swinnen, L.; Bonaventure, O.; Uhlig, S.; , "Interdomain traffic engineering with
BGP," Communications Magazine, IEEE , vol.41, no.5, pp. 122- 128, May 2003, doi:
10.1109/MCOM.2003.1200112,
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1200112&isnumber=27016
[41] George Athanasiou, Kostas Tsagkaris, Panagiotis Vlacheas, and Panagiotis Demestichas. 2011.
Introducing energy-awareness in traffic engineering for future networks. In Proceedings of the 7th
International Conference on Network and Services Management (CNSM '11). International Federation
for Information Processing, Laxenburg, Austria, Austria, 367-370.
[42] The C2POWER project: D5.1 “Cooperative short-range strategies and protocols for power saving,
available online at http://www.ict-c2power.eu/images/Deliverables/C2POWER_D5.1.pdf
[43] OPTIMIS project Whitepaper, “Why Use OPTIMIS”, available online at http://www.optimis-
project.eu/sites/default/files/OPTIMIS%20White%20Paper.pdf
[44] P. Čeleda, R. Krejci, J. Bariencik, M. Elich, V. Krmicek: HAMOC - Hardware-Accelerated Monitoring
Center, Networking Studies V: Selected Technical Reports. Prague: CESNET, z.s.p.o., 2011. ISBN
978-80-904689-1-7, pp. 107-133, 2011
[45] J. Coppens, E.P. Markatos, J. Novotny, M. Polychronakis, V. Smotlacha, S. Ubik: SCAMPI - A
Scaleable Monitoring Platform for the Internet, Proceedings of the 2nd International Workshop on Inter-
Domain Performance and Simulation (IPS 2004), Budapest, Hungary, 22-23 March 2004
[46] The Institute of Electrical and Electronics Engineers (IEEE): IEEE Standard for Information
Technology–Telecommunications and Information Exchange between Systems–Local and
Metropolitan Area Networks–Specific Requirements. Part 3: Carrier Sense Multiple Access with
Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications. Amendment 4:
Media Access Control Parameters, Physical Layers, and Management Parameters for 40 Gb/s and 100
Gb/s Operation; IEEE Std 802.3ba-2010, Amendment to IEEE Std 802.3-2008, pp. 1-433, New York,
USA, June 2010.
[47] G. F. Gilder, “Telecosm: How Infinite Bandwidth Will Revolutionize Our World”, Free Press, 2000, ISBN-
10: 0684809303, ISBN-13: 978-0684809304
[48] G. E. Moore, “Cramming More Components onto Integrated Circuits”, Electronics, Vol. 38, No. 8, pp.
114-117, 1965




11 Abbreviations
3G 3rd Generation (Network)
4G 4th Generation (Network)
AAA Authentication, Authorization, Accounting
ANP Access Network provider
API Application Programming Interface
ASP Application Service Provider
ASQ Assured Service Quality
BGP Border Gateway Protocol
BONFIRE Building service test beds on FIRE
BRA Bloom filter-based Relay Architecture
BS Base Station
BT British Telecom
C2C Consumer-to-Consumer
C2POWER Cognitive Radio and Cooperative Strategies for POWER saving in
multi-standard wireless devices
CAPEX Capital Expenditures
CDN Content Delivery Network
CDNNC CDN Node Control
CIDR Classless Inter Domain Routing
CINA Collaboration Interface between Networks and Applications
CLI Command-Line Interface
CMS Cache Management Service
CP Content Provider
CPU Central Processing Unit
CRM Customer Relationships Management
CSA Coordination and Support Action
DANE DNS-Based Authentication of Named Entities
DDDS Dynamic Delegation Discovery Service
DHCP Dynamic Host Configuration Protocol
DHT Distributed Hash Table
DNS Domain Name Service
e2e end-to-end
EC European Commission
ENVISION Enriched Network-aware Video Services over Internet Overlay
Networks
ETICS Economics and Technologies for Inter-Carrier Services
FI Future Internet
FIA Future Internet Assembly
FIArch Future Internet Architecture
FIRE Future Internet Research and Experimentation
FISE Future Internet Socio-Economics
FN Future Network
GEYSERS Generalised Architecture for Dynamic Infrastructure Services
GIN Global Information Network
GPRS General packet radio service
HSA High-speed (Internet) Accounting
IaaS Infrastructure as a service
IBP Internet Backbone Provider
IC InterCarrier
ICN Information-Centric Networking
ICT Information and Communications Technology
IMS IP Multimedia Subsystem
IMT International Mobile Telecommunications
INFSO Information Society
IO Information Object
IoT Internet-of-Things
IP Internet Protocol
ISO International Organization for Standardization
IT Information Technology
ITU International Telecommunication Union
JSON JavaScript Object Notation
KISS Keep It Simple, Stupid
LICL Logical Infrastructure Composition Layer
LSA Link State Advertisement
LTE Long Term Evolution
MDHT Multi-level Distributed Hash Table
MEDIEVAL MultimEDia transport for mobIlE Video AppLications
MIB Management Information Base
MN Mobile Node
NANOG North American Network Operators' Group
NCP+ Network Control Plane
NEGOCODE Network Guided Optimization of Content Delivery
NGN Next Generation Networks
NGN-GSI Next Generation Networks Global Standards Initiative
NRS Name Resolution Service
NSP Network Service Provider
OAM Operations Administration Maintenance
OCCI Open Cloud Computing Interface
OPEX Operating Expenditures
OPTIMIS Optimized Infrastructure Services
OSI Open Systems Interconnection
OSPF Open Shortest Path First
OVF Open Virtualization Format
PA Principal Agent
PaaS Platform as a service
PCN Pre Congestion Notification
PDU Protocol Data Unit
PKI Public Key Infrastructure
POP Point of Presence
PPP Public Private Partnership
PURSUIT Publish Subscribe Internet Technology
QoE Quality of Experience
QoS Quality of Service
RENE Rendezvous Network
RId Rendezvous Identifier
ROI Return-on-Investment
RTD Research and Technology Development
SaaS Software as a service
SAIL Scalable and Adaptive Internet solutions
SESERV Socio-Economic Services for European Research Projects
SG13 Study Group 13
SId Scope Identifier
SIS Server Information Service
SLA Service Level Agreement
SML Service Middleware Layer
SNMP Simple Network Management Protocol
SP Service Provider
SRV Service record
TCP Transmission Control Protocol
TE Traffic Engineering
TLS Transport Layer Security
TREC Trust Risk Eco-efficiency Cost
TTC Telecommunication Technology Committee
UBM Unified Business Modeling
UCN User-centric Networking
UK United Kingdom
ULOOP User-centric Wireless Local Loop Project
UMTS Universal Mobile Telecommunications System
U-NAPTR URI-Enabled Name Authority PoinTeR
UNIVERSELF Realizing autonomics for Future Networks
URI Uniform Resource Identifier
VM Virtual Machine
VoIP Voice over IP
VNC Value Network Configuration
WiFi Wireless Fidelity
WP Work Package
WS Workshop
WSAG4J WS-Agreement for Java
XLO Cross-Layer Optimization module
XML Extensible Markup Language


12 Acknowledgements
This deliverable was made possible by the generous and open help of the SESERV team,
and especially WP2 and WP3, within this CSA. Furthermore, the authors of D2.2 extend
many thanks to the participants of the events organized by SESERV for contributing their
expertise to the discussions, as well as to the representatives of Challenge 1 projects
with whom bilateral interactions were carried out.

Appendix A Detailed Tussle Analysis for a Subset of
FP7 Research Projects
This section provides a detailed tussle analysis of selected research projects. These
projects are: ETICS, UNIVERSELF, SAIL, PURSUIT, ULOOP, C2POWER, OPTIMIS, and
BONFIRE.
A.1 Detailed Tussle Analysis for ETICS Technologies
A.1.1 Introduction to the ETICS System

The ETICS project proposes a set of technologies and the associated economic
mechanisms so that participating providers can jointly offer premium connectivity services
to their customers. These services are provided to ETICS customers by stitching together
connectivity agreements (called Assured Service Quality (ASQ) agreements, or goods)
from several ETICS providers. The ETICS products are not just Best-Effort connectivity
products: they provide tangible Quality of Service assurances in terms of reliability,
bandwidth, delay, jitter, etc. over a certain ASQ path; this path may differ from the
path returned by the Border Gateway Protocol (BGP) in order to meet the QoS constraints
demanded by the customer.
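As a toy numeric sketch (the paths, delays, and prices below are invented for illustration and are not taken from ETICS), the contrast can be seen by comparing a hop-count-based choice, as BGP route selection would make, with a price-minimizing choice under a customer's delay bound:

```python
# Candidate inter-domain paths with hypothetical metrics.
paths = [
    {"hops": 2, "delay_ms": 80, "price": 10},  # BGP winner: fewest hops
    {"hops": 3, "delay_ms": 35, "price": 25},
    {"hops": 4, "delay_ms": 30, "price": 40},
]

# BGP-like selection: shortest AS path, blind to QoS.
bgp_choice = min(paths, key=lambda p: p["hops"])

def asq_choice(paths, max_delay_ms):
    """Cheapest path that still meets the customer's delay bound,
    or None if no path is feasible."""
    feasible = [p for p in paths if p["delay_ms"] <= max_delay_ms]
    return min(feasible, key=lambda p: p["price"]) if feasible else None

print(bgp_choice["delay_ms"], asq_choice(paths, 40)["delay_ms"])
# prints: 80 35 -- the BGP route violates a 40 ms bound, the ASQ path meets it
```

Under a 40 ms bound the hop-count winner is infeasible, so the ASQ path necessarily diverges from the BGP route, exactly as stated above.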
In a federated environment such as the ETICS marketplace, service composition is the
process of establishing the end-to-end path and the technical parameters of the
associated Service Level Agreements between pairs of ETICS network operators. In order
to do so, participants must be aware of the available services/products and all the
necessary information (prices, availability, etc.), which takes place through a service
discovery mechanism. Other types of functionality explored are admission control for
service establishment and SLA monitoring mechanisms for collecting the necessary
information during service provisioning to validate conformance to contract terms.
This new ecosystem gives incentives for ETICS Communication Providers to invest in new
network infrastructure and extend their business model by offering advanced connectivity
services in a cooperative way, and for Information Providers to offer premium-quality
services to their customers (such as stereo VoIP services offered by Communication
Providers).
ETICS Communication Providers can be categorized based on their role during service
provisioning as “Edge Internet Service Providers (ISP)” who serve ETICS customers and
“Transit ISPs” who interconnect Edge ISPs. Several scenarios studied involve Brokers
(called “Facilitators”) who provide supporting services to Edge and Transit ISPs, such as
the catalogue of available ASQ products.
The main ETICS customers are Service Providers (SP) that can be further decomposed
based on the type of traffic into Content SPs, Communication SPs, Application SPs and
Online Gaming SPs. Furthermore, the uptake of cloud-enabled services has increased the
interest of corporate and residential customers for premium connectivity services.
Figure 14 below provides a simple example of a potential conflict within the ETICS
ecosystem that serves as an introduction to tussle identification and analysis. Suppose a
specific SLA exists between an end-user and ISP-1 governing the expected Quality of
Experience (QoE) in terms of throughput and response time to a Content Provider. This
customer-provider relationship appears as a green solid line with diamond-shaped edges.
In ETICS individual service providers will typically maintain their own SLAs for the
expected Quality of Service from one provider to the next. Such advanced interconnection
agreements describe the handling of traffic between providers (possibly for multiple ETICS
customers) and thus exist in isolation from the QoE SLA governing the end-to-end (e2e)
service. Furthermore, these SLAs can refer to composite services of more than one ISP by
stitching a set of atomic ASQ goods. In the ‘cascaded pull model’ scenario studied by
ETICS, atomic ASQ goods are set up by each ISP between border routers and advertised
to their neighbours, who can possibly advertise them further.
In Figure 14, ISP-3 has created an atomic ASQ good with certain properties between its
routers F and H. ISP-2, having learned about that specific interconnection service offer,
decides to use it together with a local ASQ good between routers C and D, and
disseminates this new composite ASQ good to ISP-1. Thus ISP-1 can reach the
Content Provider through ISP-2 and ISP-3 and serve the customer’s request for premium connectivity.
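This stitching of atomic goods into a composite offer can be sketched as follows; the composition rules (delays add up along the path, the offered bandwidth is capped by the tightest segment, prices accumulate) and all figures are illustrative assumptions on our part, not ETICS specifications:

```python
def stitch(*goods):
    """Compose atomic ASQ goods end to end under the assumed rules:
    additive delay and price, bottleneck bandwidth."""
    return {
        "delay_ms": sum(g["delay_ms"] for g in goods),
        "bandwidth_mbps": min(g["bandwidth_mbps"] for g in goods),
        "price": sum(g["price"] for g in goods),
    }

# Hypothetical atomic goods matching the Figure 14 narrative:
isp3_f_h = {"delay_ms": 10, "bandwidth_mbps": 100, "price": 4}  # ISP-3: F to H
isp2_c_d = {"delay_ms": 5, "bandwidth_mbps": 200, "price": 3}   # ISP-2: C to D

# The composite good ISP-2 could advertise onward to ISP-1:
composite = stitch(isp2_c_d, isp3_f_h)
print(composite)  # {'delay_ms': 15, 'bandwidth_mbps': 100, 'price': 7}
```

The same composition can then be repeated recursively by ISP-1, which is what makes cascaded advertisement of composite goods possible.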
Two SLAs will be created for this end-to-end service at the ETICS marketplace (shown as
blue solid lines with diamond-shaped edges): one between ISP-1 and ISP-2 for the path
between C and H, and another between ISP-2 and ISP-3 for the path between F and H.
Both ISP-2 and ISP-3 will have to manage their networks so that all SLAs are honoured, for
instance by selecting one out of the many available intra-domain paths to route that particular traffic.
Furthermore, ISPs usually interconnect at multiple locations for more routing options and
increased reliability.

Figure 14: A Scenario of Premium Interconnection Services Under the ‘Distributed Pull’
Coordination Model

Suppose now that there is a failure of some kind on the network of ISP-2. Typically,
failover will be catered for and the service can continue across any backup path. However,
before the recovery completes, the end-user may notice degradation in the quality of the
service provided by their ISP and the QoE SLA could have already been violated.
The question now is: who should be held responsible for the failure? Without sufficient
monitoring as well as cross-checking between all pairs of interconnected ISPs (ISP-1 and
ISP-2; ISP-2 and ISP-3), it is not clear whether the delay in traffic forwarding occurs as the
traffic leaves or arrives at the respective service provider. What if the content provider
experiences local network issues as well? How is ETICS to decide where to
apportion blame? Tussle analysis can help identify, define, and ultimately assess solutions
to the problem.
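Assuming per-segment measurements were available from every pair of interconnected ISPs, blame apportionment would reduce to comparing each segment against its contracted delay budget. The following sketch, with invented figures, is illustrative only:

```python
# Hypothetical per-segment one-way delay measurements vs. contracted budgets.
segments = {
    ("ISP-1", "ISP-2"): {"measured_ms": 12, "budget_ms": 15},
    ("ISP-2", "ISP-3"): {"measured_ms": 41, "budget_ms": 20},
    ("ISP-3", "CP"):    {"measured_ms": 8,  "budget_ms": 10},
}

def apportion_blame(segments):
    """Return the segments whose measured delay exceeds the
    contracted budget: the candidates responsible for an
    end-to-end QoE SLA violation."""
    return [pair for pair, m in segments.items()
            if m["measured_ms"] > m["budget_ms"]]

print(apportion_blame(segments))  # [('ISP-2', 'ISP-3')]
```

Without such cross-checked measurements on both sides of each interconnection, the table above cannot be filled in, which is precisely the transparency gap the text identifies.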
In what follows we provide examples of using the tussle analysis framework in the context
of ETICS to predict and analyze potential tussles. We also critique (where possible) the
technology choices that lead to poor tussle outcomes.
A.1.2 Case study A: QoS-aware Transmission and Transit Competition
The basic idea is that two competing Tier-2 Internet Service Providers may benefit by
establishing an ASQ agreement appropriate for transporting and terminating each other’s
(customers’) traffic, since this will allow more lucrative end-to-end (e2e) services requiring
QoS to be provisioned. The basic conflict of interest in such an ETICS-enabled
marketplace is between improving the value of each ISP’s service to its own end-customers
and becoming more competitive in the provisioning of such e2e services. An interesting
observation is that if such an ASQ is established without the right controls in place, then it
may cause, as a side effect, a market failure for premium transit services.
The larger ISP, which is directly attached to more content providers and hence can provide
content with better quality to its customers (ISPs buying transit services), is concerned about
losing this competitive advantage and becoming just like any other ISP due to the
ASQ that interconnects them. This might prevent the large ISP from establishing the ASQ even
though it could be beneficial for both during the first phase of the tussle evolution. We
analyse this phenomenon and, using the principles of design for tussle (described in [4]),
show that if enough control is available in the definition of the ASQ agreement, then the
effects of a spillover can be reduced and hence the ASQ agreement can function as
originally envisaged.
Let us suppose that a large operator, called ISP-1, has attached the cache of a popular
Content Provider (such as YouTube) to its network and no financial transactions take
place (peering agreement). Furthermore it has a peering link with ISP-2, which allows
them to exchange their customers’ traffic for “free”. A third operator, called ISP-3, buys
transit connectivity from ISP-1 as a result of the higher quality connectivity to the Content
Provider. All ISPs have a number of end customers, but ISP-1 has the largest market
share, followed by ISP-2.
The main set of stakeholder roles includes ETICS Communication Service Providers or
ETICS ISPs for short (mainly Tier2, Tier3 ISPs) and Content Providers. Other involved
roles are consumers of ICT services and Regulators. For brevity, we will concentrate on
the first set of actors and stakeholder roles.
In today’s Internet, peering links are usually under-dimensioned. More specifically, ISP-1
has no incentive to upgrade the capacity of the peering link, in order to maintain its
competitive advantage over ISP-2 for communication providers that buy transit services.
Thus, unless peered ISPs are perfectly symmetric in terms of volume exchanged and
networking services supported, such a tussle outcome – for example, in the service
composition functionality – would not be reached.
Seventh Framework CSA No. 258138 D2.2 Final Report on Economic FI Coordination
Public

Version 2.0 Page 99 of 161
© Copyright 2012, the Members of the SESERV Consortium


Figure 15: A Scenario for Internet Connectivity Market
Furthermore, in the current Best Effort Internet, ISPs try to improve the quality of the
services offered to their customers by performing traffic engineering. This means that the
inability to compose network services with QoS features has a spillover to the routing
functionality, as Figure 16 shows. More specifically, ISPs will enter a loop of performing
routing in a way that optimizes the peering link usage⁴¹ in a selfish way.
At the same time, as was mentioned in the final SESERV Workshop, many Gaming
Providers want to offer gamers a neutral playing field regardless of the location of the
gaming server. In order to do so, they employ a bonus-malus system for balancing users’
QoE. After benchmarking the network response times (one-way delay) of each gamer,
they route the traffic of high-delay users over faster paths than those used for low-delay
users. What is interesting is that this behaviour actually cancels out the traffic engineering
efforts of ISPs.
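The bonus-malus balancing described above can be sketched in a few lines. This is a toy model: the gamer names, delay figures and the greedy pairing rule are illustrative assumptions, not taken from any Gaming Provider's actual system.

```python
# Toy bonus-malus path assignment: gamers with high benchmarked one-way
# delay are routed over the fastest available paths, so that everyone's
# end-to-end delay converges. Assumes one path per gamer, for simplicity.

def assign_paths(gamer_delays, path_delays):
    """Pair the highest-delay gamers with the lowest-delay paths.

    gamer_delays: {gamer_id: benchmarked one-way delay in ms}
    path_delays:  {path_id: path delay in ms}
    Returns {gamer_id: path_id}.
    """
    # Worst-off gamers first ...
    gamers = sorted(gamer_delays, key=gamer_delays.get, reverse=True)
    # ... get the fastest paths.
    paths = sorted(path_delays, key=path_delays.get)
    return dict(zip(gamers, paths))

assignment = assign_paths(
    {"alice": 80, "bob": 20, "carol": 50},
    {"fast": 5, "medium": 15, "slow": 30},
)
```

Note how this per-application rerouting works against whatever peering-link optimisation the ISPs have performed, which is exactly the cancellation effect described above.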
Now let’s consider what happens in the QoS functionality. Assuming in this toy scenario a
stable initial outcome (green circle with dotted-line border), then two possible cases are
shown depending on what SLA properties can be configured.
In the first case, an ASQ good is assumed to describe QoS-related properties only, like
bandwidth, delay, jitter, etc. If such an ASQ good has been set up between ISP-1 and ISP-
2, then the former would increase the quality that its customers perceive when interacting
with customers of the latter for services like video. Similarly, ISP-2 would get premium-
quality connectivity both to the Content Provider and to the rest of ISP-1’s end-customers
without increasing its cost. This gives ISP-2 an advantage in competing with ISP-1 for end
customers. The reason is that customers of ISP-3 can access a popular destination (the
Content Provider) with similar quality across both transit providers ISP-1 and ISP-2. As we
described above, this tussle outcome is not desired by ISP-1 and thus is not stable (shown
as a blue circle).

⁴¹ A classic example is the “hot-potato routing” case between two peered ISPs that are interconnected at
multiple locations.


Figure 16: Candidate Tussle Evolution for QoS-aware Service Composition

One way for ISP-1 to deal with this tussle would be to stop offering that ASQ good and to
exchange their customers’ traffic through another transit provider (a Tier-1). In this
outcome both providers would experience increased cost relative to the initial state and
would not be satisfied. Similarly, ISP-2 would find it beneficial to peer with the Content
Provider for free. If ISP-1 had performed this analysis before adopting the ETICS solution,
then, under certain assumptions related to its effect on demand in other market
segments, the expected Return-on-Investment (ROI) would not justify adoption.
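Such an adoption analysis reduces to a back-of-the-envelope check. The sketch below is purely illustrative: the function names, the hurdle rate and all figures are assumptions, not ETICS results.

```python
# Illustrative adoption check: ISP-1 weighs expected ASQ revenue against
# the investment cost and the revenue cannibalised in other market
# segments (e.g. transit customers defecting to ISP-2 once quality
# equalises). All names and figures are hypothetical.

def expected_roi(asq_revenue, investment_cost, cannibalised_revenue):
    """ROI as a fraction of the investment over the planning horizon."""
    return (asq_revenue - cannibalised_revenue - investment_cost) / investment_cost

def should_adopt(asq_revenue, investment_cost, cannibalised_revenue,
                 hurdle_rate=0.10):
    """Adopt only if the expected ROI clears the required hurdle rate."""
    return expected_roi(asq_revenue, investment_cost,
                        cannibalised_revenue) >= hurdle_rate
```

With, say, an expected ASQ revenue of 120, an investment of 100 and cannibalised revenue of 40, the ROI is negative and adoption is not justified; with no cannibalisation the same figures clear a 10% hurdle.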
In the second case, a mechanism is introduced for determining the set of IP addresses
serviced by an ASQ agreement. This would allow ISP-1, for example, to set up one ASQ
agreement for the Content Provider’s range of IP addresses for a fee, and another one for
the rest of its customers for free. This is in line with the existing situation, where peering
links are not suitable for moving data sensitive to congestion effects. In this case, the actor
who can compose an ASQ product has more power in controlling its properties than those
who have to buy it. This power can be balanced by giving customers the ability to request
the creation of an ASQ product on demand. It is important to note that, in either case, the
introduction of prices helps the parties involved to find an equilibrium that is fair for both of
them (shown as a blue-bordered circle). Furthermore, a regulator would be asked to
intervene should an anti-competitive tactic be identified⁴².

⁴² We should note that the Content Provider will have deployed several caches around the world. This
means that alternative paths will be available (for instance, through a Tier-1 provider, which is not shown in
the figure above).

To illustrate this, let us investigate further the consequences of introducing separate ASQ
agreements for different service attributes. Service attributes include destination IP prefix
ranges, capacity, quality constraints and price. Suppose that the details of the ASQ
agreement regarding the Content Provider have been agreed. Now, ISP-1 and ISP-2
negotiate the details of the other ASQ agreement, considering as a first option a reciprocal
scheme (without payments). Assuming that the respective retail market shares have not
changed since the adoption of the ETICS solution by ISP-1, ISP-2 will be sending more
traffic than it receives (otherwise the peering relationship would not have been in
equilibrium in the pre-ETICS period). In peering terms this means that the peering traffic
ratio is not balanced. Thus, the operators would have to renegotiate the terms of the ASQ
agreement. It is important to note that they could find an equilibrium point (meaning that
they would be happy with the outcome) by configuring the service balance attributes
alone, without the need to explore work-arounds.
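The renegotiation trigger can be illustrated with a minimal traffic-ratio check. The 2:1 tolerance and the paid-peering fallback below are common industry conventions assumed here for illustration, not terms of any actual ETICS agreement.

```python
# Minimal sketch of a peering balance check and its paid fallback.
# Thresholds and prices are illustrative assumptions.

def peering_balanced(sent_tb, received_tb, max_ratio=2.0):
    """True while the traffic ratio stays within the agreed tolerance."""
    hi, lo = max(sent_tb, received_tb), min(sent_tb, received_tb)
    return hi / lo <= max_ratio

def settlement_due(sent_tb, received_tb, price_per_tb, max_ratio=2.0):
    """Fallback once balance breaks: the net imbalance is paid for."""
    if peering_balanced(sent_tb, received_tb, max_ratio):
        return 0.0
    return abs(sent_tb - received_tb) * price_per_tb
```

Configuring the balance attribute (here, `max_ratio` and `price_per_tb`) is exactly the kind of in-agreement adjustment that lets the two operators reach an equilibrium without work-arounds.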
In the Athens Workshop, the representative of the Regulator stakeholder role expressed
his concern as to whether having multiple Facilitators, who are likely to be the larger
NSPs, would make it more likely or easier for those larger NSPs to exclude smaller ones
when negotiating interconnection or setting up paths. The representative of the “small
ISPs”, who also shared this concern, would consider either asking the regulator to
intervene or teaming up with other small ISPs in order to increase their power. We believe
that both reactions could bring the system to a more stable state, even though the one
requiring less intervention should be preferable.
What is interesting is that the new outcome of the network service composition
functionality has a positive impact on the routing and traffic engineering functionality. The
reason is that ISPs no longer have to perform complex traffic engineering to improve the
QoE of their customers. This could lead to a stable outcome, but we expect that some
ISPs with less spare capacity would still rely on traffic engineering for meeting their SLAs.
Thus, routing instabilities may still exist, but these should have a smaller impact on other
ISPs.

Figure 17: The Scenario for Internet Connectivity Market Using ASQ Goods

A.1.3 Case study B: Customer SLA Monitoring and Incentives for
Backup ASQ Provisioning

SLA monitoring is the process of collecting the necessary information while services are
being delivered, in order to establish conformance to contract terms. This functionality is
considered important even when trusted operators (as in the case of ETICS) must
collaborate in order to provide QoS-assured services end-to-end.
Discussions during the third focus group revealed that the press plays a significant role in
bringing transparency into the Internet connectivity market, at least for the ISPs who care
about their reputation. This importance can be attributed to the fact that such premium
transport services are secured by SLA terms and can trigger payments to the customer in
case some of the ISPs in the ASQ path fail to meet their requirements. The existence of
monitoring technology when network operators offer services that are not under their
complete control (since more than one ISP is involved) would lower their exposure to SLA
violations, assuming they have kept their commitments. On the other hand, the monitoring
solution must be carefully designed in order to keep capital and operating expenses low.
In this case study we look at another interesting implication of monitoring, related to the
reservation of backup capacity; it can be easily shown that a carefully designed
mechanism can also deal with cases where some ISPs underperform systematically.
Although not directly mentioned in the SLAs, ISPs are expected to keep backup capacity
available in case the original path used by the ASQ agreement has a failure point. If a
failure occurs in its network, the ISP will need to reroute the traffic of the ASQ agreement,
which might cause a QoS degradation and hence an SLA violation. This can happen either
because the new path in its own network violates what is promised by the SLA, or
because it directed the traffic to a different ingress point of the next ISP in the path and
this ISP had not been geared to offer the appropriate QoS on this new path through its
network (or both). Of course, whether this happens depends on the amount of backup
capacity available in the networks of both ISPs.
Monitoring can help provide the right incentives for keeping backup capacity, since it
enables finding the ISPs who can be considered responsible for the QoS failure. In simple
terms, if no adequate monitoring is in place to identify the ISP who caused the rerouting
and the QoS violation, then the penalty for the violation will be assigned to the service
originator (the first ISP that interfaces to the customer of the ASQ agreement), or will be
divided equally among the ISPs. One can easily see that, at the equilibrium, ISPs will pick
a strategy of providing minimal backup. There is obvious free-riding, since the effects of
low backup provisioning are shared among a large number of ISPs. By contrast, if we can
isolate the cause of the failure, the appropriate ISP can be identified for payment of the
penalty, which pushes the incentives in the right direction.
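The two penalty-allocation regimes can be contrasted in a few lines. The function and its interface are invented for illustration.

```python
# Sketch of the two regimes: without monitoring, the penalty is split
# equally among the ISPs on the path (inviting free-riding on backup
# capacity); with monitoring, the identified culprit pays the full penalty.

def allocate_penalty(penalty, path_isps, culprit=None):
    """Return {isp: share of the penalty}.

    culprit is the ISP identified by monitoring, or None when no
    adequate monitoring is in place.
    """
    if culprit is None:
        share = penalty / len(path_isps)
        return {isp: share for isp in path_isps}
    return {isp: (penalty if isp == culprit else 0.0) for isp in path_isps}
```

With equal splitting, each ISP internalises only 1/n of the damage its under-provisioning causes, which is the free-riding equilibrium described above; targeted allocation restores the full cost to the responsible party.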
So, in summary, the tussle here is related to responsibility: which technology decisions by
Internet stakeholders can lead to an unfair allocation of SLA violation penalties? The aim
is to identify candidate technologies and their implementation details so that
communication providers have the incentive to honour their SLAs and, whenever this is
not the case, the violating parties can be identified and made to contribute in a fair
manner.
The ETICS project initially examined three candidate schemes for end-to-end metrics
monitoring: a centralized one, which assumes the presence of a trusted operator, and two
distributed schemes, one relying on the coordinated sampling of the packets to be
monitored combined with hierarchical access to these data, and the other based on active
flow technology, which allows the control of network devices to be handled
programmatically. We will apply the tussle analysis methodology to the distributed
hierarchical scenario, as it is the most likely to be deployed.


Figure 18: A Scenario of SLA Violation Identification Using the Hierarchical Monitoring Approach
Building upon the scenario of Figure 14, each ISP collects raw data via specialized border
routers, called probes. In order to keep the operational cost tractable, data sampling is
performed. Furthermore, sampling requires that the monitoring data stored by each ISP
along the path refer to the same packets; otherwise, not all SLAs and their metrics (for
example, the end-to-end one-way delay for short-term contracts) can be checked. The
monitoring data are stored in dedicated databases, called proxies, operated by each ISP
in order to overcome confidentiality issues. In the case of an SLA violation ticket, a
collector queries all relevant proxies and compares the retrieved data in order to check the
validity of the ticket. ISPs or trusted third parties can act as Brokers operating collectors.
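The collector's comparison step might look as follows. This is a hypothetical sketch: the data layout, the per-domain delay bounds and the averaging rule are assumptions, not the ETICS specification.

```python
# Hypothetical collector logic: query each ISP's proxy for the stored
# (ingress, egress) timestamps of the sampled packets and flag every
# domain whose average internal delay exceeds its agreed per-domain bound.

def localise_violation(proxy_records, sla_bounds_ms):
    """proxy_records: {isp: {packet_id: (ingress_ms, egress_ms)}}
    sla_bounds_ms:   {isp: maximum allowed per-domain delay in ms}
    Returns the ISPs whose average per-domain delay breaks their bound."""
    offenders = []
    for isp, records in proxy_records.items():
        delays = [egress - ingress for ingress, egress in records.values()]
        if sum(delays) / len(delays) > sla_bounds_ms[isp]:
            offenders.append(isp)
    return offenders
```

Because each ISP only exposes timestamps for the sampled packets through its own proxy, the collector can localise a violation without any ISP revealing its internal topology.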
The main set of stakeholder roles includes ETICS Information Providers (Brokers) and
ETICS Communication Service Providers, more specifically Edge and Transit ISPs.
Other involved roles are Content Providers, consumers of ICT services and Regulators.
Again, for brevity, we will concentrate on the first set of actors and stakeholder roles.
Figure 19 shows a possible evolution of the tussle described above between a Transit ISP
(ISP-2 in our example) and the Source and Destination ISPs (ISP-1 and ISP-3,
respectively). Investigating the Transmission functionality: as long as Best Effort is the
only traffic class available on the Internet, no SLA monitoring is needed, and thus we
assume that in the beginning we have a stable outcome (green circle).
The introduction of inter-domain ASQ goods by ETICS creates the need for backup paths
to be used in case of a sudden failure. All ISPs, however, have the incentive not to
announce sensitive information such as network topology and dimensioning (including the
backup paths). Furthermore, they tend to keep failover capacity low, to avoid unused and
therefore unbilled capacity. This means that the new tussle outcome is not stable in the
second phase, but no SLA violation has been reported yet and thus the outcome is still
fair.
Suppose that the SLA between the transit ISP-2 and the destination ISP-3 occasionally
allows the former to dynamically reroute traffic to another ASQ agreement of the latter,
involving another Point of Presence (PoP). Furthermore, assume that the interface used at
the egress router of ISP-2 goes down due to a hardware failure, which affects its ability to
continue using the same ASQ. ISP-2, taking advantage of the respective SLA terms,
reroutes the Customer’s traffic to a backup ASQ agreement, through router E. Now, ISP-2
manages to satisfy the QoS constraints inside its network, whereas ISP-3 fails to meet the
terms of its SLA with ISP-2, because the backup ASQ is not properly dimensioned.

Figure 19: Candidate Tussle Evolution for ETICS Network Service Delivery
In the absence of any SLA monitoring mechanism (lower part of Figure 19), ISP-1 cannot
infer whether the responsibility is its own or that of another ISP along the path (and even if
it can, it would have difficulty justifying its claims). Assuming that the Customer issues an
SLA violation ticket, penalties will be allocated unfairly (against ISP-1 in this scenario, but
against both Edge ISPs on average). This could lead to a market failure due to misaligned
incentives: an increasing number of ISPs will have to lower their costs, with a negative
effect on customers’ QoE and thus on the demand for premium services.
At the same time, Users and Content Providers have no means to back up their claims
when they experience degraded quality. This issue was raised during the Athens
Workshop, where the ‘User’ representative was unsure whether he would actually get the
premium experience that he had paid for. Even though, for fear of losing the customer, a
Source ISP has the incentive to admit its fault, problems are expected when the disruption
is rooted at a Transit or Destination ISP. So, in the absence of a monitoring mechanism,
the outcome of that particular tussle is in favour of the ISPs, and especially the Transit
ones.
As was discussed during the 3rd Focus Group in Athens, currently, technically-savvy
users bring transparency into the market by periodically announcing their findings on
websites accessible to potential customers. Furthermore, the fact that the metrics and
applications being measured are constantly changing was indicated to give reputable
ISPs incentives to keep their effort high. This means that experts and social media can
bring the system into a more balanced state. In an ideal scenario, such a reputation
system would lead the system to a stable state as well, but this is not to be expected soon,
since the majority of ISP customers do not base their decisions on performance-related
aspects [16].
Let us examine the case where a trusted third party (a Broker) implements the hierarchical
SLA monitoring mechanism and ISPs agree to allow the collector to access data from their
proxies. Such a technology, together with an incentive mechanism for calculating a fair
allocation of the compensation among the ISPs, could lead to a stable tussle outcome.
However, depending on the implementation of the SLA monitoring mechanism,
identification may not always be feasible. For illustrative purposes, we identify two cases
for the SLA monitoring technology: a simple one and an advanced one.
In the first case, where the sample packets are known in advance, Transit and Destination
ISPs could forward the probing packets preferentially. Thus, the responsible ISPs may not
always be identified. This means that the total payments made for customer SLA
violations are proportional to the number of end-customers that ISPs have. Assuming that
a Transit ISP has fewer customers than an Edge ISP, we can conclude that this tussle
outcome is more beneficial to the former.
Another option is for Brokers to signal to all ISPs along the path which packets to probe
during service provisioning, with each ISP saving a timestamp for those packets at its
egress router. By following a secret algorithm for statistically selecting those packets, a
Broker could make it harder to expedite the forwarding of the sample packets only⁴³,
giving the right incentives for the correct dimensioning of backup paths and leading to a
stable outcome for the Transmission functionality. The ISP responsible for an SLA
violation would then be highly likely to be identified and thus asked to compensate the
customer. Such a mechanism could be configured so that a timestamp is stored at the
entry point of the Source ISP’s network as well as at the egress point of the Destination
ISP, allowing the Broker to examine whether a customer’s complaint is valid or not.
However, it should be evaluated whether this mechanism would lead to a stable state
without imposing negative externalities on other functionalities.
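One known way to obtain such secret, consistent sampling is a keyed hash over invariant packet fields, in the spirit of trajectory sampling. The sketch below is illustrative only: key distribution and the choice of hashed fields are glossed over.

```python
# Illustrative keyed-hash sampling rule: every router applies the same
# secret key to invariant packet fields, so all domains record the *same*
# packets, while an ISP that does not know the key cannot recognise (and
# preferentially forward) the samples.
import hashlib

def is_sampled(packet_bytes, secret_key, rate=0.01):
    """Deterministically decide whether this packet belongs to the sample."""
    digest = hashlib.sha256(secret_key + packet_bytes).digest()
    # Map the first 4 digest bytes to [0, 1) and compare with the rate.
    return int.from_bytes(digest[:4], "big") / 2**32 < rate
```

Because the decision is a deterministic function of packet content and key, every domain's proxy stores timestamps for the same packet set; the Broker can rotate the key after each measurement period.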


⁴³ Other issues that should be considered are the integrity of the data stored for the sample packets at
proxies and the synchronization of routers’ clocks. An ISP, for example, could report biased data by
subtracting a few milliseconds from the actual measurements.

A.2 Detailed Tussle Analysis for UNIVERSELF technologies
The UNIVERSELF project follows an evolutionary approach for the Future Internet, aiming
to introduce self-management techniques that allow convergence and harmonization of
different networking technologies. It focuses on a single-operator network, highlighting the
significant (techno-economic) challenges to be addressed when offering converged
services in a highly heterogeneous environment, even if most stakeholders belong to the
same organisation.
Figure 20 below gives a high-level overview of an operator’s infrastructure, consisting of:
• Several network types, such as access, backhaul and backbone networks.
• Diverse access network technologies, like UMTS (3G) and LTE (4G) for
wireless/mobile communications and xDSL, FTTx for fixed connectivity, as a result
of the provider’s attempt to meet demand for higher speeds.
• Equipment (even for the same network technology) and management systems from
multiple vendors, in an attempt to avoid vendor lock-in.
UNIVERSELF deals with the situation that a Network Operator (NO) wants to deploy new
services and/or accommodate new traffic on top of its multi-vendor and multi-technology
infrastructures, with the focus being placed on IP/MPLS (Multiprotocol Label Switching)
backhaul/core segments and OFDM based Radio Access Networks (RANs). Several
problems need to be tackled in the context of service deployment so as to achieve a
coordinated, end-to-end performance.


Figure 20: High-level Overview of an Operator’s Infrastructure
Figure 21 illustrates the goal of UNIVERSELF through an example. A vertically-integrated
operator wants to successfully accommodate new traffic stemming from a local flash-
crowd in an effective manner. UNIVERSELF technologies will enable operators to
describe their goals and translate those objectives into low-level policies for governing
their network in an end-to-end manner. An important aspect of the envisaged solution is
coherence between technology segments through cooperation, negotiation and
federation, which determines whether the desired performance can be achieved. This
means that optimisation decisions are not taken myopically, e.g., by considering each
network domain in isolation, but by estimating what the effects will be on the network as a
whole.

Figure 21: A UNIVERSELF Scenario
Even though autonomic network management can bring cost savings to operators, the
latter need the ability to control, manage and intervene in operations, by having the
necessary information in a timely manner, in order to deal with exceptions, change
policies and/or impose new constraints. For example, the big challenge for broadband
wireless systems is to strike the right balance between traffic demands and coverage
range, so as to offer good signal quality and service reliability at a reasonable cost. This
requires flexible control and service management planes, as well as standardized and
technology-agnostic interfaces for federated service provision. Furthermore, common
performance metrics across the network are necessary in order for the provider to monitor
performance in a joint manner.
It should be noted here that such vertically-integrated providers are usually organised into
multiple administrative domains; for example, a separate department for wireless and
wireline services, or even for 3G and 4G networks. This structure may be the result of ex-
ante regulatory intervention or of an operator’s decision. In this case, conflicts of interest
can appear amongst different departments, even though they try to accomplish the same
high-level management goal. Such a situation is depicted in Figure [], where each of the
two departments A and B aims at reducing operational expenditures by a certain factor. If,
however, decisions by a single domain are taken independently of the state of other
domains, this can lead to poor customer experience, instability and unmet management
goals.
Activating and deactivating Base Stations is part of the Network management functionality
for Traffic Control. The major stakeholders in this particular scenario are the departments
A and B (in particular the entities responsible for managing the domains, called Cell
Controllers A and B), the Mobile Users in that area and the Central NOC of the Operator.

The interests of the Cell Controllers and the Central NOC are to balance user satisfaction
with the cost of operating the network, while Mobile Users are interested in getting as
much bandwidth as possible without service disruptions.

Figure 22: Candidate Tussle Analysis Evolution for UNIVERSELF
Suppose, for example, that Domain Controller A (the entity responsible for managing the
4G domain) has identified an opportunity to reduce energy consumption by switching off
a low-utilised Base Station. Assuming that the ISP in our example uses labour-intensive
network management systems or operates a heterogeneous network⁴⁴, this action would
increase the chances of department A achieving the management goal, represented by
a new blue circle at the upper part of the rectangle.
However, the deactivation described above could trigger the concurrent handover of
several terminals to a Base Station of department B (the operator’s 3G network), because
end-user terminals would react by registering with the most suitable base station in terms
of Signal to Interference plus Noise Ratio (SINR). Based on the Tussle Analysis
methodology, this is a spillover from the Transmission functionality to the Traffic Control
functionality. In particular, this tussle involves mainly the new and existing users,
as well as the ISP⁴⁵. Assuming an unstable initial state (meaning that the existing users
are unsatisfied with the way the ISP had allocated the available resources), the spillover
will result in an increase in the interference received by existing users.
Furthermore, the negative effect on existing users will be even more significant if there are
users far away from the base station. The reason is that the path loss experienced by a
user is highly dependent on her distance to the base station, and thus more subchannels
are required to provide a distant user with a given data rate.
Some existing users may start using download managers that open several TCP
connections for the same session. Even though this reaction could increase their
throughput in the short term, such a selfish reaction could be considered harmful in the
long term⁴⁶, which means that this outcome is unlikely to be stable.

⁴⁴ Where departments use equipment from different vendors and the management systems are not completely interoperable.
⁴⁵ Who is still interested in balancing user satisfaction with the cost of operating the network.
Let us consider the case where terminals provide to the Base Stations information about
the channel properties and users⁴⁷ reveal their willingness to pay (or utility). After
collecting all the necessary information, the Base Station could allocate to each registered
user the appropriate amount of resources. This new outcome could be considered fair if
users were charged based on the reported traffic class⁴⁸. Furthermore, ISPs would have
the incentive to intervene, given the poor customer satisfaction and the inefficient use of
resources.
UNIVERSELF studies the introduction of such a traffic control mechanism. The proposed
mechanism would allow ISPs to define a set of traffic classes, reserve bandwidth for each
class based on its policy, and then allocate the bandwidth based on each user’s class,
requested bit rate and the channel state of each subcarrier. Users are expected to
consider such a resource allocation fair, and ISPs have the flexibility to manage the
reserved bandwidth per traffic class on demand.
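A minimal sketch of such a class-based allocation is given below. The data layout, the channel-quality discount and all parameter names are assumptions for illustration, not the mechanism actually specified by UNIVERSELF.

```python
# Illustrative allocation loop: the ISP reserves a bandwidth pool per
# traffic class, then serves each user's requested bit rate from its class
# pool, discounted by the reported channel quality (poorer channels
# consume more pool capacity per delivered bit).

def allocate(users, class_pools):
    """users: list of (user_id, traffic_class, requested_rate, channel_quality)
    with channel_quality in (0, 1]. class_pools: {class: capacity}.
    Returns {user_id: granted rate}."""
    pools = dict(class_pools)
    grants = {}
    for user_id, cls, requested, quality in users:
        needed = requested / quality           # pool units for the full request
        granted = min(requested, pools[cls] * quality)
        pools[cls] -= min(needed, pools[cls])  # consume what was used
        grants[user_id] = granted
    return grants
```

Because users in the same class draw on the same reserved pool, the ISP can reconfigure fairness simply by resizing the per-class pools, without touching the per-user logic.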
Furthermore, UNIVERSELF has proposed a transmission control technology that allows
the coordination of neighbouring Base Stations in order to achieve the ISP’s goals. Each
Base Station (BS) periodically collects and sends to the Central Manager data related to:
a) the topology (nearby BSs);
b) its operational status (the number of associated UEs, the used capacity and the total
available capacity of the BS).
Then, the Domain Manager proceeds to identify any coverage optimization opportunities,
e.g., the (de)activation of low-utilised Base Stations without deteriorating the QoE of
existing users.
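The (de)activation check described above can be sketched as follows. Thresholds and data fields are illustrative assumptions, not UNIVERSELF parameters.

```python
# Hedged sketch of the coverage-optimisation check: a BS may be switched
# off only if it is lightly loaded AND its neighbours have enough spare
# capacity to absorb the handed-over users.

def can_deactivate(bs, neighbours, low_util=0.2):
    """bs / neighbours: dicts with 'used' and 'capacity' fields."""
    if bs["used"] / bs["capacity"] > low_util:
        return False            # not a low-utilisation cell
    spare = sum(n["capacity"] - n["used"] for n in neighbours)
    return spare >= bs["used"]  # neighbours can absorb the load
```

The second condition is precisely what the myopic decision in the tussle scenario above omits: Domain Controller A switches off its cell without checking whether department B's cells have the spare capacity to take over.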
The proposed mechanism is a centralised one and, even though decisions are taken
based on input from the involved departments, the inability of the latter to express their
preferences is an indication that the “design for choice” principle is not fully realized. Such
a centralized mechanism has the potential to produce optimal allocations, if the inputs are
honestly provided. Considering the possibility that departments have an incentive to act
selfishly, the long-term success of the mechanism relies on the tamper-proofness of the
base stations.




⁴⁶ Assuming that a large number of end-users will do the same.
⁴⁷ Or a software agent based on the user’s configuration.
⁴⁸ There is always the possibility of modified terminals that report biased channel state information, but their impact can be restricted through pricing.

A.3 Detailed Tussle Analysis for SAIL Technologies
In this section, we analyze tussles identified initially in [4] which are related to specific
functionalities/roles described within SAIL’s deliverable [8] and in particular the Pure ICN
VNC.
A.3.1 Use-Case: Content Delivery and Access Control
We consider the following setup (illustrated in Figure 23), where an ANP (Access Network
Provider) employs ICN to offer content delivery services to its end-users. Additionally, the
ANP has deployed a network of caches where the content of various publishers (i.e., CPs
(Content Providers) as well as end-users) can be cached. Furthermore, a CM (Cache
Management) entity is provided by the ANP, which decides which IOs (Information
Objects) should be cached locally, i.e., on the ANP’s premises.
Additionally, within the ANP’s network an NRS (Name Resolution Service) entity exists,
which is controlled by the ANP. Each publisher (either a CP or an end-user) is supposed
to issue a related announcement (i.e., a publication) to the NRS every time a new IO is
published. Therefore, the NRS is responsible for matching potential subscribers for a
particular IO to its publishers. Within the ANP’s network several subscribers exist, i.e.,
S₁, S₂, …, while we currently assume only one CP with whom the ANP has established a
business agreement for delivering its content over the ANP’s infrastructure.
Furthermore, Content Access Management (CAM), including AAA (Authentication,
Authorization, Accounting), is performed by the CP.
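The publish/resolve interplay between these entities can be mocked up in a few lines. The class and its methods are invented for illustration and do not reflect the actual NetInf API.

```python
# Toy model of the NRS: publishers announce IOs, subscribers are matched
# to a publisher, and the ANP's cache can register itself as an
# additional, preferred publisher once it holds a copy.

class NRS:
    def __init__(self):
        self.registry = {}                     # io_name -> list of publishers

    def publish(self, io_name, publisher):
        self.registry.setdefault(io_name, []).append(publisher)

    def resolve(self, io_name, prefer_local=True):
        publishers = self.registry.get(io_name, [])
        if not publishers:
            return None
        if prefer_local:
            local = [p for p in publishers if p.startswith("cache:")]
            if local:
                return local[0]                # serve from the ANP's cache
        return publishers[0]                   # else the CP's own server
```

Note how, once the cache registers as a publisher, the resolver prefers it and the CP's content server is no longer consulted, which is exactly the AAA-bypass tussle described in this use-case.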


Figure 23: Content Delivery in SAIL’s NetInf Architecture – Content Access Management and AAA Functionality
Let us assume that subscriber S₁ requests an IO from the CAM service of the CP. After
the AAA takes place, the subscription is sent to the NRS of the ANP. Then, the NRS
matches the original subscriber S₁ with the content server of the CP that possesses the
IO. As a next step, the IO is transmitted to S₁. Additionally, since the IO was not locally

available, the CMS of the ANP decides to cache the specific IO in his cache, so as to be
able to serve potential future subscriptions for it directly, offering better QoS to his end-
users, as well as to avoid potential inter-connection costs, if the originating content server
is located in a domain inter-connected with the ANP through a transit link.
Let us now assume that a second subscriber S₂ bypasses the CAM and sends its
subscription for this specific IO directly to the ANP’s NRS. Since the IO has been cached
in the ANP’s cache, the NRS is in a position to directly match S₂’s subscription with the
cache (as a publisher) without informing the CAM, practically bypassing the CP. In this
case, AAA is not performed and the CP loses revenues from the unauthorized delivery of
its IO.
To address this revenue loss, the CP could respond by requiring that the ANP ‘buys’ each IO sent towards his premises (subscribers or cache) once, at a significantly higher price, instead of paying a much smaller price per transaction/download. As a result, the ANP would only be interested in downloading popular content (i.e., content with many subscribers), in order to reach the break-even point for the purchase of the IO and also make a profit. Obviously, content that does not fulfill the ‘popularity’ requirement would not be bought at all; as a result, the ANP’s customers would only have access to a very limited set of IOs with small variety, which could also imply a loss of customers for the ANP (a detrimental impact).
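The ANP’s purchase decision described above reduces to a simple break-even comparison. The following sketch illustrates it; all prices and demand figures are hypothetical:

```python
def worth_buying(upfront_price, per_download_price, expected_subscribers):
    """Return True if buying the IO outright beats per-download licensing.

    Buying is attractive only when the expected number of local
    subscribers exceeds the break-even point:
        upfront_price / per_download_price.
    """
    break_even = upfront_price / per_download_price
    return expected_subscribers > break_even

# Hypothetical prices: the CP asks 50 units for an outright purchase
# versus 0.5 units per individual download (break-even at 100 subscribers).
print(worth_buying(50.0, 0.5, expected_subscribers=200))  # popular IO -> True
print(worth_buying(50.0, 0.5, expected_subscribers=30))   # niche IO   -> False
```

Content below the popularity threshold is never purchased, which is exactly the variety-reduction effect discussed above.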
Alternatively, the CP could deploy his own NRS, serving mostly his own content and possibly other publishers’ content too. In this case, the CP-owned NRS would perform matching based on his own optimization criteria, selecting also non-local publishers without taking into account the physical network’s requirements (represented in this scenario by the ANP); therefore, the ANP would see an increase in his inter-connection costs. To counter this inter-connection cost increase, the ANP could block subscriptions issued by his customers to third-party NRSs, which would be fatal, as he would lose both revenues from content delivery and possibly customers, due to his inability to deliver IOs to them.
A third option for the CP would be the deployment of a DRM-like (Digital Rights Management) mechanism, e.g., a watermark on video. For instance, a video could be delivered directly by the CDN of the ANP; however, it would be decoded and ready for watching only if a specific key, available only through the official AAA procedure, is bought.
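A key-gated variant of this DRM idea can be sketched as follows. The XOR cipher and function names below are illustrative only; a real deployment would rely on a standard DRM system:

```python
import hashlib

def encrypt(payload: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for a real DRM scheme.

    XOR with a key-derived pad is its own inverse, so the same function
    both protects and recovers the content.
    """
    stream = hashlib.sha256(key).digest()
    pad = (stream * (len(payload) // len(stream) + 1))[:len(payload)]
    return bytes(a ^ b for a, b in zip(payload, pad))

# The CP distributes only the protected IO; caches may replicate it freely.
key = b"sold-via-official-AAA"   # hypothetical key obtained through AAA
video = b"frame-data"
protected = encrypt(video, key)

# Without the key the cached copy is useless; with it, the content is
# recovered, so the CP is paid even when the ANP's cache serves the bytes.
print(encrypt(protected, key))  # b'frame-data'
```

The economic point is that replication of the encrypted bytes becomes harmless to the CP, since monetization moves to the key sale.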
The aforementioned tussle outcome is depicted in Figure 24.

Figure 24: Candidate Tussle Evolution for Content Access Control


A.3.2 Use-Case: Content Delivery and Content Freshness
We consider the same setup as in Section A.3.1 (illustrated in Figure 25), with the only difference that we do not focus on the AAA functionality but on the content update performed by the CMS.


Figure 25: Content Delivery in SAIL’s NetInf Architecture – Cache Management and
Content Update
Let us assume again that subscriber S2 subscribes with the NRS for an IO that was previously subscribed to by S1 and cached in the ANP’s cache. In the meantime, the IO has been updated on the CP’s premises (e.g., the CNN site), but not on the ANP’s premises (i.e., the cache).
In [8], a technical interface is described between the CM role and the CNM (Content
Network Management) role through the CLM (Cache Location Management) role.
Additionally, the CNM is supposed to have another technical interface with the NRS.
Although it is not made very clear by the VNCs in [8] how the CM is informed of potential content updates and what the sequence of actions for content management is, the CM should be notified either directly by the CP itself as soon as the CP notifies the NRS of an IO’s update, or indirectly by the NRS. Then, in collaboration with the CNM, the exact location and number of copies of that specific IO would be decided (within the ANP’s CDN).
In this setup, we assume that the CMS is somehow aware of the content update (e.g., through the NRS), but chooses not to re-cache the (updated) IO, to avoid an increase in transit cost due to the content re-transmission. Therefore, when the NRS matches S2’s subscription with the ANP’s cache, S2 will receive outdated content, practically experiencing decreased QoS.
If the CP is in a position to identify the outdated content offered to the end-users, he may then decide to issue his future publications to different NRS providers, in order to improve his (and his customers’) QoS. This would practically mean a break of the business agreement between the ANP and the CP and, for the ANP, a loss of revenues from the content delivery service, as well as a possible increase of the inter-connection costs due to inefficient content replication by another CDN provider.

Instead of breaking their business agreement, the CP and the ANP could negotiate and come to an agreement regarding the content update frequency. For instance, they could establish an SLA which would assure that x updates of an IO are performed per hour, otherwise a penalty applies to the ANP. Of course, in order to get this service quality, the CP should either be charged more by the ANP, or charge him a lower price for the IO. The negotiation and interaction between the ANP and the CP probably require the development of a new interface between the CM (Cache Management) and the CPM (Content Provisioning and Management).
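Such an SLA clause can be expressed as a simple settlement rule. The sketch below shows how the penalty could be computed per billing period; the refresh rate and penalty values are hypothetical:

```python
def freshness_penalty(required_updates_per_hour, observed_updates, hours,
                      penalty_per_missed_update):
    """Penalty owed by the ANP when the agreed re-caching rate is not met."""
    required = required_updates_per_hour * hours
    missed = max(0, required - observed_updates)
    return missed * penalty_per_missed_update

# Hypothetical SLA: 4 re-caches of the IO per hour over a day,
# 0.5 units of penalty per missed update (96 required, 90 observed).
print(freshness_penalty(4, observed_updates=90, hours=24,
                        penalty_per_missed_update=0.5))  # 3.0
```

The penalty gives the ANP a monetary reason to re-cache even when re-transmission raises his transit cost, which is the trade-off the negotiation is meant to balance.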
The identified tussle is analyzed in Figure 26.

Figure 26: Candidate Tussle Evolution for Content Freshness
A.3.3 Use-Case: Content Network Management
We now consider a different setup in the context of the SAIL NetInf architecture [8]: two ANPs, i.e. ANP1 and ANP2, have deployed ICN architectures and offer content delivery services to their customers. They are both connected to each other and to a CP through a transit provider, i.e. the IBP (see Figure 27). Additionally, the two ANPs have established agreements with the CP to provide his content to their customers.

Figure 27: Content Delivery in SAIL’s NetInf Architecture – Content Network Management.

Suppose that subscriber S3 has issued a subscription for an IO to his local NRS, i.e. NRS2. Since the IO is not locally published, the NRS will issue the subscription to the global NRS of the IBP (or a third party), and then the IO will be fetched from the originating content server of the CP through the IBP. Then, another subscriber from ANP1, e.g. S1, issues a subscription to his NRS for the same IO. Similarly to the previous case, NRS1 will re-direct the request to the global NRS, and the global NRS, having already previously matched S3 with the originating server of the CP for the same IO, will issue the subscription to NRS2. Normally, NRS2 should provide information on the local publisher(s), and the IO would be sent to S1 from S3.
However, in order to avoid an inter-connection cost increase, NRS2 could hide the local publishers for this IO, i.e. either S3, or even his cache server, in case the IO was selected to be cached there (e.g. based on its popularity). In this case, the global NRS will match S1's subscription to the originating server of the CP, and the content will be fetched to S1 crossing the backbone network of the IBP, which implies an increase of the IBP's operational costs and possibly lower QoS (i.e. higher latency) for S1.


Figure 28: Candidate Tussle Evolution for Controlling Server Advertisements Between Two Edge ISPs.
To avoid this situation, a peering agreement for content delivery could be established between ANP1 and ANP2. In the context of this peering agreement, each time an IO is fetched to the cache server of one ANP, a publication for this IO is announced to the NRS of the other ANP. Additionally, if a subscriber in the latter ANP issues a subscription for this object to his local NRS, then the IO will be fetched to him through the peering link, avoiding transit costs (see also Section A.4.3). The outcome of this tussle is depicted in Figure 28.
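The cross-announcement step of such a peering agreement can be sketched as follows; the class and method names are illustrative and not part of the NetInf specification:

```python
class NRS:
    """Minimal name-resolution table: IO name -> set of known publishers."""
    def __init__(self, name):
        self.name = name
        self.publishers = {}

    def announce(self, io_name, publisher):
        self.publishers.setdefault(io_name, set()).add(publisher)

    def resolve(self, io_name):
        return self.publishers.get(io_name, set())

def cache_and_cross_announce(io_name, local_nrs, peer_nrs, cache_id):
    """On caching an IO, publish it both locally and to the peering
    partner's NRS, so future subscriptions in either domain can be
    served over the peering link instead of the transit link."""
    local_nrs.announce(io_name, cache_id)
    peer_nrs.announce(io_name, cache_id)

nrs1, nrs2 = NRS("NRS1"), NRS("NRS2")
cache_and_cross_announce("io:video/42", nrs1, nrs2, cache_id="cache@ANP1")
print(nrs2.resolve("io:video/42"))  # {'cache@ANP1'}
```

After the cross-announcement, a subscriber in ANP2 resolving the IO locally is matched with ANP1's cache over the peering link, which is the transit-avoidance effect described above.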
A.3.4 Use-Case: ICN Content Delivery and Competition with Legacy
CDNs
Next, we consider a case similar to the one discussed in Section A.3.3; the difference is that we focus our investigation here on the possible tussles that may arise between the ANP-controlled CDN and a legacy CDN (see Figure 29).
We consider an ANP, i.e. ANP1, that employs ICN and deploys his own CDN by installing local cache servers. Additionally, ANP1 establishes business agreements with CPs to provide their content through his own CDN to his customers. As expected, legacy CDN

providers such as Akamai [49], Amazon's CloudFront [50], LimeLight [51], BitGravity [52], etc. are going to react to this competing action, due to the reduction of their revenues and the loss of control over the content delivery market.

Figure 29: Content Delivery in SAIL’s NetInf Architecture – Competition with legacy CDN.
In particular, to make up for these losses, legacy CDNs could exploit their global presence (especially the larger ones, e.g. Akamai) to enter the internet service provision market, which is so far controlled by ISPs (ANPs and IBPs), and provide either backbone or edge connectivity. Such a reaction would practically make legacy CDNs and ANPs/IBPs competitors in both markets (i.e. connectivity and content delivery). It should be noted, however, that edge connectivity to end-customers is much harder and more expensive to provide, and ANPs have a competitive advantage over legacy CDNs in this area. On the other hand, legacy CDNs, due to their global presence and business agreements with many CPs and IBPs, are better placed to provide a larger and broader variety of content to end-users. Therefore, it is difficult to predict which stakeholder will prevail.
An alternative case would be the establishment of business agreements between legacy CDNs and ANP-controlled CDNs, so as to collaborate and provide integrated content delivery services. ANP-controlled CDNs would constitute edge CDNs or CDN-islands with direct access to the content consumers, while the legacy CDNs would become the actors that inter-connect these CDN-islands with other islands of content/information, as well as with CPs. Such an approach is the Content Distribution Network Interconnection (CDNi) [53], which proposes the interconnection of standalone CDNs so that they can interoperate as an open infrastructure to provide end-to-end delivery of content from Content Service Providers (CSPs) to the end-user, regardless of the latter's attachment network. As an example, a CSP would establish a business agreement and related technical interfaces with an authoritative CDN provider (i.e. a legacy CDN) for the delivery of

[49] http://www.akamai.com/
[50] http://aws.amazon.com/cloudfront/
[51] http://www.limelight.com/
[52] http://www.bitgravity.com/
[53] Available online at http://datatracker.ietf.org/doc/draft-ietf-cdni-framework/, http://datatracker.ietf.org/doc/draft-ietf-cdni-problem-statement/ and http://datatracker.ietf.org/doc/draft-ietf-cdni-use-cases/.

content; this authoritative CDN provider would then establish agreements and interfaces with one or more downstream CDN providers (i.e. ANP-controlled ones) so as to distribute and deliver content on his behalf to end-users. In such a scenario, many operational and capital expenses, e.g. the installation and maintenance of content servers in ANPs' premises, could be avoided by legacy CDNs, while revenues from the content delivery would be split between the legacy CDNs and the newcomers, i.e. the ANPs. This tussle analysis is depicted in Figure 30.

Figure 30: Candidate Tussle Evolution for Controlling Server Advertisements Between an
Edge ISP and a Legacy CDN.
A.3.5 Discussion
Note that our studies presented in Sections A.3.3 and A.3.4 examine similar tussles. In the first one, interactions between two ANP-controlled CDNs are investigated; in the second one, interactions between an ANP-controlled CDN and a legacy CDN. Practically, the analyses of these two tussles could be integrated into a single, more generic one on the tussle between an ANP-owned CDN and a competing CDN provider.
In particular, the tussle between an ANP-controlled CDN and a legacy CDN is of high interest to the research community, and a great deal of discussion took place on it during the FG of the 2nd workshop in Athens (see [15]). Many of the possible reactions of the stakeholders discussed above were expressed during this FG.
A.4 Detailed Tussle Analysis for PURSUIT Technologies
In this section, we analyze tussles that have been identified in [4] and are related to
functionalities of ICN (Information-Centric Networking) that are described in [7].
A.4.1 Use-Case: Content Delivery and Name Resolution Provided by
Local ISP
As illustrated in Figure 31, we consider two ISPs (Internet Service Providers), i.e. ISP1 and ISP2, that employ ICN to offer content delivery services to their customers. The two ISPs are connected through transit links to an IBP (Internet Backbone Provider). Both ISPs, employing ICN, have deployed their own networks of caches, i.e. their own CDNs (Content Delivery Networks). Within the ISPs' premises, local RENEs (Rendezvous Networks) are also provided, which are connected to a global RENE service. The RENEs are assumed to

be controlled by the respective network infrastructure provider (ISP or IBP) itself. Potential subscribers of an information item exist in both ISPs; however, only a single publisher (P1) of that specific content exists initially, in ISP1.

Figure 31: Content Delivery in a Pub/Sub Architecture - Local RENE.
We assume that P1 in ISP1 publishes an information item to his local RENE1, and the local RENE advertises the publication to the global RENE. Then, S1 in ISP1 sends a subscription for an information item to the local RENE1 of its ISP. The local RENE1 identifies that the requested information item is published within the ISP and matches P1 with S1. If more subscriptions for the same information item occur, the ISP may also decide to cache the content at another location in order to achieve load balancing and to provide higher QoS to its customers (subscribers).
Let us now assume that S2 in ISP2 also subscribes to his local RENE for the same information item. Since the information item is not published within ISP2, the local RENE2 informs the global RENE about this subscription. The global RENE, who is aware of P1, matches P1 with S2. Then, the requested information item is sent by P1 to S2, following the path that the ITF (Inter-domain Topology Formation) has previously created in collaboration with the two TMs (Topology Managers) and indicated to the global RENE. ISP2 may decide to cache the information item in ISP2's premises, so as to serve potential new subscribers. In order to achieve this, RENE2 issues a subscription for the same information item for ISP2's cache. In this case, the content follows the same path, but within ISP2's network it is directed to ISP2's cache.
A tussle may arise here if the RENE of ISP2 decides to send an information item (e.g., a video advertising a particular product) to an end-user, i.e., S3, without the latter having subscribed to it. Here, fake subscriptions could address either a local or a remote subscriber; in this setup, a remote one is assumed. Subscriber S3, in this case, may be unhappy due to the unrequested content stored in his premises consuming his resources in terms of both storage and bandwidth (which could also imply lower QoS (Quality of Service), e.g. less available bandwidth, for his other subscriptions).
A possible reaction by S3 is to unregister from the ISP-controlled RENE and address his future subscriptions to a RENE provided by another stakeholder, e.g. a third party. If this happens, the third-party-owned RENE may not be in a position to know the exact content items being stored locally within ISP2's premises and may re-direct subscriptions for information items (that exist locally) to the global RENE; consequently, these subscriptions

will be served by remote publishers, which may increase the inter-connection costs for ISP2. Still, though, RENE2 could keep issuing unwanted subscriptions for S3. Let us also consider the ability of subscribers to accept or reject subscription responses that were initiated by third parties, e.g. by RENE2 in our setup. In this case, S3 would reject unwanted subscriptions (spam) and may have no incentive to change his RENE provider, i.e., ISP2. Figure 32 depicts the tussle outcome, as well as the spillover from the Naming/Addressing functionality to the Transmission (Routing) functionality.

Figure 32: Candidate Tussle Evolution for Spam Received by Subscriber S3.
A.4.2 Use-Case: Content Delivery and Conflicting Optimization Criteria
As illustrated in Figure 33, we consider three Internet Service Providers, i.e. ISP1, ISP2 and ISP3, that employ ICN to offer content delivery services to their customers. All three ISPs are connected through transit links to an Internet Backbone Provider (IBP), while ISP2 and ISP3 are additionally inter-connected with each other through a peering link. ISP1, employing ICN, has also deployed his own CDN. Within the ISPs' premises, local RENEs are also provided, which are connected to a global RENE service. The RENEs are assumed to be controlled by the respective network infrastructure provider (ISP or IBP) itself. Potential subscribers of an information item exist in all three ISPs; however, only two publishers of that specific content, i.e. P1 and P2, exist initially, in ISP1 and ISP2, respectively.
We assume that both P1 in ISP1 and P2 in ISP2 have published an information item to their local RENEs, and the local RENEs have advertised these publications to the global RENE. Then, S3 in ISP3 sends a subscription for an information item to the RENE3 of ISP3. The local RENE3 identifies that the requested information item is not published within the ISP and informs the global RENE about this subscription. The global RENE, who is aware of both P1 and P2, makes a decision on the matching of S3's subscription with one of the two publishers. Then, the requested information item will be sent by the selected publisher to S3, following the path that the ITF has previously created in collaboration with the two TMs and indicated to the global RENE.


Figure 33: Content Delivery in a Pub/Sub Architecture - Global RENE.
When multiple sources can serve a request, a tussle may occur due to the actors’ different preferences for the source to be used, stemming from their cost concerns, performance attributes, regulatory constraints, or other local policies. In particular, here, an IBP-owned global RENE may forward a subscription originating from a local RENE (i.e. RENE3) to publishers that are located behind a transit link (i.e. P1), even if the information item is also available to the original subscriber through a peering link (i.e. P2). In this case, the revenues of the IBP increase, while the interconnection costs of the two ISPs (i.e. ISP1 and ISP3) also increase; this is naturally positive for the IBP but negative for the ISPs. Therefore, the IBP will most likely match S3 with P1.
As a response to the increased inter-connection costs, RENE3 may decide to re-direct subscriptions for non-locally cached content to a different global RENE provided by another party, e.g. a third party such as Google’s DNS. However, in this case a tussle similar to the aforementioned one could arise, if the third-party provider has conflicting interests, e.g. performs name resolution without taking into account peering agreements between providers, but only ‘distance’ in terms of latency.
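The two conflicting matching policies can be contrasted in a small sketch; the link types, costs and latencies below are purely illustrative:

```python
# Each candidate publisher is annotated with the link type the subscriber's
# ISP would use to reach it, and an estimated latency.
publishers = [
    {"id": "P1", "link": "transit", "latency_ms": 40},
    {"id": "P2", "link": "peering", "latency_ms": 55},
]

# Illustrative per-delivery cost for the subscriber's ISP by link type.
LINK_COST = {"peering": 0.0, "transit": 1.0}

def match_peering_aware(candidates):
    """ISP-friendly policy: cheapest link first, latency as tie-breaker."""
    return min(candidates, key=lambda p: (LINK_COST[p["link"]], p["latency_ms"]))

def match_latency_only(candidates):
    """Third-party policy: ignores business relationships, minimizes latency."""
    return min(candidates, key=lambda p: p["latency_ms"])

print(match_peering_aware(publishers)["id"])  # P2 (peering link preferred)
print(match_latency_only(publishers)["id"])   # P1 (faster, but transit cost ignored)
```

The divergence between the two selections is exactly the conflicting-optimization-criteria tussle: the same subscription yields different publishers depending on whose objective the resolver optimizes.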
Alternatively, RENE3 may decide to issue subscriptions for non-locally published content to his peers, i.e. to local RENEs located in, and controlled by, ISPs that have established peering agreements with ISP3. However, if publishers for the requested content items do not exist in these peering domains, this would insert an extra delay into the subscribers’ service, and RENE3 would eventually issue the subscription to the IBP’s global RENE. To avoid this extra delay (and the consequent service deterioration), RENE3 could exchange new publications with his peers a priori, or, alternatively, establish a global RENE which would be updated only with publications originating from the respective peering domains, and even cache non-locally available content (practically building a content island with his peers). In this case, only subscriptions for publications not available in any of his peering domains would be issued to the IBP.
In both cases of RENE3 issuing subscriptions either to another party’s global RENE, or to a RENE established by the peering domains of ISP3, the IBP’s global RENE would lose part of

his content delivery revenues (due to the RENE). The tussle outcome is depicted in Figure
34.

Figure 34: Candidate Tussle Evolution for Conflicting Optimization Criteria.
Similar tussles could appear if the local RENE is provided by a third-party, which may have
different incentives. Such conflicting optimization criteria might imply a straightforward
increase of interconnection cost for the ISP, and possibly degraded end-users’ QoS.
A.4.3 Use-Case: Content Delivery and Imbalance on Peering Link
We consider a setup similar to that of Section A.4.2, where ISP2 and ISP3 have established a peering agreement between them, but we assume here that ISP3 is equipped with caches for storing locally popular content and for serving subscribers both in ISP3 and in ISP2 (ISP3’s peering domain) (see Figure 35). Within each ISP’s premises, local RENEs are also provided, which are connected to a global RENE service. The RENEs are assumed to be controlled by the respective network infrastructure provider (ISP or IBP) itself. Potential subscribers of an information item exist in all three ISPs.

Figure 35: Content Delivery in a Pub/Sub Architecture – Peering Agreement Between ISP2
and ISP3.

As aforementioned, a peering agreement is established between ISP2 and ISP3. Due to the fact that the TM calculates paths based on both network and business information, i.e. the TM is aware of the peering link, the cache servers in ISP3 are also matched by the TM with peers in ISP2, without any constraint in terms of, e.g., bandwidth, traffic volume, etc. This practically means that ISP2’s subscribers (and ISP2 himself, indirectly) enjoy the benefit of the cache installation by ISP3, but only the latter bears the deployment and operational costs for the cache servers. To deal with this unfairness, ISP3 could decide to employ an admission policy on his cache servers; e.g., the policy could restrict the cache servers to serving only ISP3’s own customers (to our knowledge, admission control on the cache has not been considered so far in PURSUIT’s architecture).
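Such an admission policy amounts to a one-line check at the cache front-end. In the sketch below, the subscriber identifiers and policy names are hypothetical:

```python
# Hypothetical subscriber IDs belonging to ISP3 (the cache owner).
LOCAL_CUSTOMERS = {"S3a", "S3b"}

def admit(subscriber_id, policy="local-only"):
    """Cache admission control.

    'local-only' serves only the operator's own customers; 'open'
    reproduces the unconstrained behavior, serving peers as well.
    """
    if policy == "local-only":
        return subscriber_id in LOCAL_CUSTOMERS
    return True

print(admit("S3a"))                  # True  (ISP3's own customer)
print(admit("S2a"))                  # False (peer's customer rejected)
print(admit("S2a", policy="open"))   # True  (no admission control)
```

The restrictive policy removes ISP2's free ride on ISP3's cache investment, at the price of pushing ISP2's subscriptions back onto the transit link.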

Figure 36: Candidate Tussle Evolution for Transmission/Routing Due to Imbalance on the
Peering Link.
In the absence of the ability to apply such a restrictive policy, ISP3 could then break the plain peering agreement and negotiate with ISP2 to establish a paid peering one (a spillover from the admission functionality to the transmission one). Based on this new agreement, each time the peering ratio is violated, the peer responsible for the violation will be charged for the exceeding traffic. Note that in content delivery schemes we assume that mainly the inbound traffic is taken into account when calculating inter-connection charges, i.e. the domain that consumes content is charged by the domain that serves it. Therefore, the content traffic from ISP3 to ISP2 would cause an asymmetry in the traffic ratio of the peering link, i.e. excessive traffic from ISP3 to ISP2, but only signaling traffic from ISP2 to ISP3. This imbalance in the traffic ratio under specific peering agreements (e.g. the aforementioned paid peering one) results in ISP2 being charged for the excessive incoming traffic.
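A minimal settlement sketch for such a paid-peering clause follows; the agreed ratio and price are hypothetical:

```python
def paid_peering_charge(inbound_gb, outbound_gb, agreed_ratio, price_per_gb):
    """Charge owed by the peer whose inbound traffic exceeds the agreed
    inbound/outbound ratio; zero when the ratio holds.

    inbound_gb and outbound_gb are measured from the charged peer's viewpoint.
    """
    allowed_inbound = agreed_ratio * outbound_gb
    excess = max(0.0, inbound_gb - allowed_inbound)
    return excess * price_per_gb

# ISP2's viewpoint: 900 GB of content flows in from ISP3's caches, while
# only 50 GB (mostly signaling) flows out; agreed ratio 2:1, 0.02 units/GB.
print(paid_peering_charge(900.0, 50.0, agreed_ratio=2.0, price_per_gb=0.02))  # 16.0
```

Under a balanced link (or once ISP2 deploys his own caches and restores symmetry), the excess term vanishes and no money changes hands.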
In order to respond to the inter-connection cost increase, ISP2 can monitor the traffic, both incoming and outgoing, on the peering link and, when the peering ratio is about to be violated against him, treat (through his TM) publishers in the peering domain (i.e. ISP3) as publishers in non-peering ones. Alternatively, ISP2 could deploy his own cache servers for

serving subscriptions issued from both local and remote peers. The cache deployment within ISP2 would restore traffic symmetry on the peering link, unless ISP3 decided to increase his cache capacity (evaluation results reveal that the higher the cache capacity, the higher the outgoing traffic when no policy is applied on the cache [31]). The evolution of this tussle is depicted in Figure 36.
A.4.4 Discussion
As is obvious from the aforementioned analysis, the actor who controls the name resolution is able to restrict, or even determine, the options available to others. However, such an actor (like an ISP, when the end-user has used a different RENE provider) may still be able to use a different source than the proposed one. Furthermore, other stakeholders could enter the name resolution market. In an extreme case, even a CP (Content Provider) may react by also providing his own RENE; for example, YouTube could serve its information space by redirecting end-users to servers according to its own criteria. Such a RENE may also be provided as a premium service to other CPs. Additionally, traditional CDN providers (e.g. Akamai) could react by announcing all the content items (publishers and caches) they are aware of to multiple NRS providers, or even by deploying their own RENEs. Nevertheless, the name resolution role is central to ICN and of high interest to most stakeholders in this setup.
Finally, cache location and ownership are also important factors that impact the traffic ratio on the inter-connection links; the impact on peering links in particular is very significant for our analysis, since it may disrupt the peering ratio and end up breaking the peering agreement between domains.

A.5 Detailed Tussle Analysis for ULOOP Technologies
The ULOOP project follows an evolutionary approach to the Future Internet, suggesting that overlapping Wi-Fi access networks, operated mostly by end-users, could form a “wireless local-loop” that complements, or in some cases substitutes for, the ISPs’ infrastructure. The idea is to develop the necessary software and networking mechanisms that would foster the creation of a collaborative environment allowing robust, trustworthy, low-cost, and energy-efficient communications. Two main use cases are considered (each of them with several sub-cases, called scenes), with this new wireless local-loop offering:
• Expanded network coverage, including 3G offloading (use case 1; see Figure 37)
• Assistance with context-aware information sharing among nearby users (use case 2).

Figure 37: ULOOP Use Case “Extended Coverage/Offload” [2]
The starting point for a detailed tussle analysis with respect to user-centric networking
(UCN) as designed and developed within the ULOOP project is four-fold:
• Relevant tussles identified: The initial analysis done within SESERV for ULOOP's
UCN technology identified a number of tussles (cf. SESERV Deliverable D2.1 [4]).
These tussles were discussed in a phone conference and their relevance was
agreed upon in general. ULOOP has determined in addition three tussles (all of
which are fully in-line with those tussles outlined in D2.1) as follows:
o Requestee vs. Requester [54]: Resources on the ULOOP gateway are limited.
Extra costs may be incurred by a requestee to serve requests.
o ISP vs. ULOOP: Additional traffic may be generated without any extra income, when users share their flat-rate internet connection with other users. ISPs may fear a loss of control over end-users. Due to ULOOP’s open approach, ISPs may be exposed to increased competition in the access business.
o 3G operator vs. ULOOP: Operators may fear a loss of revenue, when
customers gain internet access through a ULOOP access point instead of
their per-volume-accounted 3G connection.

[54] The first stakeholder is either an end-user relaying data or sharing the connectivity of his access point, or a connectivity provider deploying the ULOOP technology to provide access in order to offload traffic from a certain part of his network infrastructure (which would also make him a (business) user). The latter stakeholder is in every case a user requesting relaying or connectivity services.

The first tussle is discussed in Section A.5.2, the latter two in Section A.5.3.
• Overall beneficial situation for relevant stakeholders assumed: ULOOP’s socio-economic analysis states that the situation is, in principle, beneficial to all relevant stakeholders, since the only stakeholders that would lose from deploying ULOOP technology are those that there is no need to motivate anyway (e.g., attackers, malicious users, untrusted users).
• Cooperation incentives framework explored: In the light of the relevant tussles identified, and despite the overall beneficial situation assumed, ULOOP has identified situations in which cooperation incentives may become necessary to motivate different stakeholders. Consequently, ULOOP initiated a study on a cooperation incentive framework which incorporates intrinsic motivations (direct benefits, pro-social nature, sense of community) and extrinsic motivations or rewards (reputation, reciprocity, and monetization by means of a virtual currency).
• Need for cooperation incentives substantiated by focus group: The focus group on “User-centricity and Transparency of Future Internet Technology” held at the SESERV Athens workshop clearly substantiated ULOOP’s direction of studying cooperation incentive schemes. The tussles identified among different types of users, between users and connectivity providers, as well as between competing connectivity providers, have shown that intrinsic motivation alone is not expected to lead to a stable tussle outcome. The need for a combination of intrinsic and extrinsic motivation mechanisms has become apparent. This finding does not render the second bullet point obsolete, but it reveals that there is not, in each and every situation, a strong enough intrinsic cooperation incentive.
Given this four-fold starting point, SESERV and ULOOP coordinate in the tussle analysis for UCN along two complementary dimensions. First, the need for cooperation incentives, as assumed by ULOOP and as substantiated by the focus group, is investigated from a game-theoretic perspective. The purpose of this approach is to build a formal basis to explain the existence or absence of cooperation incentives. This happens through the definition and study of games among relevant stakeholders – with a focus on users (requester, requestee) – in order to see which payoffs result with regard to the two options available, namely to cooperate with another user or not to cooperate. Section A.5.1 documents the game-theoretic analysis as outlined and draws parallels to the prisoner’s dilemma.
The game theoretic analysis of the ULOOP technology suggests that there are cases
where cooperation can take place without extrinsic motivators. Since this cooperation is
predicted only for certain parameter constellations, and since it can neither be assumed
that these parameters are known to the users nor that users act rationally, this finding does
not contradict the focus group finding that extrinsic motivators are necessary to stimulate
cooperation. That is to say, even though intrinsic motivation should lead to cooperation
when the scenario is considered from a mathematical viewpoint, the corresponding
real-world scenario is still likely to require extrinsic motivation.
Second, a tussle evolution is investigated for a scenario in which cooperation incentives
among different types of users might be present even without any extrinsic,
ISP-driven motivator. In this scenario, ULOOP users become re-sellers of access to
roaming end-users, resulting in a tussle between ULOOP users and ISPs
(infrastructure-based connectivity providers). Section A.5.3 develops the tussle with
respect to its evolution by a study of anticipated actions and counter-actions taken by the
respective affected stakeholders.
A.5.1 Game Theoretic Analysis of Cooperation Incentives in UCN – A
Prisoner’s Dilemma
In this section, game theory is applied to analyze formally whether incentive
mechanisms are necessary for the technology developed by ULOOP.
A.5.1.1 The Basic Model
The prisoner’s dilemma, a famous problem in game theory, can be described as a game
with two players, where each can choose to either cooperate or defect. If both players
cooperate, both get a payoff of a; if only one cooperates and the other defects, the
cooperating one is worst off and gets b, while the defecting one gets the maximum payoff of
c. If both defect, both get d. It is essential that c > a > d > b.
This game maps directly onto the ULOOP technology, where cooperating means forwarding
packets and defecting means not forwarding packets, which results in exactly the same
payoff relations: If both users forward, both get a payoff of a. If only one forwards but the
other does not, the forwarding user is worst off, as he wastes his battery but does not see
his packets forwarded, while the selfish user sees his packets forwarded and saves
battery (and thereby gets the maximum payoff). If neither forwards, the payoff is low but still
greater than forwarding when the other user does not (as at least no battery is wasted).
Accordingly, the variables mentioned above can be defined as follows:
• c = benefit (payoff of a defecting user if the other one cooperates),
• a = benefit - cost (payoff of a cooperating user if the other one cooperates),
• d = 0 (payoff of a defecting user if the other one defects),
• b = - cost (payoff of a cooperating user if the other one defects),
where benefit is the worth of the service for the recipient (e.g., getting packets
forwarded), while cost is the cost incurred on the cooperating user to serve the request
(e.g., the cost of battery drain). As long as benefit > cost (which is a reasonable
assumption), we have c > a > d > b, as in the “normal” prisoner’s dilemma.
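The payoff structure just defined can be written down and checked directly. The following sketch (with illustrative values benefit = 3 and cost = 1, which are assumptions, not ULOOP measurements) verifies that defection strictly dominates cooperation and that the prisoner’s dilemma ordering c > a > d > b holds:

```python
# One-shot prisoner's dilemma for packet forwarding.
# Payoffs follow the definitions above: c = benefit, a = benefit - cost,
# d = 0, b = -cost.  The numeric values are illustrative assumptions.
benefit, cost = 3.0, 1.0
a, b, c, d = benefit - cost, -cost, benefit, 0.0

# payoff[my_action][other_action]; actions: "C" (forward) or "D" (defect)
payoff = {"C": {"C": a, "D": b}, "D": {"C": c, "D": d}}

# Defection strictly dominates cooperation: whatever the other user does,
# defecting yields a strictly higher payoff.
for other in ("C", "D"):
    assert payoff["D"][other] > payoff["C"][other]

assert c > a > d > b  # the ordering that defines a prisoner's dilemma
print("defection dominates; (D, D) is the unique Nash equilibrium")
```

Because defection dominates for both players, (D, D) is the unique equilibrium of the one-shot game, matching the argument in the text.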
Game theory shows that any rationally acting player exposed to the prisoner’s
dilemma will defect [6], which means that any rationally acting user will not forward
packets. Only if the game is played for infinitely many iterations may players choose to cooperate.
However, defecting remains a reasonable/likely choice, which yields a formal argument that
incentive mechanisms are inevitable for the deployment of ULOOP technology. In the
following sections, this game-theoretic model is extended to gain further insights into the
behavior that can be expected from users deploying the ULOOP technology.
A.5.1.2 Modeling Incentive Mechanisms
Since any rational player would defect, while the maximum collective welfare is achieved
when both players cooperate, incentive mechanisms are necessary to make users
cooperate. Incentives can be modeled as a constant k that is added to a and b; an
incentive compensates for the cost of cooperating. Put mathematically, if a + k > c and
b + k > d, the incentive is sufficient to motivate all users to cooperate. Note that if
only one of the two inequalities holds, this has interesting implications for the behavior of
players; since this scenario is complicated but less likely, it is not elucidated in detail here.
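With the payoff definitions of Section A.5.1.1, both inequalities reduce to the same condition, k > cost: a + k > c becomes (benefit - cost) + k > benefit, and b + k > d becomes -cost + k > 0. A small sketch (again with illustrative payoff values) makes this explicit:

```python
# Check when a reward k, added to the cooperation payoffs a and b, makes
# cooperation the better choice: a + k > c and b + k > d.
# The numeric values are illustrative assumptions, not ULOOP measurements.
benefit, cost = 3.0, 1.0
a, b, c, d = benefit - cost, -cost, benefit, 0.0

def incentive_sufficient(k):
    """True if reward k motivates cooperation regardless of the opponent."""
    return a + k > c and b + k > d

# Both inequalities collapse to k > cost:
assert not incentive_sufficient(cost)      # k == cost is not quite enough
assert incentive_sufficient(cost + 0.01)   # any k > cost suffices
```

In other words, in the one-shot game the reward must (at least slightly) exceed the cost of forwarding.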
A.5.1.3 Modeling Multiple Users
As it is usually not the case that only two players interact, it is necessary to discuss how
the prisoner’s dilemma can be extended to more than two players. For this purpose, mixed
strategies are introduced, a well-known modeling tool in game theory. In a
mixed strategy, players choose their action according to a probability distribution (to
become unpredictable) over the set of all available actions. The expected payoff for a user
is then calculated as the sum of the payoffs of each outcome, multiplied by its respective
likelihood (which can be deduced from the probability distributions of the players). For
example, assume that two players play the prisoner’s dilemma, where player A always
cooperates and player B cooperates with a probability of 0.6 (and consequently
defects with a probability of 0.4); the expected payoffs would then be calculated as follows:
• expected payoff for A = 0.6*a + 0.4*b,
• expected payoff for B = 0.6*a + 0.4*c.
Of course, A could also play a mixed strategy, which would make the calculation slightly
more complicated.
When a user A finds himself in an environment where y% of the other users cooperate,
this environment can be modeled as a single opponent that cooperates with a probability of
x = 0.01*y and accordingly defects with a probability of 1-x. Therefore, when A decides to
cooperate, he will get an expected payoff of x*a + (1-x)*b, and an expected payoff of
x*c + (1-x)*d if he decides to defect. Inserting the values given in Section A.5.1.1 for the
variables a-d, we obtain the following result:
• A’s payoff when cooperating = x*a + (1-x)*b = x*(benefit - cost) - (1-x)*cost =
x*benefit - x*cost - cost + x*cost = x*benefit - cost
• A’s payoff when defecting = x*c + (1-x)*d = x*benefit + (1-x)*0 = x*benefit
Interestingly, this result shows that the difference between A’s payoffs for cooperating and
for defecting is exactly cost, regardless of the environment he finds himself in, i.e., regardless
of the expected behavior of all other users. This implies that a user will not base his choice
between cooperating and defecting on the environment he finds himself in; even
an environment with a high y will not convince a selfish user to cooperate.
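The constant gap of cost between the two expected payoffs can be checked numerically for any cooperation rate x (values are again illustrative assumptions):

```python
# Expected payoffs of user A in an environment where a fraction x of the
# other users cooperate (Section A.5.1.3): cooperating yields
# x*benefit - cost, defecting yields x*benefit, so the gap is always
# exactly `cost`, whatever x is.
benefit, cost = 3.0, 1.0

def expected_payoffs(x):
    """Return (payoff when cooperating, payoff when defecting)."""
    cooperate = x * benefit - cost
    defect = x * benefit
    return cooperate, defect

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    coop, defect = expected_payoffs(x)
    # The advantage of defecting is independent of the environment:
    assert abs((defect - coop) - cost) < 1e-9
```

Even at x = 1.0 (an entirely cooperative environment), defecting remains better by exactly cost, which is the formal content of the claim above.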
In Section A.5.1.1 we saw that if only one interaction takes place, in which the players
know nothing about each other’s behavior, any rational player will defect. This
implies that (in the absence of an incentive mechanism) any rational user will not
forward other users’ packets if he does not know whether the other user is cooperative.
The result presented in this section has the even stronger implication that not even
knowing for sure that his opponent will cooperate will charm a player into cooperation.
Therefore, there is no point in the ULOOP technology advertising the actual y of a
ULOOP community in order to convince selfish users to cooperate. Note, moreover, that this
result excludes the possibility of a cooperative critical mass that, once reached, would
convince all users to blindly cooperate. This result is somewhat demotivating in that it
shows that there is no way of convincing users to cooperate without changing the payoff
values (through incentive mechanisms, as discussed in Section A.5.1.2).
However, in the next section we consider the case in which several interactions take place
and find that, even without changing payoff values, cooperation can be rational.
A.5.1.4 Modeling Multiple Rounds
Until now, only the modeling of rewards and multiple users has been discussed. However, since
users usually interact with each other several times, it is also important to investigate
how two users will behave if they have to forward packets for each other several times
in a row. It is reasonable to argue that players will usually choose between the two
strategies "Tit-for-Tat" and "All-D". The Tit-for-Tat strategy is defined by cooperating in the
first round and then always playing the strategy the opponent played in the previous
round. This implies that if both players choose Tit-for-Tat, they will always cooperate, and that
a defecting user will be punished by his opponent in the next round. The All-D strategy is
defined as always defecting, regardless of the behavior of the opponent. We now show
that for a reasonably high expected number of rounds, both players
should choose Tit-for-Tat over All-D.
Assume users A and B can choose between the Tit-for-Tat strategy and the All-D strategy.
Each wants the other user to forward a certain number of packets, where these numbers
need not be equal. Therefore, one of the players
will see all of his packets successfully transmitted and then stop the interaction, i.e., stop
forwarding packets of the other user (if he ever did). Of course, he could also selflessly
decide to keep forwarding the packets of the other user, but since we want to investigate the
case of reasonably selfish users, we have to assume that users forward at best as long as
they themselves have packets they want forwarded. Therefore, either A or B will stop the
interaction, or both will stop after the same round (the
latter being rather unlikely). The worst case for A, whose point of view we adopt in the
following, is that B stops the interaction. In addition to B not forwarding any more packets
for A once his last packet has been transmitted, B will also defect in (what only he knows is)
the last round, even if he has chosen Tit-for-Tat as his strategy before.
Assume B is interested in an interaction of n+1 rounds and A does not know n. If we
assume the worst case for A (B stops the interaction because A is interested in an
interaction of more than n+1 rounds, and B does not cooperate in the last round because he
knows he cannot be punished afterwards), it is easy to see that A gets a payoff of
(i) n*a + b, if both chose Tit-for-Tat
(ii) b + n*d, if A chose Tit-for-Tat and B All-D
(iii) c + n*d, if B chose Tit-for-Tat and A All-D
(iv) (n+1)*d, if both chose All-D
Obviously we have (iii) > (iv) > (ii), and we do not know whether (i) > (iii) or vice versa. The
latter inequality is equivalent to (i) - (iii) > 0 and can be resolved as follows:
(i) - (iii) > 0
<=> n*a + b - c - n*d > 0
<=> n*(benefit - cost) - cost - benefit > 0
<=> (n-1)*benefit > (n+1)*cost
<=> benefit > cost*(1 + 2/(n-1))
Setting cost = 1, the condition becomes benefit > (n+1)/(n-1). Inserting fixed values for n
yields the values for benefit shown in Table 23.
Table 23: Conditions for the Tit-for-Tat Strategy to be Reasonable for a Fixed Number of
n+1 Rounds (Benefit vs. Cost, with cost = 1).
n benefit >
1 -
2 3
3 2
4 1.67
5 1.5
6 1.4
7 1.33
8 1.29
9 1.25
10 1.22
This implies, for example, that if 6 rounds are to be played, the row with n=5 (since
number of rounds = n+1) has to be considered, which yields benefit > 1.5 (with cost = 1).
In other words, if the benefit is at least 50% greater than the cost and
the number of rounds that players are usually interested in is at least 6, it is more
reasonable to play Tit-for-Tat. Consequently, if the conditions described by the table are
met, even a selfish user will choose Tit-for-Tat (as long as he is assumed to act rationally), and
therefore incentive mechanisms would not be necessary.
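The threshold can be computed directly; the following sketch reproduces the values of Table 23 for cost = 1 (the function name tft_threshold is ours, introduced only for illustration):

```python
# Threshold from Section A.5.1.4: Tit-for-Tat beats All-D for user A when
#   benefit > cost * (1 + 2 / (n - 1))
# for an interaction of n+1 rounds.  With cost = 1 this equals (n+1)/(n-1).
def tft_threshold(cost, n):
    """Minimum benefit for Tit-for-Tat to be the rational choice."""
    if n <= 1:
        return float("inf")  # a single, unpunishable round is never enough
    return cost * (1 + 2 / (n - 1))

# Reproduce Table 23 (cost = 1), e.g. n=5 -> 1.5, n=4 -> 1.67:
for n in range(2, 11):
    print(n, round(tft_threshold(1.0, n), 2))
```

Note that the threshold falls toward cost as n grows, which is exactly the observation that more expected rounds make cooperation more reasonable.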
Note that the inequality above (benefit > cost*(1+2/(n-1))) gives the relation between the
expected number of rounds, benefit, and cost that is necessary to make cooperative
behavior reasonable. However, probably none of these parameters can easily be
influenced by ULOOP developers. As mentioned in Section A.5.1.2, a reward for
cooperative behavior can be introduced, modeled as a constant k that is
added to a and b. If we resolve (i) - (iii) > 0 with these new values for a and b, we derive the
inequality benefit > (cost-k)*(1+2/(n-1)) (instead of benefit > cost*(1+2/(n-1))). ULOOP
developers can insert fixed values for the three variables mentioned above into this
inequality and thereby determine the minimum reward for cooperative
behavior, k, that is necessary to make cooperative behavior reasonable.
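Solving benefit > (cost - k)*(1 + 2/(n-1)) for k gives k > cost - benefit*(n-1)/(n+1). The following sketch turns this into a helper for computing the minimum reward (illustrative values; the helper name min_reward is hypothetical):

```python
# From Section A.5.1.4: with a reward k for cooperation, Tit-for-Tat is
# rational when benefit > (cost - k) * (1 + 2 / (n - 1)).  Solving for k:
#   k > cost - benefit * (n - 1) / (n + 1)
def min_reward(benefit, cost, n):
    """Smallest reward k that makes cooperation rational for n+1 rounds
    (0 if cooperation is already rational without any reward)."""
    if n <= 1:
        return cost  # one-shot case: the reward must cover the full cost
    return max(0.0, cost - benefit * (n - 1) / (n + 1))

# Illustrative values (assumptions, not ULOOP measurements):
print(min_reward(1.2, 1.0, 5))  # a small top-up (about 0.2) is needed
print(min_reward(2.0, 1.0, 5))  # none needed: threshold 1.5 < benefit 2.0
```

As the text notes, a large enough k works even for a one-time interaction, but accounting for the expected number of rounds keeps the required reward small.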
The formula presented allows predicting the behavior of users in a concrete
implementation of the ULOOP system. When the inequality is fulfilled (after the parameters
given by the implementation have been inserted; note that determining these parameters
may be complicated, and in particular the expected number of rounds can probably only be
determined by observation), cooperative behaviour can be expected even from selfish
users. In case it is not fulfilled, the second inequality can be used to determine the
amount of incentive that is necessary to convince users to cooperate.
A.5.1.5 Conclusion from the Game Theoretic Analysis
When the tussle analysis is applied to a certain technology, the affected stakeholders have to
be identified and subsequently the tussles between them. In a third step, these tussles
have to be considered in detail in order to determine how stakeholders will behave, how
these tussles can evolve, and how technology designers can preclude or at least resolve
them in a fair manner. The game theoretic model developed in this section obviously had a
very specific tussle focus, in that it exclusively addressed the tussle arising between users
regarding the mutual forwarding of their data. Because this tussle is essential in ad-hoc
networks, it deserves particular attention. The tussle under consideration sees
two end-users as stakeholders who share the interest of having their data forwarded
by the other. What makes it complicated is their additional interest of saving as much
battery as possible, wherefore they have a strong intrinsic motivation not to forward the
other’s traffic. Given these specifications, the tussle is evident: both want the same favor of
the other but are intrinsically motivated not to return it.
The first implication drawn in this section is that, if two users have the opportunity to
exchange one such forwarding favor, neither user will do so if he behaves
rationally. Although both users thereby save the energy of forwarding the other’s traffic,
user satisfaction would be much higher if both users forwarded the traffic. Therefore,
although this outcome cannot be considered unfair (tussle analysis aims at resolving
tussles in a fair way), it is reasonable to ask how this tussle can be resolved such that both
users return the favor.
In order to answer this question, it was investigated whether the users’ uncertainty about each
other’s behavior is the cause of this selfish behavior. More generally, the scenario was
investigated in which a user is confronted with an environment of users whose likelihood of
cooperation he knows. As it turned out, not even an environment consisting
solely of cooperative users will charm a selfish user into cooperation.
After these somewhat demotivating findings, the last aspect of real-world scenarios was
investigated, namely the repeated interaction of users. This finally allowed positive
conclusions to be drawn about the possibility of motivating users to cooperate: it was possible
to determine an inequality, depending on the expected number of interactions, the
benefit of having a packet forwarded, and the cost of forwarding a packet, which shows that
cooperation becomes rational once these three parameters reach a certain ratio. Since these
three parameters may not be easily influenced, a fourth parameter modeling a reward for
cooperation was added to the inequality. Thereby, it became possible to determine the
amount of reward necessary to motivate users to cooperate in case the three
aforementioned parameters do not reach the necessary ratio. This is particularly
interesting, as such a reward can be made large enough to motivate users to cooperate even
in a one-time interaction, but it may be desirable to take the other parameters into account
in order to keep the introduced reward as small as possible.
Probably the most interesting finding from these inequalities that ULOOP developers can
practically take into account is that the more interactions between users can be expected,
the more reasonable it becomes to cooperate. Consequently, if a reputation system is
introduced, it allows users to denigrate others who do not cooperate. Thereby, if a
user acts selfishly but cannot be “punished” by the user who actually falls victim to this
selfishness, the latter can use the reputation system to warn other users about the
former. That is to say, since a reputation system allows the number of
interactions to be virtually increased to infinity, no direct rewards are needed to resolve the
tussle of mutual forwarding in a fair and eligible way.
In short: without any reputation or reward system, no rational user will ever cooperate if
only one interaction takes place. If a certain number of interactions can be expected to
take place, the tussle might resolve with the desired outcome (cooperation on both sides
takes place), depending on certain parameters. Furthermore, cooperation can always be
incentivised by sufficiently high direct rewards given by the system. However, the
probably most sustainable way of resolving this tussle is a reputation system that allows
users to report the experience they have had with other users.
A.5.1.6 Tussle Analysis
Based on the argumentation above, a tussle evolution can be rendered, which is also
illustrated in Figure 38. The stakeholders in this tussle evolution are mainly users, where it
is important to discriminate between rational (and therefore selfish) and altruistic users.
While their behaviour can be assumed to be opposed, their interest is the same: they
want their packets to be routed through the ULOOP network, and they do not want to
spend energy. The way they pursue the latter goal leads to their differing behaviour:
selfish users will decide to save energy by not forwarding any traffic for other
users, while altruistic users will do so, as it reduces the overall energy consumption.
Another stakeholder that may get involved, depending on how the tussle that emerges
from these conflicting interests is resolved, is the ULOOP technology maker, as we
will see below.
Figure 38: Candidate Tussle Evolution for Traffic Forwarding with ULOOP
The tussle evolution begins with the technology being introduced without any mechanism
to enforce cooperation. As argued in Section A.5.1.4, only in quite specific settings will
multiple rounds lead to cooperative behaviour of selfish users, where the chances for
cooperation increase with the number of times that users interact. However, even in such
cases, uncooperative behaviour will occur regularly. Selfish users only cooperate when
they fear that their relay will otherwise stop cooperating. However, if they can change relays,
they have no reason to behave cooperatively, wherefore it is likely that they will deploy
a download manager to switch relays frequently. Therefore, if no incentive mechanisms
are implemented by ULOOP, the tussle evolution will reach a state in which selfish users
frequently switch their relay, as they rapidly fall from grace for not forwarding packets.
This state is obviously neither desirable nor fair, as the percentage of forwarded traffic in the
network will be marginal and contributed entirely by altruistic users.
In order to overcome this situation, ULOOP developers have three choices, each of which is
discussed in turn below.
The first and most obvious way to incentivize users to cooperate is to reward cooperation
(cf. Section A.5.1.2). With respect to transmission, this will lead to a fair outcome, as rewards
can theoretically be chosen so high that even selfish users decide to cooperate.
However, a drawback is that rewards provide only positive but no negative feedback,
which is why not every user might choose to cooperate. Furthermore, this
solution also generates a spillover from the transmission functionality to the monitoring
functionality, which is part of security: this spillover arises from the fact that rewards have
to be paid by some entity, which will likely have to be the ULOOP technology developer. In
this case, the tussle has been resolved in a fair manner for the transmission functionality,
but clearly not in a fair manner with respect to the security functionality.
The second approach to introducing incentive mechanisms is to implement a reputation
system. As argued in Sections A.5.1.4 and A.5.1.5, this will also lead to high
cooperation rates and therefore to a good and possibly stable tussle outcome with respect
to the transmission functionality. What is more, it has high potential to lead to a fair
state within the monitoring functionality: because the reputation system would be
implemented by the ULOOP nodes themselves, the burden of running the corresponding
mechanisms would be evenly distributed. This makes this approach preferable to
the rewarding mechanism. This advantage also becomes evident in Figure 38, where
the circles of both solutions are centred for the transmission functionality (both are fair), but
only the circle of the reputation solution is centred within the monitoring
functionality block. Note that all four circles are tagged with question marks, as the real-
world outcome highly depends on how the corresponding mechanisms are implemented, and
the evidence for their efficiency is currently backed up only by theoretical arguments.
The third mechanism to provide an incentive to cooperate is to announce the percentage of
cooperating users for a specific ULOOP network. The idea is that even selfish users might
be charmed into cooperation if the percentage of cooperating users is high. However, as
discussed in Section A.5.1.3, this should not affect the behaviour of selfish/rational users at
all. Therefore, such a mechanism will not lead to a better outcome with respect to the
transmission functionality.
A.5.2 Traffic Management on the ULOOP Gateway
Besides network coverage extension, the sharing of Internet connectivity between ULOOP
users is a key topic for ULOOP. As discussed in the next section, different connectivity
providers might see losses in revenue from these sharing activities, wherefore tussles
between users and connectivity providers may arise. However, a tussle between
players within the user stakeholder group was also identified by ULOOP. This tussle occurs
when the serving of requests incurs extra cost on a requestee (sharing his access point),
which is obviously not in the latter’s interest. Due to physical conditions, the requestee can
shape the traffic so as not to incur extra traffic (provided he has access to the right tools) or opt
out of sharing connectivity altogether. Although the requester is not intrinsically motivated to not
incur extra traffic on the requestee, he does not want the requestee to opt out of sharing
his bandwidth, which can be considered an extrinsic motivation. Since opting out would also
bring a loss to the requestee (as he would not see any revenue from sharing), both will pursue
the goal of not incurring extra cost. Consequently, the tussle can be resolved either by
providing the requestee with the tools necessary to shape the traffic, or by implementing some
way of signalling to the requester the bandwidth that may be used. As we can see, the requestee
is the one who can suffer potential drawbacks from this tussle, but also the one who has
control over the device to decide the tussle. Therefore, by providing the right means to
him, the tussle can be resolved with a stable outcome.
A.5.3 Tussle Evolution for Connectivity Re-selling in UCN
One way to address inherently 'uncooperative' situations is to develop an incentive
mechanism that overcomes these hurdles. The cooperation incentives framework under
investigation in the ULOOP project, as well as the game theoretic analysis provided here,
do exactly that. The tussle evolution studied in this section, however, originates from the
opposite end. It centers around a situation that starts with a group of stakeholders that is,
so to say, ‘unhappy’ with the way they can obtain connectivity – a situation that may give a
second group of stakeholders an incentive to happily adopt and use ULOOP's
technology. The two stakeholder groups referred to here are both users: the first in the role
of a traveler abroad and, thus, in a roaming situation; the second in the role of a local
user. In ULOOP terminology, the first would eventually become an end user, the second
would be termed a ULOOP user.
As is well known from being abroad, data roaming tariffs can be (and usually
are) high. For instance, a traveler consuming a MByte in a neighboring country without any
special data option could easily have to pay 10 EUR. On the other hand, when using the
mobile network at home, even prepaid users can today get a MByte at a rate of 3 cents.
This huge difference creates a big opportunity for mobile phone holders in their
home networks to re-sell connectivity to roaming travelers. Even if a traveler were to pay a
local reseller 1 or 2 EUR per MByte, this (relatively high) rate would still be very attractive
to the traveler.
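The margin behind this opportunity can be made explicit with the prices quoted above (all figures are the illustrative examples from the text, not actual tariff data):

```python
# Illustrative re-selling economics, using the prices quoted in the text
# (EUR per MByte); these are examples, not measured tariffs.
roaming_price = 10.00  # what the traveler pays without re-selling
home_price = 0.03      # what the local user pays his home operator
resale_price = 1.50    # what the local user charges the traveler

reseller_margin = resale_price - home_price   # local user's gain per MByte
traveler_saving = roaming_price - resale_price  # traveler's gain per MByte

print(f"reseller earns {reseller_margin:.2f} EUR/MByte")  # 1.47
print(f"traveler saves {traveler_saving:.2f} EUR/MByte")  # 8.50
```

Both sides gain substantially per MByte, which is exactly why the tussle with operators described next can be expected to arise.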
The technology needed to implement this is two-fold: in the minimum case, mobile
hotspot software (for tethering) on the local user's mobile device, combined with a simple
accounting and payment service that allows easy C2C (Consumer-to-Consumer)
money transfers. In essence, the scenario could be implemented by porting the business
model of a company like FON to the mobile domain and combining it with a simple, C2C-
oriented payment partnership (such as a simplified PayPal solution).
With the adoption of this solution, two user groups would see benefits. On the other hand,
another stakeholder group would be unsatisfied with this, namely infrastructure-driven
connectivity providers (operators). They would lose at two ends: first, they could sell less
to travelers; second, they would see increased use by (re-selling) users, which at some
point might result in capacity issues (if users actually use their data packages in full).
What could operators do? While it is obvious that 3G providers have no means
(neither technical nor legal) to force end-users into using the 3G
connectivity they provide instead of other connectivity, ISPs could make tethering
(and thereby re-selling) illegal (customer-provider relationship; terms of use in the contract).
That alone might not disturb users too much, since at first re-selling could not be detected
easily. Hence, operators would want to develop technology to detect tethering/re-selling.
Once such technology is available, operators could go after users and – should the
problem be pressing enough to them – even initiate legal actions in exemplary cases in
order to frighten other users (just as copyright holders try to fight the sharing of
copyrighted material). Such exemplary cases might have an effect, a temporary effect, or
no significant effect. It is up to further investigation to anticipate how the tussle may
continue (i.e., evolve) from here.
Figure 39: Candidate Tussle Evolution for Connectivity Re-selling with ULOOP
Assuming a slightly different reaction from the operators' side, namely that they
would simply block connectivity for a while upon detecting tethering, the opponents
could try to obfuscate their actions. This may be supported by a full deployment of UCN
technology in terms of a multi-hop UCN formed by local users with several data
'outlets' (thus covering multiple customer-provider relationships): traffic could be
obfuscated more easily when distributed across multiple operators and via multiple users. An
operator's detection technology might or might not work in such a situation, but the
important point here is that the deployment of a more ULOOP-like technology gives
re-sellers the potential to hide better, due to ULOOP’s open architecture.
Again, further investigation is needed to anticipate how the tussle could evolve further
from this point. Maybe a regulator would also intervene at some point. One other thing that
might be of relevance here is a potential spillover: an operator's detection technology
could possibly be used for other purposes. In the simplest case, a DSL/cable
provider could use it to detect FON members – creating a tussle spillover from the
mobile sector into fixed-network connectivity re-selling.
Figure 39 visualizes the described tussle and its evolution with the addition of a scenario
that may lead to a stable and fair outcome. This scenario assumes that operators and
users combine their individual incentives and agree on a compromise. Users would be
allowed to re-sell connectivity to roaming end users as long as they use UCN technology
and thereby benefit operators through 3G offloading and coverage extension (lower
infrastructure costs for operators). Operators and end users may share the revenue
generated from roaming travelers, possibly combined with a mechanism that would allow
users to negotiate their share depending on their location: if a user is located in an area
with high infrastructure load and high demand from roaming travelers, that user may
achieve a higher revenue share for cooperating.
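The location-dependent negotiation just sketched could look as follows (a minimal illustrative model; the formula and all numbers are assumptions made for this sketch, not part of the UCN scenario's specification):

```python
def user_revenue_share(base_share, infra_load, roaming_demand):
    """Illustrative revenue-split rule (an assumption, not a specified
    mechanism): a user's share of roaming revenue grows where the
    operator's infrastructure is heavily loaded and roaming demand is
    high; load and demand are normalized to [0, 1]."""
    share = base_share * (1.0 + infra_load) * (1.0 + roaming_demand)
    return min(share, 1.0)  # a share can never exceed the whole revenue

# In a congested, tourist-heavy area the user negotiates a larger share
# than in a quiet one:
busy_area = user_revenue_share(0.2, infra_load=0.5, roaming_demand=0.5)
quiet_area = user_revenue_share(0.2, infra_load=0.0, roaming_demand=0.0)
```

The cap at 1.0 reflects that the operator would never hand over more than the full roaming revenue, however attractive the location.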
A.6 Detailed Tussle Analysis for C2POWER Technologies
The C2POWER project addresses energy efficiency in wireless networks by investigating
collaborative ways of reducing the overall energy consumption in ad-hoc networks. The
technology developed builds on the observation that mobile devices implement several
multi-standard interfaces to support heterogeneous networks. As devices usually have
short-range, low-power communication interfaces, such as WiFi and Bluetooth, as well as
long-haul, energy-hungry communication interfaces, e.g., 3G, it may be more energy
efficient to relay data via a path of low-power hops than via one long-haul transmission.
C2POWER exploits this observation to achieve energy savings in wireless ad-hoc
networks. Although energy-efficient hand-overs in wireless networks are investigated as
well, this tussle analysis focuses on the former approach.
The standard scenario for our considerations consists of an access network of wireless
(network) devices and at least one access point. It can be assumed that all wireless
devices implement the C2POWER technology, as wireless devices not doing so can be
disregarded. All wireless devices want to send data to at most one of the access points,
which then forwards that data to a content provider in the core network, and some wireless
devices are willing to relay data for others to access points. A wireless device that wants to
send data to an access point is termed source and a device willing to forward data is
termed relay.
In order to initiate collaboration, relays send bids to the sources indicating how much
compensation they want for forwarding the sources' data. This compensation is calculated
depending on the energy a relay has to invest to forward the data, i.e., the energy for
receiving the data from the source, processing it, and sending it to the access point. The
compensation can be repaid by the source either by forwarding data for the relay at a later
point in time or via a C2POWER-specific currency that makes forwarding efforts transitive.
Each source estimates the amount of energy it would cost to send the data directly to the
access point and then decides whether relaying is worthwhile, taking into account the
best/cheapest bid. The relay selection process may be repeated several times: a relay may
improve its bid if it was rejected, while the relay whose bid was accepted will raise its
price. In this way, offers converge. Although C2POWER handles such conflicting interests
well, and even establishes elaborate game-theoretic models, amongst others to formally
prove the convergence just mentioned, tussle analysis revealed tussles that are more or
less general for user-centric networks; these are discussed next.
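The repeated bidding process just described can be illustrated with a toy simulation (an illustrative sketch with made-up numbers and a simplified undercutting rule, not the C2POWER protocol or its game-theoretic model): rejected relays improve their bids as long as they stay at or above their own base cost, and the process stops when nobody can undercut the best bid any more.

```python
def relay_auction(base_costs, direct_cost, step=0.01):
    """Toy model of the repeated relay-bid process: losers undercut the
    current best bid, but never below their own base cost (a bid below
    base cost would turn an accepted bid into a loss)."""
    bids = [float(direct_cost)] * len(base_costs)  # nobody bids above direct cost
    while True:
        best = min(range(len(bids)), key=bids.__getitem__)
        undercut = False
        for i, cost in enumerate(base_costs):
            if i != best and bids[best] - step >= cost:
                bids[i] = bids[best] - step  # rejected relay improves its bid
                undercut = True
        if not undercut:
            return best, bids[best]

# Hypothetical base costs: the relay with the lowest base cost wins, at a
# price close to the second-lowest base cost.
winner, price = relay_auction([0.5, 0.8, 1.2], direct_cost=2.0)
```

Consistent with the convergence result the text refers to, the cheapest relay wins and the winning price settles near the runner-up's base cost, since the runner-up cannot profitably bid below its own cost.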
Note that relaying is not limited to one hop: energy-efficient routing via several hops is
investigated by the C2POWER project as well. However, the relay selection process (which
results in one-hop routing) and energy-efficient routing via several hops are considered
separately by C2POWER, and the latter is not addressed further in the subsequent
sections.
A.6.1 Encryption
As just outlined, C2POWER tries to achieve energy savings by collaboratively relaying
data. Therefore we see at least two stakeholders in the user group interacting, where one
offers a favor (relaying data) to the other, while the latter is liable to pay back the favor one
way (also relaying data) or another (compensation by a C2POWER internal currency).
In such a relaying agreement, the relaying user "sees" all traffic sent by the source user.
If the relaying users are malicious, they can extract information from this traffic, which
conflicts with the source user's interest in keeping their data private. To resolve this
tussle in their favor, source users have to use encryption, which constitutes a tussle
spillover, since another Internet functionality is deployed and users that were not affected
before become affected, as discussed next. The encryption can be implemented in two
ways, each with specific benefits and drawbacks. In the first approach, source users
deploy end-to-end encryption (with the other end being the content provider), whereas the
second approach tackles the problem by implementing encryption directly on the
C2POWER layer.
Both approaches are contrasted in Figure 40 and discussed in the following two sections.
Section A.6.1.3 discusses an additional tussle that may arise from a more elaborate
eavesdropping scenario.

Figure 40: Advantages and Drawbacks of the Two Approaches to Preclude
Eavesdropping.
A.6.1.1 End-to-End Encryption
Assume that the source user decides to use end-to-end encryption, e.g., IPsec or HTTPS,
to communicate with the content provider. The obvious advantage is that, unless the
encryption is broken, only the two communicating parties can decrypt the traffic. However,
since end-to-end encryption has to be supported by both end-points of the communication,
whether such encryption is applied does not depend exclusively on the source's choice:
the content provider must also support the encryption method, which may not always be
the case. It is important to note that the content provider had not previously been affected
by the use of C2POWER technology but is now pushed into using encryption methods by
the source user. Although the content provider may offer encryption methods, it may still
prefer unencrypted connections, as these reduce the load on its servers. Consequently, if
C2POWER is widely applied and relies on end-to-end encryption to achieve confidential
communication, content providers may see a significant and unwanted rise in incoming
encrypted traffic. A further drawback more related to the focus of
C2POWER is that encrypting and decrypting costs energy, which is exactly the factor that
C2POWER seeks to minimize.
A.6.1.2 Encryption on the C2POWER Layer
The alternative to end-to-end encryption is to implement encryption directly on the
C2POWER layer. This choice is obviously not only up to the source user but depends on
whether C2POWER decides to provide such mechanisms; therefore, a technology maker
is also involved, although in an enabling rather than an affected role. However, encryption
on the C2POWER layer implies that another wireless device in the access network must
decrypt the traffic before it leaves the access network. Hence, there has to be at least one
user in the access network, other than the source user, who sees the traffic in clear text.
We will refer to this user as the decrypting node. Contrary to the end-to-end encryption
approach, not only the source but also the decrypting node has to spend energy on
decryption, so one user gets affected in a way it was not before the tussle spillover.
However, lighter encryption algorithms can be chosen in order to minimize this factor.
What is more critical about having a decrypting node is that the respective user might
move: since only this wireless device is able to decrypt the traffic, all traffic encrypted by
the source has to be routed via the decrypting node; that is to say, alternative routes not
including the decrypting node are not possible. If, for any reason, the decrypting node
becomes unreachable, the encrypted traffic already sent becomes useless and a new
decrypting node has to be determined before the transmission can continue. Probably the
biggest drawback of this approach, however, is that it implies the existence of at least one
user trusted by the source user, as this user sees the source's traffic in clear text. If there
is no such trusted user, the source has to choose a "trustworthy" user node on the off
chance. Furthermore, since the traffic is encrypted only between the source and the
decrypting node, applying this approach only makes sense if there is more than one relay.
A tussle-aware approach would be for sources to be able to select the properties of the
path and destination during connection setup. Although problems would still exist when a
source finds itself in a completely unknown environment, this would give users as much
autonomy of decision as possible and thereby exclude unwanted automated relaying
decisions. The ability to decrypt should be a functionality of access points. Since these are
stationary, this would bear three advantages: first, they cannot move out of reach; second,
they have access to a power network and can therefore easily perform encryption tasks;
and third, reputation about them is easier to propagate, since they cannot actively switch
their environment. If access points supported these encryption capabilities, that is to say,
were actively integrated into the C2POWER technology, the tussle for encryption would be
resolved in a fair way, as only the source and the access point would be affected by the
additional encryption mechanism. Since the source demands higher privacy, it is justified
that it invests energy in encryption. Although the access point has no intrinsic interest in
the source's traffic being transmitted confidentially, it is not affected too much by the
arising encryption work, as it has access to a power network. However, it is not far-fetched
to reward the access point for its encryption efforts in some way.
A.6.1.3 Stationary Eavesdroppers
In the sections above, we assumed that the tussle for data privacy was fought exclusively
between users controlling mobile devices and, therefore, running on battery. In this
section, we point to the more problematic case where the malicious user is connected to a
power network. This is critical for confidential communication via C2POWER networks not
only because the eavesdropper can invest a more or less arbitrary amount of energy in
decrypting traffic, but also because it allows him to offer much better bids than those of
(honest) wireless devices.
In particular, scenarios can be imagined where a malicious user sets up a C2POWER
client in a public area only for the purpose of attracting as many sources as possible and
decrypting their traffic. If such attacks are considered likely, this tussle will quickly evolve
into the use of end-to-end encryption as a countermeasure by source users: as discussed
above, this encryption method does not require a trusted wireless device in the access
network, and the possibly heavy encryption involved will pose a serious problem for the
malicious user, even when connected to a power network. However, since the content
provider may not always offer suitable encryption methods, C2POWER should also
provide some way of encrypting traffic within the C2POWER network. In order to give
relays an incentive to participate in this encryption process, they should be rewarded, and
this reward should be paid by the user demanding the encryption. Although it is certainly
not in the interest of the source user to pay a higher price for the use of the network, this
solution is fair, as demanding extra efforts from the network justifies extra payment and
the nodes generating this extra effort get compensated.
Although end-to-end encryption, or C2POWER-internal encryption together with pricing,
should suffice in most cases to resolve the tussle for data privacy, a malicious user
connected to a power network will still cause another tussle, as outlined next. As
mentioned above, the relay selection process may be repeated several times: a relay may
improve its bid if it was rejected, while the relay whose bid was accepted will raise its
price. In this way, offers converge, so that always the relays with the best and second-best
base cost get chosen by the source. The base cost is defined as the energy a relay has to
invest to forward the data, i.e., the energy for receiving the data from the source,
processing it, and sending it to the access point. Of course, relays do not only request
compensation for the base cost but add a certain amount p in order to make a profit. If a
relay's bid is accepted, it will increase p in the next round to generate more profit, and if
the bid is rejected, it can try to get chosen in the next round by decreasing p. It is important
to note that no relay will send a bid below its base cost, i.e., p will always be non-negative,
as otherwise an accepted bid would generate a loss for the relay. As shown in [42],
through this process of decreasing and increasing bids, the offers converge such that
always the two relays with the lowest and second-lowest base cost are chosen by a
source. By this mechanism, C2POWER cleverly aligns the interests of different users: the
relaying users want as much compensation as possible for their relaying efforts, while the
source users do not want to be ripped off but take advantage of the possibly rich choice of
relays, who compete to make the best bid. Therefore, this process can be considered a
tussle that gets fairly resolved by the mechanism outlined above.
However, assume that a malicious user sets up a C2POWER node that is connected to a
power network and can therefore be arbitrarily generous with respect to its bids. The
consequence is that, if the malicious user increases his offers when accepted, e.g., for the
purpose of looking innocent (a node always making the best offer and not even trying to
increase its profit would be conspicuous and could therefore be rejected from the
network), the profit made by the non-malicious user with the lowest base cost is decreased
significantly, as it will not converge "upwards" towards the second-best base cost, but
"downwards" towards the unrealistically good offer of the malicious user. However, one
might argue that this is still not the worst case for the honest relays, as they at least have a
chance of seeing their bids accepted: if the malicious user were to keep its bids always
below the base cost of any honest relay, none of them would ever see a bid accepted.
To summarize: a malicious user connected to a power network has a high potential to
harm the relay auction market by flooding it with dumping bids, artificially decreasing the
prices paid by source nodes, while not even the source nodes profit, as their traffic will be
subject to eavesdropping attacks.
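The freeze-out effect can be made concrete with a short sketch (all numbers are hypothetical):

```python
# Hypothetical base costs (energy to receive, process and forward data)
# of honest, battery-powered relays:
honest_base_costs = [0.5, 0.8, 1.2]

# A relay on a power network has a negligible energy cost, so it can keep
# its bid below every honest base cost without ever making a loss:
malicious_bid = min(honest_base_costs) - 0.1

# An honest relay could only counter profitably if its base cost were at
# or below the malicious bid; here, none qualifies:
viable_counters = [c for c in honest_base_costs if c <= malicious_bid]
```

With `viable_counters` empty, no honest bid is ever accepted, which is exactly the dumping outcome described above.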
A.6.1.4 Tussle Analysis
Based on the argumentation above, a tussle evolution can be outlined, which is also
illustrated in Figure 41. The stakeholders in this tussle evolution are mainly users, where it
is important to discriminate between malicious and normal users. While the latter are
interested in having their packets forwarded by the network while investing as little energy
as possible, or in forwarding packets of other users for the sake of profit, the former will
also offer forwarding services, but with the aim of eavesdropping on the forwarded traffic.
As it is obviously not in the interest of normal users that malicious users get access to the
information they send, a tussle arises that has to be resolved in the interest of the normal
users, whom we will refer to simply as users in the following.

Figure 41: Candidate Tussle Evolution for Traffic Forwarding with C2POWER Technology
As illustrated in Figure 41, one possibility for users to prevent eavesdropping is to deploy
multipath TCP technologies and split the traffic among multiple relay users. While this
would resolve the tussle in the interest of the users, multiple relays are necessary, so this
might prove to be too expensive a solution.
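The splitting idea can be sketched minimally as a round-robin byte split (an illustrative assumption with made-up data, not a mechanism specified by C2POWER or multipath TCP; a real scheme would combine splitting with coding or encryption, since each relay still sees a fraction of the stream):

```python
def split_across_relays(data: bytes, n_relays: int):
    """Round-robin split: relay i receives every n-th byte, so no single
    relay ever sees the full stream (illustrative only; interleaved bytes
    still leak partial content without additional coding/encryption)."""
    shares = [bytearray() for _ in range(n_relays)]
    for i, byte in enumerate(data):
        shares[i % n_relays].append(byte)
    return [bytes(s) for s in shares]

# Hypothetical payload split across three relays; the receiver can
# reassemble the original stream by interleaving the shares again.
shares = split_across_relays(b"confidential payload", 3)
```

The cost noted in the text is visible here too: every share needs its own relay, so the approach multiplies the number of paid relaying agreements.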
Two other approaches to be deployed by users are the use of end-to-end encryption and
encryption within the C2POWER access network. Both approaches are discussed in
Section A.6.1.1 and Section A.6.1.2, respectively, and are illustrated by the two circles
tagged with a question mark on the pink bar, which represents the security functionality, in
Figure 41. Note that their spillover into the transmission functionality is also illustrated, by
the two red dashed arrows.
Note that if the approach of encryption within the C2POWER access network is taken, it
will result in a spillover from the security functionality to the mobility functionality: as
certain devices within the access network have to be traversed by the traffic in order for it
to be decrypted, the mobility of a user is reduced, since traffic becomes useless if these
decrypting nodes become unreachable. This spillover is represented by the two circles in
the yellow bar in Figure 41.
A.6.2 Preliminary Overhead
Since the purpose of the tussle analyses is to anticipate the adoption of technology, we
now point to a critical mass to be reached by C2POWER technology, although this is not
directly related to a tussle.
C2POWER deploys network topologies to achieve overall energy savings and distribute
these savings in a fair manner. Conclusive theory is presented on how to best match
sources and relays and to support the convergence of pricing for relaying efforts. It is also
shown how to distribute the savings fairly among all participants. However, an important
question to be answered regards the overhead of the negotiations preceding the actual
relaying activities. Although it is shown that cooperation allows energy savings to be
achieved and distributed fairly, the energy that has to be spent beforehand to match
collaborating sources and relays may prove to be a sticking point. That is to say, since a
wireless device has to invest energy to search for a suitable relay or to offer itself as one,
the chances of these initial activities yielding a successful collaboration need to be
reasonably high, as otherwise the risk of spending energy in vain may outweigh the
potential savings. Put mathematically, a user will be willing to participate in the C2POWER
network only if the product of the success chance and the expected subsequent energy
saving is greater than the energy required for negotiations. Because the success chance
also depends on the number of users running C2POWER technology, and this number
will be low when C2POWER is introduced to the market, it is important to determine the
critical mass necessary to overcome the risk of unsuccessful initial energy spending.
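The break-even condition just stated can be written down directly (the numbers are hypothetical):

```python
def worth_participating(p_success, expected_saving, negotiation_cost):
    """Break-even rule from the text: a device should participate only if
    the expected energy saving (success chance times saving) outweighs
    the energy spent up front on negotiations."""
    return p_success * expected_saving > negotiation_cost

# With few C2POWER users nearby, the chance of finding a partner is low
# and negotiation energy is wasted on average:
early_market = worth_participating(p_success=0.05, expected_saving=10.0, negotiation_cost=1.0)
# Once a critical mass of users is reached, the same device benefits:
critical_mass = worth_participating(p_success=0.30, expected_saving=10.0, negotiation_cost=1.0)
```

Solving the inequality for the success chance gives the critical mass condition: participation pays off once the chance of a successful match exceeds the ratio of negotiation cost to expected saving.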
A.6.3 Connectivity Re-selling
In Section A.5.3, a scenario is discussed where a traveler abroad, and thus in a roaming
situation, deploys the ULOOP technology to obtain Internet access via a local ULOOP
user. It is a well-known fact that data roaming tariffs abroad can be (and usually are) high,
so cost savings can be achieved when the Internet access of a local user is used instead
of roaming. These savings may be split between the traveler and the local user. As this
"business model" is enabled by relaying data, it also applies to the C2POWER technology.
Just as in the case of ULOOP, both user groups would see
benefits. On the other hand, another stakeholder group would be dissatisfied with this,
namely infrastructure-driven connectivity providers (operators). They would lose on two
fronts: first, they could sell less to travelers; second, they would see increased use by
(re-selling) users, which at some point might result in capacity issues (if users actually use
their data allowances in full). However, it is important to stress that C2POWER is also
researching mechanisms to enable transparent relaying, i.e., if data of a user is relayed by
another, it looks to the access point as if this data were directly transmitted by the former.
Depending on whether and how C2POWER is able to implement such mechanisms, this
would render the above-mentioned business model obsolete, as a traveler's traffic could
no longer be fed into the backbone looking like a local's traffic. Since the interests of the
ISPs would still remain, the tussle is expected to continue. For example, ISPs could offer
better rates for roaming traffic that comes in not via 3G but via some local access point, so
the tussle could be resolved in the interest of both stakeholder types. Unfortunately, at the
time this tussle analysis was conducted, C2POWER's progress on such mechanisms was
still at too early a stage to allow for reasonable inclusion in this tussle analysis.


A.7 Detailed Tussle Analysis for OPTIMIS Technologies
In this section, we analyse tussles identified initially in [4] which are related to specific
functionalities/roles described within the OPTIMIS whitepaper [43].
The background and motivation for OPTIMIS comes from movements in the cloud
computing industry. Private clouds have moved very much to the foreground throughout
2011, and increasingly hybrid strategies will be defining the supply side agenda as
organizations seek to make the best use of existing resources and domain skills together
with the new opportunities that Cloud brings.
While the economic model remains the number one driver for building a business case to
get into the Cloud, OPTIMIS considers that operational benefits, flexibility, agility and
quicker time to market are the key drivers of on-going adoption and value. As cloud
deployments are being built out, change management, learning required and complexity of
integration with existing systems have become key barriers for enterprise end-users,
alongside security.
As a consequence of recent outages and performance problems [56], the need for multi-
clouds has become more inevitable as companies seek to insure themselves against
failure. At the same time, cloud offerings that enable organizations to extend their firewalls
and networks directly into a hosted cloud are quickly coming to market. Altogether, these
increase the requirement for 'Best Execution Venues'. End-user organizations are
increasingly seeking ways to automate the delivery of workloads and applications to the
most suitable cloud environments, be it internal or external, and whether that is determined
by performance, risk, location and compliance or other SLA parameters.
OPTIMIS aims at optimizing IaaS cloud services by producing an architectural framework
and a development toolkit. The optimization covers the full cloud service lifecycle (service
construction, cloud deployment and operation). OPTIMIS gives service providers the
capability to easily orchestrate cloud services customized for the unique needs of their
applications and make intelligent deployment decisions based on their preference
regarding trust, risk, eco-efficiency and cost (TREC), as well as data protection
requirements. It also gives service providers the choice of developing once and deploying
services across all types of cloud environments – private, hybrid, federated or multi-clouds.
OPTIMIS simplifies the management of infrastructures by automating most processes
while retaining control over the decision-making. The various management features of the
OPTIMIS toolkit make infrastructures adaptable, reliable and scalable. These, altogether,
lead to an efficient and optimized use of resources.
By using the OPTIMIS toolkit, organizations can easily provision on multi-cloud and
federated cloud infrastructures. This allows IT departments to considerably improve the
use of resources from multiple providers in a transparent, interoperable, and architecture-
independent fashion.
A.7.1 OPTIMIS Features
OPTIMIS is a software toolkit that Service Providers (SPs), Infrastructure Providers (IPs)
and Corporate IT Departments (IT) deploy in their datacentres. It is a complement to cloud

[56] See for example: http://www.infoworld.com/d/cloud-computing/the-10-worst-cloud-outages-and-what-we-can-learn-them-902
management, orchestration and application lifecycle management platforms. It gives
service providers the capability to easily orchestrate cloud services customized for the
unique needs of their applications and make intelligent deployment decisions based on
their preference regarding trust, risk, eco-efficiency and cost (TREC). It gives them the
choice of developing once and deploying services across all types of cloud environments –
private, hybrid, federated or multi-clouds.
General Features
• TREC-powered Optimization and Brokerage for service deployment and runtime
and infrastructure selection.
• Build and Run Optimum Clouds - private, hybrid, federated or multi-clouds.
• Privacy by Design to protect personal data and allow for compliance.
A.7.2 Use-Case: User Controlled QoS Selection
The OPTIMIS project centres around a problem found in the general broker model in cloud
infrastructure provision: in a typical broker, the selection algorithms consider just price or
availability. The cloud user, however, has other interests beyond mere price and
availability. The principal ones are trust, risk and eco-efficiency; together with cost, they
are referred to as the TREC parameters.
Eco-efficiency
The user may have policies regarding environmental targets (carbon emissions), or this
may be part of company ethos (e.g., an environmental agency may wish to use only
'green' clouds) or marketing (e.g., an airline offsets high-polluting activities such as fuel
consumption by always sourcing green where possible for all other services). There is a
series of parameters, specific to the service provided, including external certification level
(LEED) and the percentage of renewable fuels. Monitoring ensures that some of these
parameters are met; in other cases (such as certification), OPTIMIS works on the
assumption of good faith, trusting the provider to supply truthful information.
Trust
Trust in OPTIMIS refers to the fulfilment of QoS and risk management, rather than to the
security strength of the underlying infrastructure.
Does the provider really offer the service they say they will? Do they have a reputation for
fulfilment or non-compliance? This is assessed through statistics. OPTIMIS is a broker, not
an aggregator, but it keeps statistics on SLA compliance for acquired clients and on
providers' reputation according to those same clients.
At present, OPTIMIS works on the assumption of an ideal scenario where all providers are
transparent, truthful and upfront about the information.
Risk
What is the likelihood of the system failing?
IaaS (infrastructure-as-a-service) providers are each associated certain values for these
parameters (although these not always be exposed to the user or broker).
We can model the situation thus:

Figure 42: The OPTIMIS Cloud Broker Use Scenario
In this schematic, we see that the user accesses a service from the broker, which in turn
can select the infrastructure service from multiple IaaS providers, each of which is
differentiated according to parameters including cost, availability, energy efficiency and so
on. Some of these parameters are short-term (availability is constantly in flux), whilst
others (reputation) are stable over short periods. A short-sighted broker, which is the initial
starting point for the project and the current status quo in advanced brokering
experiments, assumes that the user only cares about availability and price. In other words,
it treats the service as a pure commodity or utility. This is perhaps a legacy from the days
of utility computing, when analogies with gas, water and electricity were commonplace
(and indeed the inspiration behind the technology). However, this assumption is
erroneous, because users demand more from IaaS providers than cost and availability.
In OPTIMIS, the TREC values are collected and the broker is capable of comparing like-
for-like between the sites. OPTIMIS has consequently developed an advanced broker
capable of using user-input preferences to make an intelligent decision as to which
provider to select at any given time. In the project, the OPTIMIS prototype collects data
assuming a high level of trust in the providers. In a commercial context, this may lead to
tussle spillover, as the level of trust and monitoring needs to be factored into the data
analysis.
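The like-for-like comparison can be sketched as a weighted score over the TREC parameters (the attribute names, scales, and the linear weighting are illustrative assumptions, not the OPTIMIS decision algorithm):

```python
def trec_score(provider, weights):
    """Linear TREC scoring sketch: higher trust and eco-efficiency count
    positively, higher risk and cost negatively. All values are assumed
    normalized to [0, 1]."""
    return (weights["trust"] * provider["trust"]
            + weights["eco"] * provider["eco"]
            - weights["risk"] * provider["risk"]
            - weights["cost"] * provider["cost"])

# Two hypothetical providers with made-up TREC attributes:
providers = {
    "cheap_but_risky":   {"trust": 0.4, "risk": 0.7, "eco": 0.3, "cost": 0.2},
    "green_and_trusted": {"trust": 0.9, "risk": 0.2, "eco": 0.8, "cost": 0.6},
}
# A user who values trust and eco-efficiency ranks providers differently
# from a price-only user (the short-sighted broker's assumption):
eco_user   = {"trust": 1.0, "risk": 1.0, "eco": 1.0, "cost": 0.2}
price_user = {"trust": 0.0, "risk": 0.0, "eco": 0.0, "cost": 1.0}
best_for_eco   = max(providers, key=lambda p: trec_score(providers[p], eco_user))
best_for_price = max(providers, key=lambda p: trec_score(providers[p], price_user))
```

The divergence between the two rankings is exactly the gap the short-sighted, price-and-availability-only broker cannot express.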
Hence, the broker balances cost and availability with eco-efficiency, trust and risk. Altering
the tussle equilibrium (by empowering the users) led to a continuation of the tussle
through over-compensation, as explained below:
1. We start assuming that the choices the broker makes offer a good harmony
between the interests of the user and the broker. [Tussle in equilibrium]
2. Over time, however, as user requirements become more advanced, and the broker
develops its business relationships with providers, this becomes biased towards the
broker: the user lacks the power to specify its requirements and the control over its
eventual provider. [Tussle in favour of broker]
3. With the introduction of OPTIMIS, the user gains the upper hand. This is
satisfactory for the user, but the broker has a business relationship with the IaaS
providers. Joining the OPTIMIS system, exposing hitherto confidential information
(eco-efficiency), and permitting its reputation and risk to be assessed and
disseminated by a third party is both a risk and an investment for which the IaaS
provider expects to be compensated (in terms of increased revenues). Taking the
choice of IaaS provider away from the broker removes its ability to manage
these business relationships. (For instance, we could imagine that a guaranteed
percentage of users is part of the joining incentive). Hence this new situation is
unfavourable to the broker and may impede uptake. [Tussle in favour of customer]
4. As a result of this analysis, OPTIMIS is investigating pricing models which give the
broker some manoeuvrability to fulfil user requirements whilst managing the
relationships with providers: for instance, financial incentives to use certain
providers, and a disconnect between the price the broker pays the provider and the
price it charges the user. Part of this is done by using thresholds rather than
absolute values which the client agrees to, i.e., the client sets a price range and the
broker has a margin to manoeuvre within that range. [Tussle in equilibrium]
5. However, at present this is still ongoing work, and there is a fear that it may lead to
new tussles as the broker distorts the market price to its own ends. In the absence of
effective competition, one aspect is possible price inflation towards the upper end
of users' ranges. Secondly, users may see that they are not receiving the best price
available. Given that the project vision was founded on giving the user
transparency of the IaaS and influence over the broker, if IaaS pricing (one of the
key criteria of IaaS selection) is no longer transparent (to avoid this effect), new
tussles may arise as other attributes become more dominant, which may in turn
again lead to price inflation and user dissatisfaction. Nonetheless, this might not be
considered 'unfair', as the broker is acting under significant uncertainty and needs to
be compensated. The market may accept the lack of price transparency as part of
the cost of managing that uncertainty and shielding both users and providers from
its effects (see footnote 57). [New tussles leading to distortion of equilibrium]
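The threshold mechanism in point 4 can be sketched in a few lines of Python (a minimal illustration; the provider names, the flat 10% margin and the selection rule are assumptions, not OPTIMIS interfaces): among providers whose resulting user price falls inside the client's agreed range, the broker picks the one maximising its own margin.

```python
def choose_provider(offers, price_range, broker_margin=0.10):
    """Pick the provider that maximises the broker's margin while keeping the
    user-facing price (provider price plus margin) inside the agreed range."""
    low, high = price_range
    feasible = []
    for name, provider_price in offers.items():
        user_price = provider_price * (1 + broker_margin)
        if low <= user_price <= high:
            feasible.append((user_price - provider_price, user_price, name))
    if not feasible:
        return None  # no provider fits the client's range
    _margin, user_price, name = max(feasible)
    return name, round(user_price, 2)

offers = {"provider-a": 8.0, "provider-b": 9.5, "provider-c": 12.0}
print(choose_provider(offers, price_range=(8.0, 11.0)))  # -> ('provider-b', 10.45)
```

A provider whose marked-up price exceeds the range is simply excluded, which is why widening the client's range enlarges the broker's room to manoeuvre.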
Hence, following the above considerations, we can represent the tussle produced in the
situation of a simple broker and show how OPTIMIS, in fixing one issue, leads to a situation
which requires either increased competition between brokers (perhaps through
regulation) or a more complex technology solution to restore a happy medium.
At the heart of the relationships between broker and IaaS (referred to in bullet point three,
above) is a tussle between these actors. The broker wants transparency, low switching
costs and the ability to pass on good service (defined in this context by good TREC
parameters (see footnote 58), which in turn translate to cheap, reliable, energy-efficient and secure IaaS).
The IaaS wants opacity, vendor lock-in and high margins/profitability. Effectively the deal
the OPTIMIS broker (or indeed most brokers) offers is to increase revenues for the
provider in return for some control or transparency from the provider. Here the offer is that
through providing low switching costs (provided through APIs to the OPTIMIS broker) and
transparency on TREC parameters, OPTIMIS will allow the provider to earn revenues
through the OPTIMIS sales channel (based on the premise that OPTIMIS can act as a
high volume sales channel as an enabler of hybrid and brokered cloud provision). The
switching costs to non-OPTIMIS-enabled IaaS (or alternative technology) remain high.
This means there is still some lock-in, but to a group of OPTIMIS-affiliated providers rather
than to a single provider.
There is an equilibrium between what is in the interest of the IaaS and what is in the
interest of the broker. The IaaS does not want to release existing users, which it has
invested in capturing and which should now be providing return business. Likewise, it
does not want to cannibalise its existing market. The broker would, in contrast, be quite
happy to reset the market status quo, liberating all "locked-in" users and taking 100% of
the cloud market.

57 It is important that the user can 1) influence, and 2) verify the decisions the broker has made. Provided these
constraints are met, there may be some tolerance of the granularity of the TREC details given (i.e., some sacrificing of
transparency). If these conditions are not met, there is much greater potential for the tussle spillover described.
58 TREC: Trust, Reputation, Eco-efficiency and Cost; see above.

Figure 43: The OPTIMIS Tussle over IaaS Provider Selection
Thus, as part of developing the offer to the IaaS providers, both parties must make
concessions in order to establish a business relationship profitable to both. This
might, for example, include guaranteeing a certain level of usage, even when this usage is
not the most competitive (for instance, the broker might accept a lower profit margin on the
first tranche of provision, ensuring that the best price available to the user, here the IaaS
price plus the broker margin, is lower for one provider than for another). By shifting the
balance of power to the user, the broker loses this ability; this is the consideration of point 3.
In point 4 we consider that a pricing model can restore the balance between broker and
user, and implicit in the broker's position is its ability to provide a mutually beneficial
trading situation to the providers. Naturally, although this analysis has treated IaaS
providers as static parties in this tussle, there is a relationship between IaaS providers and the
broker, and we may see effects such as less reputable or secure providers having to drop
their prices or become more energy efficient to ensure differentiation.
In point 5 we consider that a pricing model alone may not be sufficient, as market distortion
leads to new tussles. An alternative solution is for the broker to remove itself
from interfering in the market price, instead throttling back the bargaining power of the users.
This could be a tiered system, with premium users having more flexibility, but at a cost. In
this model the broker could charge a subscription to both parties, which is not atypical in
two-sided markets. However, as commented above, the market's insistence on
transparency may not be as strong as initially assumed, and there may be acceptance of
the broker's need not to disclose some data.

A.8 Detailed Tussle Analysis for BonFIRE Technologies
BonFIRE is developing a multi-site cloud facility for experimentation and testing of cloud
technologies. The facility aims to provide services that allow RTD teams to study the
cross-cutting effects of clouds and networks. The project is funded by the European
Commission as part of the Future Internet Research and Experimentation (FIRE) Unit.
Much has been written about the cloud computing model enabling on-demand networked
access to a shared pool of configurable computing resources that can be rapidly
provisioned, elastically scaled and released with minimal management effort or service
provider interaction. Cloud consumers utilise and pay for what they need, require no
upfront capital investment and benefit from reduced costs due to the efficiency gains of
providers. This all sounds attractive, but helping businesses make the transition to cloud
computing models involves significant technical, operational and legal challenges. Cloud is a
disruptive technology: it changes the way applications are developed and operated.
BonFIRE targets the RTD phase of the technology lifecycle, where developers experiment
with and test technology, investigating new ideas and performing verification and
validation of technology prior to production deployments.
Experimentation and testing of distributed systems is a complex endeavour. Computer
systems are made up of many interacting components whose behaviours exhibit
significant degrees of uncertainty. Predicting the behaviour of even the most basic
computer programme running on a single processor machine is a hard task considering
the interplay between processor architecture, memory, cache, etc. Layer on top of this
huge bodies of software providing middleware and applications, and then deploy on
infrastructure across distributed locations under different domains of control and you begin
to understand the challenge. In both scientific investigation and software engineering,
methodologies have been developed to understand and validate the behaviour of
systems.
For scientific investigations, such as those undertaken by most researchers, the objective
is to discover new knowledge by studying the relationship between independent and
dependent variables. Experimenters define a hypothesis in terms of these variables, along
with a null hypothesis (what is the chance of the effect occurring anyway?), and then execute
tests to try to accumulate enough evidence to reject the null hypothesis. Scientific investigations fall
into two general classes, experimental studies and observational studies, distinguished
by the level of control the experimenter can exert over the independent variables. In an
experimental study, the design assumes that the experimenter can manipulate factors or
circumstances to test their effect on other phenomena; there is precise control over the independent
variables and the ability to minimise any extraneous effects. In an observational study,
there are still independent and dependent variables, but there is minimal control over the
independent variables and a greater chance of interference; for observational studies,
much of the design effort goes into ensuring that the right data is selected. In both types of scientific
investigation the experimenter must eliminate systematic variation, reduce experimental
error and test for significance of results. Techniques such as randomization (removing bias
and other sources of extraneous variation), replication (repetition of the study by others), and
local control (bringing all extraneous sources of variation under control to increase the
efficiency of an experimental design by decreasing the experimental error) are all
principles that allow for a valid test of significance.
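These principles can be made concrete with a small randomization test (an illustrative sketch with made-up measurements, not BonFIRE code): sample labels are repeatedly reshuffled to estimate how often the observed difference in means would arise under the null hypothesis.

```python
import random

def permutation_test(control, treatment, n_permutations=10_000, seed=7):
    """Randomization test: estimate how often a random relabelling of the
    pooled samples yields a mean difference at least as large as observed."""
    rng = random.Random(seed)  # fixed seed for a reproducible estimate

    def mean(xs):
        return sum(xs) / len(xs)

    observed = abs(mean(treatment) - mean(control))
    pooled = list(control) + list(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly reassign the group labels
        diff = abs(mean(pooled[len(control):]) - mean(pooled[:len(control)]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations  # p-value under the null hypothesis

control = [10.1, 9.8, 10.3, 9.9, 10.0]      # illustrative baseline runs
treatment = [11.2, 11.0, 10.8, 11.5, 11.1]  # illustrative modified runs
p_value = permutation_test(control, treatment)
# a small p-value is evidence against the null hypothesis
```

Here every treatment run exceeds every control run, so very few relabellings reproduce the observed gap and the estimated p-value is well below the usual 0.05 threshold.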

Software testing is a verification technique; it is about finding software failures. According
to IEEE standard terminology, 'An unsatisfactory program execution is a "failure," pointing
to a "fault" in the program, itself the result of a "mistake" in the programmer's thinking. The
informal term "bug" can refer to any of these phenomena.' A software tester cannot
exhaustively test an entire system, but must adopt a strategy that ensures acceptable test
coverage of various kinds (e.g., instruction, branch or path coverage) and minimises the
time needed to uncover faults (the number of faults uncovered as a function of time).
Software testing allows software engineers to acquire evidence that can contribute
towards estimates of software quality. Testing is at the heart of both scientific investigation
and software engineering. Each requires the definition of a system under test
(software/experiment), instrumenting that system, and controlling sources of systematic
and random errors. These requirements are the drivers for the key architectural principles
of observability and control adopted by BonFIRE.
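By way of illustration, the toy policy function below (hypothetical code, not from BonFIRE) has three outcome branches, and three test cases, one per branch, achieve the full branch coverage described above.

```python
def retry_policy(attempt, max_attempts, transient):
    """Decide whether a failed operation should be retried."""
    if attempt >= max_attempts:
        return False          # branch 1: attempts exhausted
    if not transient:
        return False          # branch 2: permanent fault, retrying is pointless
    return True               # branch 3: transient fault, retry

# One test case per branch gives full branch coverage of this function.
assert retry_policy(3, 3, True) is False   # exhausted attempts
assert retry_policy(1, 3, False) is False  # permanent fault
assert retry_policy(1, 3, True) is True    # transient fault
```

Note that full coverage only shows these branches were exercised; it is evidence towards quality, not proof of absence of faults.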
A.8.1 Cloud Functionalities and Stakeholders
BonFIRE can largely be considered an Infrastructure as a Service (IaaS) offering with the
associated functionality described earlier in Section 3.2. The generalised BonFIRE
ecosystem, with detailed stakeholder descriptions and the benefits stakeholders gain from
participation, is described below:


Figure 44: Main Stakeholders of the BonFIRE Project
• Experimenter (open call and unfunded): an individual, team, organisation or
research project undertaking research and development of novel ICT products and
services, mainly from the Internet of Services, Cloud and Networking communities
(a member of the 'Technology Maker' stakeholder role). The motivation, desired
impact and maturity of experiments can be diverse, given drivers from both
academic and commercial objectives. However, experiments must have clearly
defined beneficiaries who can exploit results, whether through new product features,
service configurations, or dissemination to the body of human knowledge. The
primary benefits are that BonFIRE is offered by leading service providers and cloud
technologists who work in consultation with users of the facility to ensure strong
experiment hypotheses, design and execution; experimenters have access to multi-
site and heterogeneous cloud resources (e.g., compute, storage and networking)
with advanced low-level control and monitoring APIs and the ability to scale beyond
the resources immediately available to them; and the experiment process is
supported by tools that ensure results are verifiable and reproducible, with
experiments that can be transitioned between different environments.
• Testbed Provider: an organisation offering testbed services to Experimenters; following the stakeholder taxonomy of Section 3.3, an instance of the 'IaaS
Provider' role. There are principally two classes of testbed providers: 1) publicly
funded academic organisations offering services to academic communities and
some commercial HPC services, and 2) commercial testbed providers offering a
services business to the market. The primary benefits for an academic service
provider are research associated with testbed assets, revenue generation, and
esteem. The primary benefit for commercial testbed providers is revenue
generation. The testbed providers are key stakeholders within the ecosystem.
• Broker: an organisation, a member of the 'Information Provider' role, offering
market, community and added-value knowledge services to an ecosystem of
Experimenters and Testbed Providers. The purpose of the broker depends on the
nature of the community or ecosystem it aims to support. Brokers can be
effective in open communities as a central administration point for policies aimed at
organising and optimising for shared community interests. In this case, the Broker is
typically operated by a trusted member of the community who performs services on
behalf of the community. The benefit is likely to be some funding for these
operations, but mostly that their community is supported. Brokers can also play
an intermediary role in emerging ecosystems as facilitators of business activity.
Such stakeholders are more often labelled Platforms rather than brokers in today's
language, but the roles are similar. The benefit is revenue from stimulating and
supporting activity between ecosystem stakeholders and taking a cut of their
revenue-generating activities.
• Testbed Developer: an individual, team or organisation, a member of the
'Technology Maker' role, that develops and distributes software supporting access,
management and monitoring of testbed resources, along with other tools supporting
activities within the experiment lifecycle. Testbed Developers exist within open
source communities or commercial software providers. The benefits for open source
developers include aspects such as personal need (e.g., requiring a better way of
doing existing tasks), career advancement, complementary products and services
(e.g., professional services), and a model for getting free development and support,
although it should be noted that commercial business models can also be built on
open source platforms. The benefit for commercial software providers is primarily
revenue-generating activity through software licensing and support services.
• Hardware and software suppliers: third-party organisations that, as another
member of the 'Technology Maker' role, supply resources (e.g., products and
services) to testbed providers, which are then used to offer testbed services. The
benefit for a supplier is primarily revenue-generating activity and sometimes
reputation in market segments.
A.8.2 Tussle Analysis
The combination of cloud computing, networks, experimentation and testing raises
significant socio-economic challenges. BonFIRE is completing a socio-economic
assessment of experiments in Q4 2012, so this discussion can only touch on the
issues and cannot analyse all of the different tussles that could occur in relation to
stakeholders and cloud functionalities.
BonFIRE, and cloud computing in general, has the potential to suffer from the principal-agent
(PA) problem. When one party (the principal) delegates a task to another party (the agent),
a principal-agent relationship is established. In BonFIRE, a principal-agent
relationship exists between the experimenter (principal) and the broker (agent). There is
also a potential PA relationship between the broker and the testbed providers, although in
BonFIRE the testbed providers delegate authority for decisions about authentication,
authorisation and capacity allocation to the broker, and therefore in many scenarios the
combination of BonFIRE testbed providers and broker can be considered a single agent.
Incentive problems may arise in PA relationships when any of three factors is present:
conflicting objectives, information asymmetry, and difficulty of monitoring the agent.
The juxtaposition of these economic issues with the requirements of experimentation
(control and observability) raises some interesting challenges.
The following figure illustrates the evolution and relationship of two candidate tussles in
the context of the BonFIRE project: the 'Resource Allocation Tussle' and the 'Information
Disclosure Tussle'.
We start by describing the Resource Allocation Tussle, which is related to the Cloud QoS
functionality. The major stakeholders are the BonFIRE broker and two types of
experimenters: those who have advanced requirements (e.g., they develop critical
systems whose performance must be extensively evaluated under several
conditions) and those who are mostly attracted by the low cost of the BonFIRE system. The
BonFIRE broker is assumed to be neutral in this particular tussle.
The first step is the introduction of the BonFIRE system, resulting in a new, unstable state (blue
colour). The reason is that, unlike commercial cloud providers, BonFIRE has a finite
capacity (see footnote 59) and experimenters will probably end up competing for access to sufficient cloud
resources.
In the second iteration of the tussle analysis methodology, we chose to study three
scenarios differing in the following aspects:
• The pricing scheme (flat vs. tiered).
• The type of experimenters' requests supported (no preferences at all, simple job
descriptions, or advanced job descriptions).
In the first case (2a) we assume that Experimenters simply send service requests and the
Broker allocates resources on a best-effort basis (e.g., to the least utilised physical
machine). Furthermore, the Broker charges all Experimenters a flat fee, regardless of the
resources consumed over time. The problem here is that neither type of Experimenter would
have any incentive to restrict the number of requests, leading to an effective denial of
service for others wanting to use the service. This is a classic
"tragedy of the commons" case, because the users are not accountable for their choices.
Thus, this approach is expected to lead to a biased outcome that is more favourable to
Experimenters of non-critical systems.
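The incentive gap can be caricatured with a short sketch (the demand figures and the over-provisioning factor are purely illustrative assumptions, not BonFIRE data): with a zero marginal price each experimenter rationally over-requests, while a per-unit price pulls requests back towards true need.

```python
def requested_units(true_need, price_per_unit, value_per_unit=1.0):
    """Units a self-interested experimenter requests under a given tariff."""
    if price_per_unit == 0:
        return true_need * 3  # flat fee: over-provisioning costs nothing extra
    if price_per_unit < value_per_unit:
        return true_need      # per-unit price below value: request actual need
    return 0                  # priced out entirely

needs = [10, 20, 30]          # illustrative true requirements of three experimenters
flat_demand = sum(requested_units(n, price_per_unit=0) for n in needs)
tiered_demand = sum(requested_units(n, price_per_unit=0.5) for n in needs)
# flat_demand (180 units) swamps a finite facility; tiered_demand (60) matches real need
```

The factor of three is arbitrary; the point is only that with no marginal cost the aggregate request grows without any connection to the facility's finite capacity.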

59 Although commercial providers also have a finite capacity, the scale of most providers means that in practice
customers can consume as much as they want and elastically scale based on dynamic demand. BonFIRE cannot
support this almost infinite scaling due to limited infrastructure capacity.


Figure 45: Candidate Tussle Analysis Evolution for BonFIRE
In the second case (2b) we assume that Experimenters estimate what they need and the
Broker, trusting that requests are estimated truthfully and correctly, allocates resources on
a best-effort basis (e.g., to the least utilised physical machine). As in the previous case,
the Broker charges all Experimenters a flat fee, regardless of the resources consumed over
time. Experimenters tend to overestimate their requirements (there is no point in having
an experiment fail due to insufficient resources), which again results in an effective denial
of service. Thus, this approach too is expected to lead to a biased outcome that is more
favourable to Experimenters of non-critical systems.
In the third case (2c) we study the expected outcome of the Broker charging based on a
tiered pricing scheme and allowing advanced experiments to be run. We expect that such
an approach would lead to a fair outcome, since those Experimenters asking, for example, for a
highly controlled environment would be willing to pay a higher price. However, Experimenters
would need access to some monitoring information in order to verify that their
requirements have been met and to be able to identify what went wrong with a particular
experiment.
If no monitoring information, which is part of the Security functionality, is available, then the
adoption of the BonFIRE system becomes harder for the premium type of
Experimenters. In terms of the SESERV tussle analysis methodology, this is a spillover
from the Security functionality to the QoS functionality.
In the following we examine why this spillover is likely to happen by studying the
'Information Disclosure Tussle'. Again, the first step is the introduction of the BonFIRE
system, assuming that no monitoring information is made available to Experimenters. This
new outcome is considered biased because it is favourable only to the Broker and the
IaaS host, who fear that if an experimenter could observe the physical machine, they could
also make sense of the broker's business model and use this information against them (for
example, other Brokers might pretend to be Experimenters). Actual experimenters, on the
other hand, would like the Broker to maximise disclosure of infrastructure monitoring
(including virtual and physical behaviours) in order to observe experiment output.
Let us consider how Experimenters could react in order to reach a more favourable
outcome. A possible reaction is the execution of custom monitoring programs in the virtual
machine (in parallel with the experiment) and collection of their results (case 2a).
However, this approach is expected to be useful only for a limited set of performance
indicators and, for those that can be monitored, the information would be just a rough
estimate. This information may not give enough guidance on what went wrong with the
experiment or provide evidence of SLA term violation. Thus the new outcome would be
better than the previous one, but still not stable.
Another option that BonFIRE could examine is to develop the necessary procedures and
interfaces that would allow fine-grained monitoring information to become available to
Experimenters (case 2b). Given the above analysis of stakeholders' interests, this outcome would
still be biased but, in contrast to the previous ones, it would be favourable to Experimenters
only.
The last option examined is the case where only aggregated monitoring
information becomes available to Experimenters (case 2c). The aggregation levels
available could be predetermined by Brokers and IaaS hosts (perhaps together with an
extra charge) and Experimenters would select the one that suits their needs. Allowing
Experimenters, Brokers and Infrastructure Providers to make their own choices meets the
Design for Choice principle and should lead to a stable/fair outcome.
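A minimal sketch of such aggregation (hypothetical metric values and window sizes; the real BonFIRE monitoring interfaces would differ) averages raw samples over a window whose size sets the disclosure granularity:

```python
def aggregate(samples, window):
    """Average raw monitoring samples over fixed-size windows: trends stay
    visible while fine-grained, per-interval detail remains hidden."""
    return [
        sum(samples[i:i + window]) / len(samples[i:i + window])
        for i in range(0, len(samples), window)
    ]

cpu_raw = [10, 12, 11, 50, 52, 48, 20, 22]  # illustrative raw CPU-load samples
print(aggregate(cpu_raw, window=4))          # -> [20.75, 35.5]
print(aggregate(cpu_raw, window=8))          # -> [28.125] (coarsest level)
```

A larger window corresponds to a cheaper, more opaque tier; a smaller window reveals more but, per the analysis above, might carry an extra charge.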



Appendix B Interactions with Other Projects
This section lists the main interactions of SESERV members with other Challenge 1
projects for the period September 2011 to August 2012, most of which took place by
March 2012 in order to allow for timely progress and completion of the relevant work
documented in the present deliverable.
Date | Action | SESERV Impact/Result
Sep 27, 2011 | SESERV presentation at the ULOOP Industrial Workshop (B. Stiller) | ULOOP's Socio-economics Work Package leader will attend SESERV's Athens Workshop and provide respective tussle analysis input.
Sep 27-29, 2011 | Email communication | Discussion about tussle analysis with T. Leva from the SAIL project. New SAIL deliverable was emailed.
Oct 4, 2011 | Physical meeting at AUEB premises (C. Kalogiros, G. Stamoulis, C. Courcoubetis) | Finalization of the ETICS whitepaper in collaboration with M. Dramitinos from ETICS. Agreement to submit it to FuNeMs 2012 together with O. Dugeon (Orange Group, FR) from ETICS.
Oct 6, 2011 | Physical meeting at EC premises (8th concertation meeting) | Discussion with Anastasios Zafeiropoulos (ECONET) about D2.1 results and tussle analysis. Agreed to participate in the Athens Workshop.
Oct 6, 2011 | Physical meeting at EC premises (8th concertation meeting) | Discussion with P. Demestichas (OneFIT) about D2.1 results and tussle analysis. Agreed to present OneFIT's technology at the Athens Workshop in one of the focus groups.
Oct 6, 2011 | Physical meeting at EC premises (8th concertation meeting) | Discussion with N. Le Sauze (ETICS) about the Athens Workshop. Agreed to present the ETICS technology at the Athens Workshop in one of the focus groups.

Oct 21, 2011 | Physical meeting at AUEB premises (C. Kalogiros, I. Papafili) | Discussion with a PURSUIT member on the tussle analysis.
Oct 27, 2011 | SESERV presentation at the FISE workshop in Poznan (C. Kalogiros) | Discussion with UNIVERSELF's Technical Manager (Laurent Ciavaglia) and researcher (M. Stamatelatos) on tussle analysis methodology and results. A joint paper on combining tussle analysis and UBM was agreed for early 2012.
Nov 8, 2011 | Phone conference (I. Papafili, C. Kalogiros) | Discussion about tussle analysis with T. Leva and Nan Zhang from the SAIL project. Agreed to present the SAIL technology at the Athens Workshop in one of the focus groups.
Dec 2, 2011 | Email (I. Papafili, C. Kalogiros) | A set of questions about SAIL tussles was sent to T. Leva (SAIL).
Dec 8, 2011 - May 31, 2012 | Email communication (more than 50 mails exchanged) | Discussion about tussle analysis (in particular the game-theory model) with A. Bogliolo from the ULOOP project. Several documents were sent during that time.
Dec 8, 2011 - June 18, 2012 | Email communication | Discussion about tussle analysis with A. Radwan from the C2POWER project.
Dec 13, 2011 | Phone conference (M. Waldburger, P. Poullie, A. Bogliolo) | Discussion about first concrete steps for the tussle analysis modeling of the ULOOP project.
Dec 14, 2011 | Email (I. Papafili, C. Kalogiros) | A set of responses to questions about SAIL tussles was sent by T. Leva (SAIL).
Dec 20, 2011 | Phone conference (M. Waldburger, P. Poullie, C. Kalogiros) | Discussion with ULOOP's Socio-economics Work Package leader (A. Bogliolo) about tussle analysis. Agreed to present ULOOP's technology at the Athens Workshop in one of the focus groups.
Jan 12, 2012 | Physical meeting in Athens | Discussion with M. Stamatelatos about tussle analysis and the relationship to UBM.
Jan 20, 2012 | Phone conference (M. Waldburger, P. Poullie, C. Kalogiros) | Discussion with C2POWER's Technical Manager (A. Radwan) about tussle analysis. Unable to participate in the Athens Workshop due to a conflicting date with a C2POWER plenary meeting.
Jan 30, 2012 | Physical meeting in Athens (I. Papafili, C. Kalogiros) | Discussion with T. Leva (SAIL) and A. Kostopoulos (PURSUIT) about tussles in both architectures. Agreement to submit a FIA book chapter in 2012 about tussles in ICN.
Jan 31, 2012 | Physical meeting in Athens (I. Papafili, C. Kalogiros, C. Courcoubetis, G. Stamoulis, G. Thanos, M. Waldburger, M. Boniface, B. Stiller, B. Pickering, C. Tsiaras) | Participation in focus groups, interviews of participants and private discussions with other project members about the workshop conclusions.
Feb 22, 2012 | Email (C. Kalogiros, C. Tsiaras) | Survey about the Athens workshop was sent to participants.
Mar 30, 2012 | Phone conference (P. Poullie, A. Bogliolo) | Discussion of details of the game-theoretic model for ULOOP developed over the preceding months.
Apr 4, 2012 | Email (I. Papafili, C. Kalogiros) | Invitations sent to candidate participants for WP2-related focus groups at FIA Aalborg.
Apr 18, 2012; Apr 24, 2012 | Email (C. Kalogiros) | Discussion with M. Stamatelatos, S. Delaere and V. Goncalves from UNIVERSELF about tussle analysis and its relationship to MACTOR and UBM.
May 11, 2012 | Physical meeting (C. Kalogiros) | Focus Group on QoS-aware Future Internet at FIA Aalborg.
Jun 20, 2012 | Physical meeting (C. Kalogiros) | Discussion with F. von Bornstaedt about ETICS tussle analysis.
Jun 22, 2012 | Email (I. Papafili, C. Kalogiros) | Discussion with Tapio Leva and Nan Zhang (SAIL) on details of the function of the NRS component.
Jun 25, 2012 | Physical meeting (C. Kalogiros) | Discussion with Willis X. Chen (from BonFIRE) about tussle analysis.
Jun 26, 2012 | Email (I. Papafili, C. Kalogiros) | Discussion with George Xylomenos (PURSUIT) on details of the function of the RENE component.
Jul 10, 2012; Jul 17, 2012 | Email (C. Kalogiros, M. Boniface) | Discussion with Willis X. Chen (from BonFIRE) about tussle analysis.
Jul 16, 2012; Aug 2, 2012 | Phone conference (C. Kalogiros) | Discussion with M. Stamatelatos about UNIVERSELF tussle analysis.



Appendix C Related Documents
This Appendix contains related work supported by SESERV with respect to the
incentives work undertaken in WP2.
SESERV members coordinated the FIA book chapter that was submitted, bringing
together representatives from the PURSUIT and SAIL research projects. The
paper discusses the tussles (and their evolution) that could appear if ISPs adopted the
proposed instances of the information-centric paradigm. Furthermore, the two papers on
LiveShift (Future Internet Workshop, Munich, Germany, January 2012) and Policies (IFIP
Networking, Prague, Czech Republic, May 2012) reflect joint work and discussions that
integrated socio-economic perspectives into technical work performed at UZH on
incentives for peer-to-peer-based applications and systems; both acknowledge the
support of SESERV and disseminate its experience to the respective workshop and
conference audiences. Finally, "Future networks: Objectives and design goals" is
attached, discussing ITU recommendations for Future Networks.


C.1 FIA Chapter

[Full text of FIA chapter follows subsequently.]


© Springer-Verlag Berlin Heidelberg 2011

A Tussle Analysis for Information-centric Networking Architectures

Alexandros Kostopoulos (1), Ioanna Papafili (1), Costas Kalogiros (1), Tapio Levä (2), Nan Zhang (2), Dirk Trossen (3)

(1) Athens University of Economics and Business, Department of Informatics, Athens, Greece
(2) Aalto University, Department of Communications and Networking, Espoo, Finland
(3) University of Cambridge, Computer Laboratory, Cambridge, UK

{alexkosto, iopapafi, ckalog}@aueb.gr, {tapio.leva, nan.zhang}@aalto.fi, dirk.trossen@cl.cam.ac.uk
Abstract. Current Future Internet (FI) research brings out the trend of designing information-oriented networks, in contrast to the current host-centric Internet. Information-centric Networking (ICN) focuses on finding and transmitting information to end-users, instead of connecting end hosts that exchange data. The key concepts of ICN are expected to have significant impact on the FI, and to create new challenges for all associated stakeholders. In order to investigate the motives as well as the arising conflicts between the stakeholders, we apply a tussle analysis methodology in a content delivery scenario, incorporating socio-economic principles. Our analysis highlights the interests of the various stakeholders and the issues that should be taken into account by designers when deploying new content delivery schemes under the ICN paradigm.
Keywords: information-centric networking, content delivery, future internet architecture, tussles, incentives, socio-economics, value network.
1 Introduction
Over recent years, an increasing number of users have gained access to the Internet via numerous devices equipped with multiple interfaces, capable of running different types of applications, and generating huge data traffic volumes, mostly for content. The traffic stemming from these activities implies increased cost for Internet Service Providers (ISPs), due to congestion in their networks and the generated transit costs, as well as unsatisfactory Quality of Service (QoS) for some end-users.
This exponential growth of content traffic was initially addressed by peer-to-peer applications and Content Distribution Networks (CDNs). CDNs consist of distributed data centers where replicas of content are cached in order to improve users' access to the content (i.e., by increasing access bandwidth and redundancy, and reducing access latency). These CDNs effectively form overlay networks [1], performing their own traffic optimization and making content routing decisions using incomplete information about customers' location and demand for content, as well as the utilization of networks and available content sources. Similarly, ISPs perform individual traffic optimization using proprietary, non-native and usually non-scalable solutions for traffic monitoring and shaping (e.g., Deep Packet Inspection (DPI) boxes for peer-to-peer traffic), and have no incentive to reveal information about their networks to CDNs. This information asymmetry often leads to suboptimal system operation.
Information-centric Networking (ICN) postulates a fundamental paradigm shift away from a host-centric model towards an information-centric one. ICN focuses on information item discovery and transmission, not on the connection of end-points that exchange data. Thus, ICN has the potential to address the aforementioned information asymmetry problem efficiently, by including traffic management, content replication and name resolution as inherent capabilities of the network.
What remains the same is that the Internet is a platform composed of multiple technologies and an environment where multiple stakeholders interact; thus, the Internet is interesting from both the technological and the socio-economic viewpoint. Socio-economic analysis is a necessary tool for understanding system requirements and designing a flexible and successful FI architecture.
A first attempt to investigate socio-economic aspects of the FI in a systematic manner was made by Clark et al. [2]. They introduced the Design for Tussle principle, where the term tussle describes an ongoing contention among parties with conflicting interests. The need for designing a tussle-aware FI has thus emerged, to enhance the deployment, stability and interoperability of new solutions. Although there are plenty of counter-examples of adopted protocols and architectures that do not follow the Design for Tussle principle, tussle-aware protocols and architectures are expected to have better chances of adoption and success in the long term [3].
The need to understand the socio-economic environment, the control exerted on the design, and the tussles arising therein has also been highlighted in [4]. The purpose of this work is to explore and analyze the tussles that may arise in ICN, as well as to consider the roles of different stakeholders. Below, we present a tussle analysis methodology which extends the one originally developed within the SESERV project [5], and apply it to a content delivery scenario. We focus on the tussle spaces of name resolution, content delivery and caching.
This paper is organized as follows: In Section 2, we present our methodology for identifying tussles among different stakeholders. Section 3 then provides an overview of representative information-centric networking architectures developed in the PURSUIT [6] and SAIL [7] research projects. In Section 4, we focus on a use case for content delivery; we identify the involved stakeholders and the major functionalities and roles that they can take, and then investigate the potential tussles among the stakeholders. Finally, Section 5 concludes the paper.
2 A Methodology for Tussle Analysis
This section provides a generic guide to better understanding the impact of a technology on the stakeholders' strategies, as well as on how other technologies might be used and deployed. Below, we extend the methodology presented in [8] and combine it with the Value Network Configuration (VNC) method introduced by Casey et al. [9]. The tussle analysis methodology consists of the following steps:
1. Identify all primary stakeholder roles and their characteristics for the functionality under investigation.
2. Identify tussles among the identified stakeholders.
3. For each tussle:
   (a) Translate knowledge into models by assessing the mid-term and long-term impact on each stakeholder;
   (b) Identify potential ways for stakeholders to circumvent negative impacts, and the resulting spill-overs.
4. For each circumventing technique, apply steps 1-4 again.
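The iterative structure of steps 1-4 can be sketched as a recursive procedure. This is a minimal illustration only: the function names, the stub analysis functions, and the depth guard are assumptions introduced for the sketch, not part of the methodology itself.

```python
def analyze_tussles(functionality, identify_stakeholders, identify_tussles,
                    assess_impact, find_circumventions, depth=0, max_depth=3):
    """Recursive sketch of the tussle analysis methodology (steps 1-4).

    The depth guard is an assumption added to terminate endless chains of
    circumvention and reaction; the methodology itself leaves this open.
    """
    if depth > max_depth:
        return []
    results = []
    stakeholders = identify_stakeholders(functionality)               # step 1
    for tussle in identify_tussles(stakeholders):                     # step 2
        impact = {s: assess_impact(tussle, s) for s in stakeholders}  # step 3(a)
        for technique in find_circumventions(tussle):                 # step 3(b)
            # step 4: each circumventing technique spawns a new analysis round
            results += analyze_tussles(technique, identify_stakeholders,
                                       identify_tussles, assess_impact,
                                       find_circumventions,
                                       depth + 1, max_depth)
        results.append((tussle, impact))
    return results
```

In practice, the four callbacks would be filled in by the analyst (e.g., with the stakeholders and tussles identified in Section 4); the recursion mirrors step 4's instruction to re-run the analysis for every circumventing technique.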
The involved stakeholders usually express their interests by making choices that affect the technology: deciding which technologies will be introduced, and how these will be dimensioned, configured, and finally used. All these collective decisions eventually determine how technology components operate and produce outputs that are valuable to these stakeholders. Technology outputs are assessed by each stakeholder individually and can affect real-world interactions (e.g., payments, price competition, price regulation and collaboration) or trigger new technology decisions. Such interactions allow the Internet to evolve and act as a living organism (Fig. 1).

Fig. 1. The Socio-Economic layer and Technology layer of the Internet ecosystem.
Several techniques or methods can be used to perform each of the aforementioned steps. In this paper, we show how the VNC method [9] can be incorporated into the tussle analysis. What makes the VNC method a particularly useful tool for tussle analysis is the separation of the stakeholders (or actors, as Casey et al. call them) from the functional roles the actors can take, thus allowing us to analyze multiple role combinations instead of being limited to a single value network.
Identifying functional roles - defined in [9] as sets of activities and technical components whose responsibility is not divided between separate actors in a particular scenario - is central to the VNC method. Because roles hold economic and strategic value, the actors fight for their control. Tussles emerge when there is a conflict of interest between the actor controlling a role and the other actors affected by it. Depending on which actor controls a role, the tussle outcomes and the circumventing techniques vary, which further motivates the usage of the VNC method.
The VNC method emphasizes technologies' role in defining the possible value networks by also identifying the technical components and the technical interfaces between them. By doing this, the method improves our understanding of the relationship between the technical architecture (a set of technical components linked to each other with technical interfaces, such as protocols) and the value network configuration (role division and related business interfaces among actors). This is important in analyzing whether the technology is designed for tussle [2], i.e., whether the technical design allows variation in value networks. Fig. 2 presents the notation from [9] that can be used to visualize the roles and the VNC.

Fig. 2. Notation of the VNC methodology.
After identifying the involved stakeholders as well as the tussles among them, the next step is to translate knowledge into models and provide quantitative analysis. In [10], a toolkit is suggested that uses mind-mapping techniques and system dynamics to model the tussles. System Dynamics (SD) [11] is a useful tool to evaluate dynamic interactions between multiple stakeholders, by simulating the possible outcomes (e.g., how technology diffuses) when multiple stakeholders interact. The main focus is on the assessment of outcomes and their evolution over time, since possible reactions can be modeled. After the causality models have been captured, relevant socio-economic scenarios may be formulated to investigate the potential consequences in the Internet market. We do not conduct an SD analysis in this paper due to space constraints.
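Although the SD analysis itself is out of scope here, the basic mechanics of such a model can be illustrated with a single stock (adopters of a technology) and a single flow (the adoption rate), in the spirit of a Bass-style diffusion model. All parameter values below are illustrative assumptions, not estimates from this paper.

```python
def simulate_diffusion(market=1000.0, p=0.03, q=0.38, steps=50, dt=1.0):
    """Minimal system-dynamics sketch: one stock (adopters) integrated over
    one flow (adoption rate), in the spirit of a Bass diffusion model."""
    adopters = 0.0
    history = [adopters]
    for _ in range(steps):
        potential = market - adopters
        # flow: innovation effect (p) plus imitation effect (q * adopted share)
        adoption_rate = (p + q * adopters / market) * potential
        adopters += adoption_rate * dt  # integrate the stock (Euler step)
        history.append(adopters)
    return history
```

A full tussle model would add further stocks and feedback loops (e.g., stakeholder revenues reacting to adoption), but already this sketch produces the characteristic S-shaped diffusion curve that SD studies examine over time.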
3 Overview of ICN Architectures
Diverse research projects, such as PURSUIT [6], SAIL [7] and NDN [12], are emphasizing the need to move towards an ICN architecture. In this section we briefly present an overview of ICN architectures in order to provide the necessary background. We focus on the Publish/Subscribe (pub/sub) model adopted by PURSUIT and the Network of Information (NetInf) introduced by SAIL.
3.1 Publish/Subscribe
In the PURSUIT pub/sub paradigm, information is organized in scopes. A scope is a
way of grouping related information items together. A dedicated matching process
ensures that data exchange occurs only when a match in information item (e.g., a
video file) and scope (e.g., a YouTube channel) has been made. Each packet contains
the necessary meta-data for travelling within the network. Fig. 3 presents a high level
picture of the main architectural components of the pub/sub architecture.

Fig. 3. A Publish/Subscribe architecture for ICN [13].
At the application level, the pub/sub components implement applications based on
basic ICN services, enabling publications and subscriptions towards information
items within particular scopes.
At the network level, the architecture itself consists of three main functions: rendezvous, topology and forwarding. The rendezvous function implements the matching between publishers and subscribers of information based on several criteria. Moreover, the rendezvous service provides additional functionalities to implement policies associated with the matching, such as access control. When a publication is matched with one or more subscriptions, an inter-domain forwarding graph is created in negotiation with the inter-domain topology formation (ITF) function. After constructing inter-domain paths between the forwarding networks to which publisher(s) and subscriber(s) are attached, intra-domain paths need to be constructed. This is done in collaboration with the AS-internal topology management (TM) function, which instructs its local forwarding nodes (FNs) to establish paths to local publishers/subscribers or to serve as transfer links between ASes.
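The core of the rendezvous function, matching publications and subscriptions on (scope, information item) pairs, can be sketched as follows. This is a toy model: the real rendezvous also applies policies (e.g., access control) and triggers forwarding-graph creation via the ITF/TM functions, which are not modeled here, and all names are illustrative.

```python
class Rendezvous:
    """Toy rendezvous function: matches publications and subscriptions on
    (scope, item) pairs; data exchange is triggered only on a match."""

    def __init__(self):
        self.publications = {}   # (scope, item) -> publisher id
        self.subscriptions = {}  # (scope, item) -> set of subscriber ids

    def publish(self, scope, item, publisher):
        self.publications[(scope, item)] = publisher
        return self._match(scope, item)

    def subscribe(self, scope, item, subscriber):
        self.subscriptions.setdefault((scope, item), set()).add(subscriber)
        return self._match(scope, item)

    def _match(self, scope, item):
        # in PURSUIT, a match would hand over to the ITF/TM functions to build
        # a forwarding graph; here we simply return the matched pairs
        pub = self.publications.get((scope, item))
        subs = self.subscriptions.get((scope, item), set())
        return [(pub, sub) for sub in sorted(subs)] if pub else []
```

Note how a subscription arriving before any matching publication simply waits (no pairs are returned), mirroring the pub/sub decoupling of the two sides in time.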
3.2 Network of Information
The SAIL Network of Information (NetInf) aims at three architectural objectives: i) unique naming regardless of the Named Data Object's (NDO's) location and without a hierarchical naming structure; ii) receiver-oriented NDO delivery; and iii) a multi-technology and multi-domain approach, where any underlying technology and network can be leveraged [14]. The NetInf network consists of Name Resolution System (NRS) nodes and NetInf router (NR) nodes, which are illustrated in Fig. 4.
NetInf supports both name-based routing and name resolution. Name resolution enables scalable and global communication: NDOs are published into the network and registered by the NRS. Specifically, the NRS is used to register the network locators of NDO copies in the underlying network, which can potentially provide packet-level routing and forwarding functionalities. An NDO request can be resolved by the NRS into a set of network locators, which are used to retrieve a copy of the NDO from the optimum source based on a pre-defined criterion. At least one global NRS must exist in the NetInf network, but intra-domain NRSs are also possible.
The NetInf router node accepts NetInf names as input and decides how to route the request so that eventually an NDO is returned to the previous-hop NetInf node. This routing decision could be either towards an NRS or directly towards the NDO source, the latter representing the name-based routing scenario. In addition, NetInf cache servers for content replication can be placed both in the NR nodes and the NRS nodes.
Fig. 4. NetInf high-level architecture.
Fig. 4 also shows the high-level content retrieval process in NetInf. First, (1) an NDO owner publishes the NDO into the network by adding it to the NRS registry. When (2) a request for an NDO occurs, the NetInf router can either (3a) forward the request to an NRS for (3b) the set of locators, or it can (4) forward the request directly to the NDO source, depending on whether the NetInf router knows where the NDO is. Finally, (5) the NDO is returned to the requester via the same route as the request, and the NDO can be cached on every node that it passes.
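The retrieval logic of steps (2)-(4) can be sketched as below: a NetInf router forwards a request directly when it already knows a locator (name-based routing) and otherwise falls back to NRS resolution. Class and function names are illustrative assumptions; the locator-selection criterion used here (first registered copy) merely stands in for the architecture's unspecified "pre-defined criterion".

```python
class NRS:
    """Toy Name Resolution System: maps NDO names to network locators."""

    def __init__(self):
        self.registry = {}

    def publish(self, ndo_name, locator):        # step (1): register a copy
        self.registry.setdefault(ndo_name, []).append(locator)

    def resolve(self, ndo_name):                 # steps (3a)/(3b)
        return self.registry.get(ndo_name, [])


def route_request(ndo_name, known_locators, nrs):
    """Steps (2)-(4): use name-based routing when a locator is already known,
    otherwise resolve the name via the NRS."""
    if ndo_name in known_locators:               # step (4): forward directly
        return known_locators[ndo_name]
    locators = nrs.resolve(ndo_name)             # steps (3a)/(3b)
    # pick the "optimum" source by a pre-defined criterion (here: first copy)
    return locators[0] if locators else None
```

The `known_locators` table plays the part of a router's local routing hints; in a real deployment it would be populated, for instance, by on-path caching of previously forwarded NDOs.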
4 Tussles in Information-centric Networking
In this section, we focus on the content delivery use case in a generic ICN architecture and apply our combined tussle analysis and VNC methodologies to it. We first look into the intra-domain scenario and then build incrementally on the inter-domain scenario. As the first step of our methodology, we identify here the major functionalities, group them into roles and list the stakeholders that can take up these roles. Then, in the second step, we perform the tussle analysis on a per-functionality view.
4.1 The Content Delivery Use Case
As illustrated in Fig. 5, we consider two Access Network Providers (ANPs) that employ ICN to offer content delivery services to their customers. The two ANPs are connected through transit links to an Inter-Connectivity Provider (ICP). Both ANPs employing ICN have deployed their own networks of caches. Within the ANPs' premises, local NRSs are also provided, which are connected to a global NRS service. The NRSs could be controlled either by the respective network infrastructure provider (ANP or inter-connectivity provider) itself, or by a third party. Potential subscribers of an information item exist in both ANPs; however, only a single publisher (P1) of that specific content exists initially, in ANP1.

Fig. 5. Content delivery in an ICN architecture.
Intra-domain scenario. We assume that P1 in ANP1 publishes an information item to his local NRS, and the local NRS advertises the publication to the global NRS. Then, S1 in ANP1 sends a subscription for an information item to the local NRS of its ANP. The local NRS identifies that the requested information item is published within the ANP and matches P1 with S1. If more subscriptions for the same information item occur, the ANP may also decide to cache the content at another location in order to achieve load balancing and to provide higher QoS to its customers (subscribers).
Inter-domain scenario. Let us now assume that S2 in ANP2 also subscribes to his local NRS for the same information item. Since the information item is not published within ANP2, the local NRS informs the global NRS about this subscription. The global NRS, which is aware of P1, matches P1 with S2. ANP2 may cache the information item in its caching elements, in order to serve potential new subscribers.
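The ANP's decision to replicate an item once repeated subscriptions occur can be sketched as a simple popularity threshold. The threshold value and the class name are assumptions made for illustration; real cache management would also weigh storage cost against transit savings.

```python
class CachePolicy:
    """Replicate an item locally once it has attracted enough subscriptions."""

    def __init__(self, threshold=2):
        self.threshold = threshold  # illustrative popularity threshold
        self.counts = {}

    def on_subscription(self, item):
        """Return True when the ANP should cache the item locally."""
        self.counts[item] = self.counts.get(item, 0) + 1
        return self.counts[item] >= self.threshold
```

Under this sketch, the first subscription in each ANP is served from the original publisher (possibly across a transit link), while subsequent subscriptions justify a local replica, matching both the intra- and inter-domain scenarios above.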
4.2 Functionalities, Roles, Stakeholders
Based on the aforementioned use case, we identify the key functionalities and map them to five key roles (Table 1). There are multiple stakeholders in a position to control these roles, which would lead to different outcomes. Here, we focus on the role allocation visualized in Fig. 6, since it is a representative case for ICN. In our setup, the content access management (i.e., AAA) role can be taken by either the Content Provider (CP) or the ANP, the name resolution role is taken by either the ANP or a third-party provider (i.e., a Rendezvous Network (RENE) provider in [6]), whereas the other four roles are assigned to the ANP. The chosen role allocation differs from the typical situation in the market today, where other stakeholders, such as CDN providers or CPs, control the name resolution, caches and content network.
Table 1. Key roles and functionalities in ICN content delivery
Role | Functionalities
Name resolution | Content directory control, name-to-location resolution, rendezvous, matching, applying policies
Content access management | AAA (Authentication, Authorization, Accounting)
Cache management | Cache server control, selection of content to be cached, cache updating
Cache location ownership | Control of cache locations
Content network management | Content network resource selection, path selection, QoS

Fig. 6. Generic Value Network Configuration (VNC) for content delivery in ICN.
The major stakeholders that can take up the aforementioned roles in our scenario are presented in Table 2. We use parentheses to indicate additional roles that could potentially be taken up by stakeholders in other scenarios. Additionally, we include the CDN providers, as well as the regulators that exist in the current Internet, although their interests and actions are not the subject of this analysis.
4.3 Tussle Analysis
In this section, we identify tussles related to the key roles listed in Table 1. Each tussle is described with references both to the use case (Fig. 5) and the VNC (Fig. 6).
Table 2. Stakeholder - basic role mapping.
Stakeholder | Basic role(s)
End-user | Content consumption, (content creation)
Content Provider (CP) | Content creation, (content access management)
Internet Service Provider (ISP):
- Access Network Provider (ANP) | Access network operation, cache management, cache location ownership, content network management, (name resolution, content access management)
- Inter-Connectivity Provider (ICP) | Interconnectivity provisioning to ANPs, (name resolution)
NRS provider | Name resolution
Content Distribution Network Provider (CDN), e.g., Akamai | Cache management, cache location ownership, content network management, name resolution
Regulator | Competition regulation
Tussles related to name resolution
Spam requests tussle: The local NRS may decide to replicate the requested information to its own cache, like the rendezvous in the pub/sub model. In this case, the local NRS (or RENE) adds a subscription to its message towards the publisher, asking for the information to be forwarded also to the ANP's cache. Thus, an NRS could issue a request on behalf of another stakeholder (e.g., the end-user) for an information item that the latter is not interested in (spam). This combined service contradicts the functionality separation dictated in [2], since the rendezvous also performs content management besides its main function, i.e., name resolution.
Net neutrality tussle: The global NRS is potentially in a position to favor specific CPs by promoting their content over the content of other CPs, or by filtering the information items provided by the latter. Additionally, if the local NRS is provided by the ANP (similar to today's ISPs' DNS service bundled with access provisioning), there is an incentive for the NRS to forward the subscription to the local publisher. If the content is not locally published, then the ANP-owned local NRS (NRS2) may refuse to handle the request further, to avoid fetching the information object from a remote publisher or from the cache of a competing CDN and thereby increasing ANP2's interconnection costs. The latter case is also known as a "walled garden". Ideally, this situation is avoided by architectures that allow competition in the resolution service; otherwise, a regulator would have to ensure that end-users are allowed to send their subscriptions to the NRS of their choice.
Conflicting optimization criteria: When multiple sources can serve a request, a tussle occurs due to the actors' different preferences for the one to be used (e.g., cost concerns, performance attributes, regulatory constraints, or other local policies). For example, localization of traffic due to caching and content replication affects the volume exchanged between ANPs, as well as between ANPs and ICPs. If the local NRS forwards content requests to local caches, both the interconnection costs of ANPs and the revenues of ICPs decrease. This is naturally positive for ANPs but negative for ICPs.
Similarly, an ICP-owned global NRS may forward a subscription originating from a local NRS to publishers that are located behind a transit link, even if the information item was also available through a peering link (a different scenario than the one in Fig. 5). The same situation could appear if the local NRS is provided by a third party, similar to, e.g., Google's DNS, which may have different incentives. Such conflicting optimization criteria might imply a straightforward increase of interconnection cost for the ANP, and possibly degraded end-user Quality of Experience (QoE).
The actor who controls the name resolution is obviously able to restrict or even determine the options available to others. However, another actor (like an ANP, when the end-user has used a different NRS provider) may still be able to use a different source than the proposed one. For example, in [6], after the final matching of a publisher and a subscriber by the Rendezvous Network, the Topology Manager may create a path between the subscriber and a different publisher (i.e., an ANP's own cache server)[1]. This could be the case when the end-user or the NRS provider cannot verify which publisher has been actually used.
Furthermore, other stakeholders could enter the name resolution market. In an extreme case, even a CP may react by providing his own NRS. For example, YouTube could serve its information space by redirecting end-users to servers according to its own criteria. Such an NRS may also be provided as a premium service to other CPs. However, in both cases, client configuration by the end-users is required.
Finally, traditional CDN providers (like Akamai) could also react by announcing all the content items (publishers and caches) they are aware of to multiple NRS providers, or even by deploying their own name resolution servers.
Nevertheless, the name resolution role is central to ICN and of high interest to most stakeholders in this setup.
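The conflicting-optimization-criteria tussle reduces to different actors ranking the same candidate sources by differently weighted criteria. The sketch below is illustrative only: the attribute names and weights are assumptions, chosen to show how an ANP-controlled resolver and a user-oriented resolver can pick different sources for the same request.

```python
def pick_source(candidates, weights):
    """Rank candidate sources by a weighted cost; different actors plug in
    different weights and may therefore prefer different sources."""
    def cost(candidate):
        return sum(weights.get(attr, 0.0) * value
                   for attr, value in candidate["attrs"].items())
    return min(candidates, key=cost)["name"]


candidates = [
    {"name": "local-cache",      "attrs": {"transit_cost": 0.0, "latency": 20.0}},
    {"name": "remote-publisher", "attrs": {"transit_cost": 5.0, "latency": 5.0}},
]
# an ANP-run resolver minimizes interconnection cost ...
anp_choice = pick_source(candidates, {"transit_cost": 1.0, "latency": 0.01})
# ... while a user-oriented resolver minimizes latency
user_choice = pick_source(candidates, {"transit_cost": 0.0, "latency": 1.0})
```

Here the ANP's weighting selects the local cache (zero transit cost), while the user-oriented weighting selects the lower-latency remote publisher, which is exactly the conflict described above.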
Tussles related to content access management
Access control tussle: If the ICN architecture does not clearly specify how to limit access to certain end-users, the ANP may serve subscriptions from its local cache without consulting the CP's AAA system. This would destroy the CP's business, especially if it is based on transactional payments from end-users, but also if the CP sells advertising or information about content usage. A proposed solution is presented in [10], where the RENE could act as an accountability broker between the end-users and CPs.
Content usage statistics tussle: When the content is provided from local caches controlled by multiple stakeholders, the CP may lose visibility of how its content is used. This information has value, because payments from advertisers to the CP and from the CP to content makers are often based on the popularity of content.
Privacy tussle: Finally, a control tussle may arise between the stakeholder managing content access and the end-users, since the former can use personal and transactional data for purposes not approved by the end-user in order to make a profit, e.g., by selling data to marketing companies.
Tussle related to cache management
Content freshness tussle: The content cached in the ANP's caches may be outdated, because the ANP may be reluctant to update the content in order to reduce his interconnection (i.e., transit) costs. Then, the end-user's quality of experience degrades, since he does not receive the most recent information.

[1] Here, we assume that the Topology Manager is aware of the information item ID.
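The trade-off behind the content freshness tussle (serving a possibly stale cached copy versus paying transit cost to refetch) can be reduced to a time-to-live check, as sketched below. The TTL parameter is an assumption for illustration; the architectures discussed here leave the refresh policy to the cache operator, which is precisely what creates the tussle.

```python
def serve_from_cache(cached_at, ttl, now):
    """Serve the cached copy only while within its time-to-live (TTL); a
    longer TTL saves transit cost for the ANP but risks stale content."""
    return (now - cached_at) <= ttl
```

An ANP minimizing transit cost would set a long TTL; an end-user (or CP) maximizing freshness would prefer a short one, so the single `ttl` knob is where their interests collide.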
Tussles related to cache location ownership
Cache placement and interconnection agreements tussle: Tussles here mostly involve ISPs, since existing interconnection agreements may no longer be justifiable once a new cache is added. Hence, ISPs may try to affect peering ratios in advantageous ways (e.g., create an imbalance that violates their peering agreement). For example, an ANP deploying his own cache content network and having a peering arrangement with another ANP (which does not own a content network) may break this agreement in the hope of providing transit service to the latter. Similarly, an ICP that sees its revenues being reduced may decide to adjust transit prices or enter the content delivery market by providing global NRS services.
Tussles related to content network management
Network information tussle: An ANP may provide inaccurate information (or no information at all) about its network topology, dimensioning, current utilization, etc., fearing that this sensitive information could be revealed to its competitors. However, this may have a negative impact on the effectiveness of selecting publishers, and consequently paths between publishers and end-users that meet the QoE constraints posed by the latter. For example, if there are two publishers for a particular request, one of them may seem more appropriate (although it may not be) because its ISP is untruthful and provides biased network information (e.g., a lower delay on a path).
5 Discussion
ICN brings new challenges to the Internet market, since name resolution services may be offered by different stakeholders in order to meet their own optimization criteria; either by the ANP, or by a third party (such as a search engine or a significant CP). Such major stakeholders of today's Internet can be expected to extend their activities to offer NRSs in ICN.
Additionally, there is a clear incentive for an ANP to deploy ICN in order to enter the content delivery market. Due to the information-oriented nature of the network, an ANP could deploy his own caches, which implies that the ANP will gain more control over content delivery. Therefore, under suitable business agreements, this will increase his revenue, while simultaneously reducing his operational costs due to more efficient content routing and a reduction of inter-domain traffic. Moreover, CPs and end-users will also be affected: CPs will be able to provide their content to their customers through more communication channels, while end-users will enjoy increased Quality of Experience (QoE).
On the other hand, the emergence of ANP-owned CDNs will cause traditional CDNs to lose revenues and control over the content delivery market. Thus, legacy CDNs will probably react in order to maintain their large market share, or at least not to exit the market. CDNs may deploy their own backbone networks to interconnect their own caches, but they will still probably not be in a position to deploy access networks to reach the end-users; this is the ANPs' last frontier. Nevertheless, no matter how legacy CDNs react, such local CDNs owned by ANPs will be (and already are being) deployed (e.g., AT&T's CDN). The evolution of this competition and the way the system will be led to an equilibrium are the subject of future investigation and analysis.
Our contribution in this paper resides in the identification and analysis of tussles in
ICN, which should be considered by designers and engineers that aim at deploying
new content delivery schemes for the FI.
Acknowledgement. The authors would like to thank G. Xylomenos, G. D. Stamoulis, G. Parisis, C. Tsilopoulos and X. Vasilakos. The research of A. Kostopoulos and I. Papafili has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Heracleitus II - Investing in knowledge society through the European Social Fund. C. Kalogiros is supported by the EU-FP7 SESERV project. The research of T. Levä and N. Zhang is supported by the EU-FP7 SAIL project. The research of D. Trossen is supported by the EU-FP7 PURSUIT project.
References
1. D.D. Clark, W. Lehr, S. Bauer, P. Faratin, R. Sami, J. Wroclawski: Overlay Networks and the Future of the Internet. Communications and Strategies, 63, pp. 109-129, 2006.
2. D.D. Clark, J. Wroclawski, K.R. Sollins, R. Braden: Tussle in Cyberspace: Defining Tomorrow's Internet. IEEE/ACM Trans. Networking 13(3), pp. 462-475, June 2005.
3. C. Kalogiros, A. Kostopoulos, A. Ford: On Designing for Tussle: Future Internet in Retrospect. IFIP EUNICE 2009, LNCS 5733, pp. 98-107, Barcelona, September 2009.
4. I. Brown, D. Clark, D. Trossen: Should Specific Values Be Embedded in the Internet Architecture? ACM CoNEXT ReArch 2010, December 2010.
5. EU FP7 SESERV project, http://www.seserv.org/
6. EU FP7 PURSUIT project, http://www.fp7-pursuit.eu/
7. EU FP7 SAIL project, http://www.sail-project.eu/
8. C. Kalogiros, C. Courcoubetis, G.D. Stamoulis, M. Boniface, E.T. Meyer, M. Waldburger, D. Field, B. Stiller: An Approach to Investigating Socio-economic Tussles Arising from Building the Future Internet. FIA Book 2011, LNCS, vol. 6656, pp. 145-159, May 2011.
9. T. Casey, T. Smura, A. Sorri: Value Network Configurations in Wireless Local Area Access. 9th Conference of Telecommunication, Media and Internet, 2010.
10. D. Trossen, A. Kostopoulos: Exploring the Tussle Space for Information-Centric Networking. 39th TPRC, Arlington, VA, September 2011.
11. J. Sterman: Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/McGraw-Hill, 2000.
12. Named Data Networking (NDN) project, http://www.named-data.net/
13. D. Trossen, M. Sarela, K. Sollins: Arguments for an Information-Centric Internetworking Architecture. ACM Computer Communication Review, April 2010.
14. P. Pöyhönen, O. Stranberg (eds.): (D-B.1) The Network of Information: Architecture and Applications. SAIL project deliverable, 2011.

C.2 Networking Paper

[Full text of paper follows subsequently.]


Playback Policies for Live
and On-Demand P2P Video Streaming
Fabio V. Hecht(1), Thomas Bocek(1), Flávio Roberto Santos(1,2,*), and Burkhard Stiller(1)
(1) University of Zurich, Communication Systems Group CSG, Zurich, Switzerland
(2) Federal University of Rio Grande do Sul, INF, Porto Alegre, Brazil
{hecht,bocek,santos,stiller}@ifi.uzh.ch
Abstract. Peer-to-peer (P2P) has become a popular mechanism for video distribution over the Internet, allowing users to collaborate on locating and exchanging video blocks. The LiveShift approach supports further collaboration by enabling storage and later redistribution of received blocks, thus enabling time shifting and video-on-demand in an integrated manner. Video blocks, however, are not always downloaded quickly enough to be played back without interruptions. In such situations, the playback policy defines whether peers (a) stall the playback, waiting for blocks to be found and downloaded, or (b) skip them, losing information. Thus, for the first time, this paper investigates playback policies for P2P video streaming systems in a reproducible manner. A survey of currently-used playback policies shows that existing playback policies, required by any streaming system, have been defined almost arbitrarily, with minimal scientific methodology applied. Based on this survey and on major characteristics of video streaming, a set of five distinct playback policies is formalized and implemented in LiveShift. Comparative evaluations outline the behavior of these policies under both under- and over-provisioned networks with respect to the playback lag experienced by users, the share of skipped blocks, and the share of sessions that fail. Finally, the playback policies with the most suitable characteristics for either live or on-demand scenarios are derived.
Keywords: P2P, live streaming, video on demand, playback policies
1 Introduction

The peer-to-peer (P2P) paradigm has been successfully applied to increase scalability and decrease cost for the publisher of video streams on the Internet [7]. Since users of a P2P system already collaborate on distributing video streams, LiveShift [6] makes further collaboration possible by allowing peers to store received video streams and distribute them in the future, thus enabling time shifting or – if the combined storage is large – even video-on-demand (VoD). This gives users the freedom to watch any program from an arbitrary position on the time scale, without having previously prepared any local recording, skipping uninteresting parts until seamlessly catching up with the live stream as desired.

* Work developed while a guest Ph.D. student at the University of Zurich, CSG@IFI.
Content availability in any video streaming system is affected by network and server conditions, which may cause content not to be downloaded in time to be played. The challenge increases in P2P systems due to, e.g., poorly managed networks, asymmetric bandwidth of peers, traffic shaping at Internet service providers, free-riding, the limited view of peers, and the fact that users change their interest frequently – switching channels and, in the case of LiveShift, also time shifting. Content may even be available at some peers, but not downloaded before playback, because those peers have allocated all their upload capacity to serving other peers. In this paper, the term content availability is thus defined as content that is downloaded before its playback deadline.
The playback policy is the decision, when content is unavailable, whether to stall playback or to skip to a block that has already been downloaded. Though any P2P video streaming system needs to implement a playback policy, current systems either omit this information or adopt an arbitrarily defined policy. To the best of the authors' knowledge, this paper's work is the first aimed specifically at investigating and comparing the effect of different playback policies.
The two main research questions this paper addresses are (1) do different playback policies affect user experience in a P2P video streaming system, and (2) which playback policies are most suitable for live and on-demand scenarios? In order to answer these questions, this paper briefly overviews P2P video streaming and introduces key terminology in Sect. 2. Section 3 presents a survey of playback policies used by different live and on-demand P2P video streaming systems. Based on this, a classification and generalization of different playback policies are presented in Sect. 4, enabling a meaningful comparison among them. These policies have been implemented in LiveShift; Sect. 5 presents their evaluation and comparison under a variety of carefully selected scenarios and parameters. Finally, Sect. 6 concludes this paper.
2 Background and Terminology
LiveShift, like most widely deployed P2P video streaming systems, employs the mesh-pull approach [7, 10], which consists of dividing the stream into blocks that are announced and exchanged directly between peers, with no fixed structure.
Figure 1 illustrates various terms used in this paper, and Table 1 defines the nomenclature. In LiveShift, blocks have a fixed length L in the time scale. A viewing session starts when a user chooses a channel and a starting time t_0 (the current time, if live streaming). While the user holds on to (i.e., watches) a channel, the system attempts to locate and download the corresponding blocks. Ideally, the user would experience perfect playback, that is, playback without interruptions, in which the block to be played, b_p(t), is obtained from the playback position t:

    b_p(t) = t / L                                                  (1)
Fig. 1. Terminology
However, due to lack of content availability, the playback experienced by the user at t is the block b_e(t), as given by (2), where n_sk(t) is the number of skipped blocks and t_st(t) is the time stalled from t_0 until t.

    b_e(t) = n_sk(t) + (t − t_st(t)) / L                            (2)
Performing initial buffering corresponds to stalling playback until the playback buffer accumulates a given number of blocks (typically a fixed configuration parameter), as an attempt to reduce the chance of skipping or stalling during playback. The start-up delay is the stalling time caused by initial buffering. The playback deadline determines the time at which a particular block is due to be played, according to the playback policy.
Table 1. Nomenclature

  L          Block length (time unit)                 σ      Buffer size (in blocks)
  t_0        Session start time                       α      Initial buffering coefficient
  t          Current playback position                β      Stalling coefficient
  b_p(t)     Block played at t if perfect playback    t_d    Remaining download time
  b_e(t)     Block played at t                        t_p    Remaining movie length
  t_lag(t)   Playback lag at t                        r      Relative incoming block rate
  n_sk(t)    Skipped blocks from t_0 to t             T      Maximum retries
  n_pl(t)    Played blocks from t_0 to t              n      Minimum block ratio
The term playback lag is commonly defined for live streaming as the elapsed time from the moment a block is generated at the source until it is played at the peer side [9]. In LiveShift, the concept of playback lag is extended to viewing time-shifted streams as well; playback lag is thus defined by (3) as the time difference between the block that is playing, according to the playback policy used, and the block that would be playing if there had been no interruptions since the moment the user pressed the play button. This extended definition preserves the original concept of measuring the overall ability of the system to locate and download content, while also being applicable to non-live viewing. Liveness is a general term that stands for low playback lag.

    t_lag(t) = (b_p(t) − b_e(t)) · L                                (3)
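As a concrete illustration, Eqs. (1)–(3) can be expressed directly in code. The following Python sketch uses the paper's nomenclature; the function names and example values are illustrative, not part of LiveShift.

```python
# Minimal sketch of Eqs. (1)-(3); variable names follow the paper's
# nomenclature (Table 1), numbers are purely illustrative.

def perfect_block(t: float, L: float) -> float:
    """Eq. (1): block played at position t under perfect playback."""
    return t / L

def experienced_block(t: float, L: float, n_sk: int, t_st: float) -> float:
    """Eq. (2): block actually played at t, given n_sk skipped blocks
    and t_st seconds spent stalling since t_0."""
    return n_sk + (t - t_st) / L

def playback_lag(t: float, L: float, n_sk: int, t_st: float) -> float:
    """Eq. (3): lag between perfect and experienced playback, in seconds."""
    return (perfect_block(t, L) - experienced_block(t, L, n_sk, t_st)) * L

# Example: 60 s into a session with 1 s blocks, 2 blocks skipped and 5 s
# stalled; skipping 2 blocks recovered 2 s of the 5 s lost -> 3 s of lag.
lag = playback_lag(t=60.0, L=1.0, n_sk=2, t_st=5.0)
```

Note how skipping trades information loss for liveness: each skipped block reduces the lag accumulated by stalling by exactly L seconds.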
While both stalling and skipping negatively impact user-perceived video quality, their impact is different. On the one hand, when stalling occurs, the video stream is interrupted and playback lag increases. In addition, peers with higher playback lag lose the ability to provide streams with more liveness to other peers, negatively impacting the entire system. On the other hand, when skipping occurs, image quality may be impaired. Moreover, since skipped blocks are not downloaded, they cannot be uploaded to any other peer, creating buffer "holes" that may harm the distribution overlay. In both cases, peers need a larger number of providers to compensate for the missing blocks, which may be challenging, since upload capacity is typically a scarce resource in such systems [7].
3 Related Work
Though the implementation of a playback policy is required in any video streaming system, it is often omitted from the system's specification. Works that do describe the adopted playback policy are introduced in this section.
The most popular P2P live video streaming applications, such as SopCast [2], are proprietary and do not disclose their policies in detail. Measurement studies [13], though, suggest that these systems, after performing initial buffering, employ a window that moves at constant speed, skipping all blocks that are not downloaded in time. The assumption is that, since streaming is live, maintaining liveness is more important than attempting to play every single block. It also helps keep peers mutually interested in a narrow content window. Such a policy is also used in works that model live P2P systems [8].
For VoD, the assumption is frequently the opposite. Since liveness is not important, an intuitive policy would be to stall, after performing initial buffering, every time a block to be played is missing. In a P2P system, though, such a policy could cause playback to stall for a very long time if a few blocks are very rare. The VoD P2P client Tribler [11] addresses this issue by stalling playback only if less than 50% of a 10-second playback buffer is filled; otherwise, it skips to the next available piece.
The work presented in [12] also uses Tribler for VoD, but adopts a different policy. Playback is stalled until a 10-second-long buffer is filled and the remaining download time is less than the duration of the video plus 20%. The policy does not allow skipping.
Gridcast [4] is a P2P VoD system whose playback policy consists of stalling if a block is not available at its playback deadline, while attempting to play it up to 10 times. If the block still has not been downloaded, playback skips to the next block. Initial buffering is 10 seconds, and each block is 1 second long.
LiveShift [6] adopts a unified policy for live and on-demand streaming, which consists of skipping n contiguous missing blocks if and only if the peer holds at least 2n contiguous blocks immediately after them, otherwise stalling. It aims not to let peers stall for long when only a few blocks are unavailable from current neighbors, while skipping when there is a good chance of continuing playback without interruptions.
While these works briefly discuss the playback policy adopted, they do not offer a justification for why those algorithms and parameters were chosen for the foreseen scenarios.
4 Playback Policies
This section describes and generalizes a set of four playback policies based on the related-work survey presented in Sect. 3, plus a new Catchup policy, enabling a fair and meaningful comparison among them. The respective trade-offs in both live and on-demand scenarios are analyzed and discussed as well.
4.1 Always Skip and Skip/Stall Policies
The Always Skip policy, commonly used for live streaming, consists of always skipping missing blocks to maintain liveness. It is defined by P_as = (σ, α), where σ represents the size of the playback buffer (in blocks), and α ∈ (0, 1] corresponds to the share of blocks that the buffer must hold before playback starts. After the first block has been played, buffered blocks are played sequentially and at constant speed. Missing blocks are immediately skipped; however, if the buffer is empty, playback stalls to perform initial buffering again. This is done so that peers adapt their playback position b_e(t) according to the blocks that can be found at – and downloaded from – currently known peers.

The Skip/Stall (sk) policy extends the Always Skip policy to allow stalling, as in Tribler [11]. It is defined as P_sk = (σ, α, β), which introduces the coefficient β ∈ [0, 1], such that, when the block at b_e(t) is missing and the buffer is not empty, the system stalls playback until a share β of the playback buffer is filled; then, it skips to the next available block. The Always Skip policy is thus an instance of the sk policy with β = 0.
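The decision logic of the sk family can be summarized as one rule per playback tick. The following Python fragment is a minimal illustrative sketch (the function signature and buffer abstraction are assumptions of this sketch, not LiveShift's actual API):

```python
def sk_decision(buffer_fill: int, sigma: int, beta: float,
                block_available: bool) -> str:
    """One playback-tick decision under the Skip/Stall policy
    P_sk = (sigma, alpha, beta), after initial buffering has completed.
    buffer_fill is the number of downloaded blocks currently buffered.
    Returns 'play', 'skip', or 'stall'."""
    if block_available:
        return "play"
    if buffer_fill == 0:
        return "stall"        # empty buffer: re-run initial buffering
    if buffer_fill < beta * sigma:
        return "stall"        # wait until a share beta of the buffer is filled
    return "skip"             # skip to the next available block
```

With beta = 0 the middle condition never triggers, so a missing block is skipped whenever the buffer is non-empty, i.e., the function degenerates to Always Skip.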
4.2 Remaining Download Time Policy
Especially for VoD, it is reasonable to define a playback policy that depends on the remaining download time, so that stalling is reduced for users with a fast network, while buffering increases on slower networks. The Remaining Download Time (rd) policy stalls playback until the remaining download time t_d is at most t_p, the remaining movie length. In order to apply this policy to LiveShift, the concept of remaining playback time must be defined, since the stream ahead of the playback position may be very large – it extends until the current (live) time. Hence, t_p is a parameter that may be set to, e.g., 30 minutes, the amount of video for which buffering will attempt to guarantee playback.

The rd policy can be modeled as P_rd = (σ, α, β, t_p) by using the same algorithm as defined for the sk policy, but with a variable buffer size σ′, calculated from the parameters t_p and σ, instead of σ directly. An infinite geometric series is used to calculate how long the playback buffer would last, since the application continues to download blocks while the buffer is being consumed. If i represents the incoming block (i.e., download) rate, and L is the block length, let r represent the relative incoming block rate, such that r = i · L; thus, if r = 1, the peer is downloading blocks at a rate exactly sufficient to sustain normal playback. The variable buffer size σ′ can therefore be calculated as shown in (4), where σ is used both for initial buffering and as a general lower limit on the buffer size (when, for example, r ≥ 1). The coefficient α is present in the equation to preserve the semantics of remaining download time, since only a share α of the buffer is required by the playback policy to be held in the buffer.

    σ′ = max( t_p · (1 − r) / (α · L), σ )                          (4)
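The buffer-size calculation can be sketched in a few lines. The sketch below follows the geometric-series reasoning in the prose: a buffer holding α·σ′ blocks, refilled at relative rate r < 1 while being consumed, lasts α·σ′·L/(1−r) seconds, which is set to cover t_p; σ remains the lower limit. The function name and exact formula placement are illustrative assumptions, not LiveShift code.

```python
def rd_buffer_size(t_p: float, r: float, L: float,
                   alpha: float, sigma: int) -> float:
    """Variable buffer size sigma' for the rd policy. A buffer filled to a
    share alpha and refilled at relative rate r < 1 lasts
    alpha * sigma' * L / (1 - r) seconds (infinite geometric series);
    solving for sigma' so that this covers t_p, with sigma as lower limit."""
    if r >= 1.0:
        return float(sigma)   # downloading at least at playback speed
    return max(t_p * (1.0 - r) / (alpha * L), float(sigma))
```

For example, with t_p = 30 s, r = 0.5, L = 1 s, and α = 0.8, the buffer grows to 18.75 blocks; with r ≥ 1 it collapses to the minimum σ.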
4.3 Retry Policy
The Retry (re) playback policy is similar to the policy implemented in Gridcast [4], and is defined as P_re = (σ, α, T). It consists of performing initial buffering, then stalling whenever the block at playback position t is not available. The system retries playing the missing block up to T times, so playback stalls for at most T · L seconds per block. As soon as the missing block is downloaded, it is played back; if the stalling threshold is reached, though, playback skips to the next available block.
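The per-block retry logic reduces to a bounded stall. A minimal illustrative sketch (names assumed for this sketch):

```python
def re_decision(block_available: bool, retries_so_far: int, T: int) -> str:
    """Per-block decision under the Retry policy P_re = (sigma, alpha, T):
    stall and retry the missing block up to T times (at most T * L seconds
    of stalling per block), then skip to the next available block."""
    if block_available:
        return "play"
    if retries_so_far < T:
        return "stall"        # retry the same block on the next tick
    return "skip"
```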
4.4 Ratio Policy
The Ratio (ra) policy aims at skipping blocks only if there is a high chance of subsequently continuing playback without interruptions. It is described as P_ra = (σ, α, n), where σ and α retain their previous meaning. After initial buffering, if the block at the playback position is locally held, it is always played. If, however, the block is missing, a ratio of 1 : n is applied, such that x contiguous missing blocks are skipped if and only if at least x·n contiguous blocks are held directly after them.
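The 1 : n rule can be checked against a view of the buffer ahead of the playback position. The sketch below is an illustrative reading of that rule (the buffer-map representation is an assumption of this sketch):

```python
def ra_should_skip(buffer_map: list, n: int) -> bool:
    """Skip decision under the Ratio policy P_ra = (sigma, alpha, n).
    buffer_map[i] is True if the block at playback position + i is held.
    x contiguous missing blocks are skipped iff at least x * n contiguous
    blocks are held directly after them; otherwise playback stalls."""
    if not buffer_map or buffer_map[0]:
        return False          # block at playback position is held: play it
    x = 0
    while x < len(buffer_map) and not buffer_map[x]:
        x += 1                # count contiguous missing blocks
    held = 0
    while x + held < len(buffer_map) and buffer_map[x + held]:
        held += 1             # count contiguous held blocks after the gap
    return held >= x * n
```

For example, with n = 2, one missing block followed by two held blocks is skipped, whereas two missing blocks followed by only three held blocks are not.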
4.5 Catchup Policy
The Catchup (ca) playback policy is introduced to keep playback lag very low at the cost of skipping more blocks than other policies. It is defined by P_ca = (σ, α), where σ and α are used to perform initial buffering as in the sk policy. After playback has started, all missing blocks are skipped as long as the buffer is not empty. When it is indeed empty, the playback position is restored to the original one by skipping all blocks responsible for playback lag, until b_p(t) = b_e(t). The policy is meant to provide a practical lower bound on the achievable playback lag.
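The ca policy thus adds a single catch-up action to the Always Skip behavior. An illustrative sketch of one playback tick (names and return convention assumed for this sketch):

```python
def ca_decision(buffer_empty: bool, block_available: bool) -> str:
    """One playback-tick decision under the Catchup policy P_ca = (sigma, alpha),
    after initial buffering: skip any missing block while the buffer is
    non-empty; on an empty buffer, skip all blocks responsible for the lag,
    moving the playback position forward until b_e(t) == b_p(t)."""
    if buffer_empty:
        return "catch-up"     # jump forward so that playback lag resets to 0
    return "play" if block_available else "skip"
```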
5 Evaluation
All playback policies defined in Sect. 4 have been implemented in LiveShift. Experiments were conducted using the entire LiveShift code on a deployed test-bed of 20 machines [3]. The main objective was to compare how different playback policies affect user experience in a P2P video streaming system under scenarios with different levels of content availability.

Table 2 displays the scenarios used. Peers are divided into classes according to their maximum upload capacities – while high upload capacity (HU) peers and peercasters (PC) are able to upload at a rate equivalent to 5 times the bit rate of the video stream being transmitted, low upload capacity (LU) peers are able to upload at only 0.5 times the original stream rate. The increasing number of LU peers causes the available upload bandwidth to decrease; while Scenario s1 has an abundance of total upload capacity compared to the number of peers to be served, in Scenario s4 the chance that peers experience content unavailability is much higher. It is important to note that peers with unused upload capacity might only hold unpopular content, leading to suboptimal overlay resource usage. Peers are not artificially limited in download bandwidth, and latency between peers was introduced by using a random sample from the King dataset [5], enforced using DummyNet [1]; the sample has an average latency of 114.2 ms. This paper only displays results for scenarios s1 and s4, for brevity and because scenarios s2 and s3 produced intermediate results between s1 and s4.
Table 2. Evaluation scenarios

  Scenario   # PC   # HU   # LU   Total Peers   Total Upload Capacity
  s1         6      15     60     81            135
  s2         6      15     90     111           150
  s3         6      15     120    141           165
  s4         6      15     150    171           180
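The Total Upload Capacity column (in multiples of the stream bit rate) follows directly from the class sizes and the stated rates. A quick arithmetic check, in illustrative Python (not part of the test-bed code):

```python
# Sanity check of Table 2's total upload capacity: PC and HU peers
# upload at 5x the stream bit rate, LU peers at 0.5x.
def total_capacity(pc: int, hu: int, lu: int) -> float:
    return 5.0 * pc + 5.0 * hu + 0.5 * lu

scenarios = {"s1": (6, 15, 60), "s2": (6, 15, 90),
             "s3": (6, 15, 120), "s4": (6, 15, 150)}
capacities = {name: total_capacity(*cls) for name, cls in scenarios.items()}
# capacities == {"s1": 135.0, "s2": 150.0, "s3": 165.0, "s4": 180.0}
```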
Multiple instances of LiveShift were executed using the same settings as in [6]. In short, peers were created with an inter-arrival time of 1 s. Every peer was programmed to repeatedly switch to a channel and starting time t_0, then hold on to the channel, attempting to locate and download blocks. While holding on to the channel, every peer reported, once per second, its experienced playback lag t_lag(t), as defined in Sect. 2, and its share of skipped blocks n_sk(t)/(n_pl(t) + n_sk(t)). Channel popularity and holding time were both characterized by traces, as described in [6]. Results were obtained through 10 runs of 20 minutes each.
Due to content unavailability, a peer may sometimes experience very long stalling times. In such cases, it is not realistic to assume that users would wait indefinitely for the content. Thus, when a peer is able to play less than 50% of the blocks in a moving window of 30 seconds, playback is considered failed, that is, the user is considered to have given up and switched to another (channel, t_0) pair. Buffering is not taken into account, since it is part of the playback policy being investigated.
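The failure rule above amounts to a moving-window check over per-second playback samples. The following Python fragment is an illustrative reading of that rule, not the actual LiveShift evaluation code:

```python
from collections import deque

def session_failed(played_flags, window: int = 30,
                   threshold: float = 0.5) -> bool:
    """Failure detector for the rule stated above: the session is considered
    failed once less than `threshold` of the per-second samples in a moving
    `window` were played. played_flags is an iterable of booleans, one per
    second (True = a block was played that second, False = not played)."""
    recent = deque(maxlen=window)
    for played in played_flags:
        recent.append(played)
        if len(recent) == window and sum(recent) / window < threshold:
            return True
    return False
```

Note that exactly 50% played in a window does not fail the session; the rule only triggers strictly below the threshold.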
Since the goal of the evaluation is to measure the impact of playback policies on the entire overlay, in each run all peers were configured to adopt one of the playback policies defined in Sect. 4. Each playback policy, as specified in Table 3, has been investigated using different values for its main parameters, based on values seen in the literature, complemented by additional values that allow a deeper understanding of their effect. To make results comparable, parameters that apply to all playback policies were kept constant; all experiments were obtained using σ = 6 s, α = 0.8, and L = 1 s. The evaluation metrics used are playback lag, share of skipped blocks, and share of failed playback sessions.
Table 3. Playback policies and parameters

  Policy                    Parameter     Identifier
  Skip/Stall                β = 0         sk-0
                            β = .5        sk-.5
                            β = .75       sk-.75
  Remaining Download Time   t_p = 5 s     rd-5
                            t_p = 30 s    rd-30
                            t_p = 60 s    rd-60
  Retry                     T = 1         re-1
                            T = 5         re-5
                            T = 10        re-10
  Ratio                     n = 2         ra-2
                            n = 3         ra-3
                            n = 5         ra-5
  Catchup                   (none)        ca
5.1 Playback Lag
Playback lag is an important metric for evaluating user experience, since a lower value denotes a lower start-up delay, fewer interruptions, and closer adherence to what the user initially intended to watch. It is expected to increase with longer sessions, as well as with lower content availability, due to stalling. Reports from each peer and run were collected, and an average was calculated for every 1-minute interval. The same procedure was applied to all runs.
The distribution of playback lag among different peers and at different t values was analyzed for each policy. For most peers, playback lag differences for the investigated parameters are consistent, as exemplified in Fig. 2, which shows the CDF of playback lag at 10 min of holding time under the sk policy. Peers with the highest playback lag (i.e., above the 90th percentile), however, suffer from severe content unavailability, as a result of the high channel-switching frequency in the defined peer behavior; these peers are not able to download any blocks, so the playback policy cannot play a significant role. Since this occurs under all other investigated policies as well, all playback lag plots in the rest of this section focus on the 80th percentile, that is, the maximum playback lag experienced by 80% of the peers.

Fig. 2. CDF of playback lag under Skip/Stall playback policy in scenarios s1 and s4
Fig. 3. Playback lag under Skip/Stall playback policy in scenarios s1 and s4
Always Skip and Skip/Stall Playback Policies. Experiments with the sk playback policy were made using three different values for the parameter β, as shown in Fig. 3. While the x-axis represents the time t (in minutes) for which a user holds on to a channel, the y-axis represents the playback lag t_lag(t) (in seconds). The experiments reveal that the sk playback policy is extremely flexible. It is able to maintain a relatively low playback lag even for longer sessions (higher holding-time values) when β = 0 (sk-0, the Always Skip policy). In Scenario s1, sk-.5 and sk-.75 display very distinct results, yet in Scenario s4 they yield very similar results in terms of playback lag. This is due to the fact that, in a scenario with more available upload bandwidth, peers have more opportunities to perform several parallel downloads, hence the chance that a peer is able to download blocks out of order (and thus is able to skip) is higher.
Remaining Download Time Playback Policy. The rd playback policy was instantiated with different values of t_p, the minimum playback time the policy attempts to guarantee, considering the current download rate. Figure 4 shows that, in the over-provisioned Scenario s1, results differ little across the evaluated parameters. This is due to the fact that peers can often download at a rate r ≥ 1, so σ′ frequently reaches its minimum value σ, as σ′ decreases with a higher download rate. In the more bandwidth-restricted Scenario s4, larger values of t_p cause higher playback lag at higher holding times, as expected. In comparison with the other playback policies, the Remaining Download Time policy shows the highest playback lag, which is due to a potentially larger buffer, especially at lower download rates.
Fig. 4. Playback lag under Remaining Download Time playback policy in s1 and s4
Retry Playback Policy. The re playback policy was investigated with different values for the parameter T, which expresses the stalling limit per block. Figure 5 shows that, in both scenarios s1 and s4, while re-1 displays a lower playback lag than re-5 and re-10, it is still higher than the levels achieved under the sk policy. The fact that playback lag under the re-5 and re-10 policies is very similar is due to the unlikelihood, in both s1 and s4, of situations in which playback needs to stall for longer than 5, but less than 10, seconds until the block at the playback position is downloaded.
Fig. 5. Playback lag under Retry playback policy in scenarios s1 and s4

Ratio Playback Policy. Results with the ra playback policy were obtained using different values for the parameter n. Figure 6 shows that it has a noticeable impact on the experienced playback lag in the over-provisioned Scenario s1, but not in s4. As with the sk policy, peers have far fewer opportunities to skip in s4, due to the lower probability of performing parallel downloads.
Fig. 6. Playback lag under Ratio playback policy in scenarios s1 and s4
Catchup Playback Policy. The Catchup (ca) playback policy is designed to keep a very low playback lag by resetting it to zero, by skipping the necessary number of blocks, whenever the playback buffer is empty. Figure 7 shows that, as designed, it displays a relatively low playback lag in comparison to the other policies. Interestingly, while in Scenario s1 it displays a clearly higher playback lag than sk-0, the opposite is observed in s4. This happens due to the much higher probability in Scenario s4 that the buffer becomes empty, so that the catchup mechanism is triggered.
5.2 Skipped Blocks
Fig. 7. Playback lag under Catchup playback policy in scenarios s1 and s4

Figure 8 compares the mean share of skipped blocks under the playback policies and parameters investigated. Error bars indicate 95% confidence intervals of the means. The share of skipped blocks is, as expected, inversely related to the playback lag shown by each policy; hence, the sk-0 and ca policies skip more blocks than the other policies.
User-experienced image degradation levels vary according to the specific video encoding and decoding algorithms (codecs) used – to which LiveShift is agnostic. Understanding both the expected level of skipped blocks and the codec characteristics is thus crucial when choosing the appropriate playback policy for a specific situation.
Fig. 8. Skipped blocks in scenarios s1 and s4
5.3 Failed Playback Sessions
The share of sessions in which the peer stalls for so long that playback is considered failed represents less than 0.5% of all sessions in Scenario s1, as shown in Fig. 9. In contrast, in Scenario s4, the mean oscillates between 9.5% and 13.5% across all policies. The overlapping 95% confidence interval error bars indicate that the share of failed playback depends on each scenario's available upload capacity rather than on the playback policy used.
Fig. 9. Failed playback in scenarios s1 and s4
6 Discussion and Conclusions
Having observed the behavior of LiveShift under different playback policies, with different parameters, in scenarios ranging from under- to over-provisioned P2P networks, it is evident that different playback policies do affect user experience in a P2P video streaming system, in terms of playback lag and share of skipped blocks. Understanding their behavior is hence imperative in order to select the most appropriate policy for the desired result, whether that is keeping playback lag as low as possible, avoiding skipping many video blocks, or achieving a compromise. The ultimate decision may depend on the type of content being transmitted, or be left entirely up to the user.
This raises the second research question: which playback policies are more suitable for live and on-demand scenarios? Under circumstances in which minimizing playback lag is the main goal, as may be desired by viewers of live (e.g., sports) events, ca and sk-0 are the most suitable policies studied, considering that they have consistently shown much lower playback lag for the majority of peers compared to all other approaches. This comes, however, at the cost of a higher number of skipped blocks. If the lowest number of skipped blocks is the objective, the policies shown to skip less than 0.5% of the total blocks in both scenarios s1 and s4 are sk-.75, re-5, re-10, ra-3, ra-5, and all rd policies. These policies may be applied in cases in which occasional interruptions matter less than skipping content, for instance, VoD. Alternatively, a compromise between playback lag and skipped-block rate may be the goal. Policies that show a skipped-block rate below 0.5% and a playback lag of at most 45 seconds for 15-minute-long sessions (for 80% of peers) in the under-provisioned Scenario s4 are the following: ra-3, ra-5, re-5, re-10, and sk-.75. In Scenario s1, ra-3, re-1, sk-.5, and sk-.75 are the policies that yield a skipped-block rate below 0.5% and a playback lag of at most 9 seconds for 10-minute-long sessions, also for 80% of peers.
In all evaluated scenarios, peers adopt a uniform playback policy, which allows for an evaluation of its effect on the entire distribution overlay, under the assumption that all users are interested in either live or on-demand characteristics. Future work will investigate scenarios in which peers adopt mixed policies, which are likely in LiveShift. There is also the opportunity to combine characteristics of different policies. A further promising possibility is a predictive playback policy that considers past peer experiences to avoid stalling when the probability that a missing block will be downloaded in a timely fashion is low.
Acknowledgments
This work has been performed partially in the framework of the European FP7 STREP SmoothIT (FP7-2008-ICT-216259) and was partially supported by the FP7 CSA SESERV (FP7-2009-ICT-258138) through coordination inputs on incentives.
References
1. Dummynet – Network Emulation Tool for Testing Networking Protocols. http://info.iet.unipi.it/~luigi/dummynet/, last visited: February 2012.
2. SopCast – Free P2P Internet TV. http://www.sopcast.org, last visited: December 7, 2011.
3. Test-bed Infrastructure for Research Activities – Communication Systems Group (CSG) at the Department of Informatics (IFI), University of Zurich (UZH). http://www.csg.uzh.ch/services/testbed/, last visited: December 7, 2011.
4. B. Cheng, L. Stein, H. Jin, X. Liao, and Z. Zhang. GridCast: Improving Peer Sharing for P2P VoD. ACM Transactions on Multimedia Computing, Communications and Applications, 4:26:1–26:31, Nov. 2008.
5. K. P. Gummadi, S. Saroiu, and S. D. Gribble. King: Estimating Latency Between Arbitrary Internet End Hosts. In 2nd ACM SIGCOMM Workshop on Internet Measurement, IMW '02, pages 5–18, New York, NY, USA, Nov. 2002.
6. F. Hecht, T. Bocek, R. G. Clegg, R. Landa, D. Hausheer, and B. Stiller. LiveShift:
Mesh-Pull P2P Live and Time-Shifted Video Streaming. In 36th IEEE Conf. on
Local Computer Networks, LCN 2011, pages 319–327, Bonn, Germany, Oct. 2011.
7. X. Hei, C. Liang, J. Liang, Y. Liu, and K. Ross. A Measurement Study of a Large-
Scale P2P IPTV System. IEEE Transactions on Multimedia, 9(8):1672–1687, Dec.
2007.
8. R. Kumar, Y. Liu, and K. Ross. Stochastic Fluid Theory for P2P Streaming
Systems. In 26th IEEE International Conference on Computer Communications,
INFOCOM 2007, pages 919–927, May 2007.
9. C. Liang, Y. Guo, and Y. Liu. Is Random Scheduling Sufficient in P2P Video
Streaming? In 28th International Conference on Distributed Computing Systems,
ICDCS ’08, pages 53–60, June 2008.
10. N. Magharei and R. Rejaie. Mesh or Multiple-Tree: A Comparative Study of
Live P2P Streaming Approaches. In Proceedings of IEEE INFOCOM 2007, pages
1424–1432, 2007.
11. J. Mol. Free-riding, Resilient Video Streaming in Peer-to-Peer Networks. PhD
thesis, Delft University of Technology, Jan. 2010.
12. J. J. D. Mol, A. Bakker, J. A. Pouwelse, D. H. J. Epema, and H. J. Sips. The Design
and Deployment of a BitTorrent Live Video Streaming Solution. Intl. Symposium
on Multimedia, pages 342–349, Dec. 2009.
13. A. Sentinelli, G. Marfia, M. Gerla, L. Kleinrock, and S. Tewari. Will IPTV Ride
the Peer-to-Peer Stream? IEEE Comm. Magazine, 45(6):86–92, June 2007.

C.3 FI Paper

[Full text of paper follows subsequently.]
A Time-Shifted Video Streaming Approach
(LiveShift)
Fabio V. Hecht, Thomas Bocek, Burkhard Stiller
University of Zurich, Department of Informatics (IFI), Zurich, Switzerland
Email: {hecht,bocek,stiller}@ifi.uzh.ch
I. INTRODUCTION
The growth in deployment of Fiber to the Home (FTTH)
technology increases upload capacity at the edge, making the
peer-to-peer (P2P) paradigm especially attractive to increase
scalability and decrease cost for the publisher.
Since users of a P2P system already successfully collaborate
on distributing video streams, LiveShift allows for further
collaboration by having peers store received video streams
in order to distribute them in the future, thus, allowing time
shifting (TS) or – if the combined storage is large – even
Video-on-Demand (VoD). This enables a user to watch a
program from the start and jump over uninteresting parts until
seamlessly catching up with the live stream without having
previously prepared any local recording – all on the basis of
a P2P network.
Compared to a live video streaming system, users may
switch in LiveShift not only channels but also positions in a
potentially large time scale. This, added to the asymmetry of
interest inherent in such a scenario, demands a flexible protocol and policies
that do not require peers to be simultaneously interested in each other's data.
The contribution of LiveShift is a designed, prototyped,
and analyzed fully-distributed P2P streaming mesh-pull pro-
tocol and respective policies which are suitable both for live
streaming and VoD at the same time. Many proposed P2P
systems exist, which are engineered to support either VoD or
live streaming, but LiveShift is designed for the co-existence
of both. In this summary, the system is evaluated using traces
from a real IPTV system to model peer behavior including
channel switching. Results are measured in terms of Quality-
of-Experience (QoE) metrics, such as playback lag, that are
important for the end user.
II. PROTOCOL DESIGN
LiveShift’s major protocol design objectives are the fol-
lowing: (1) Free Peercasting: Any peer is able to publish a
channel, becoming a peercaster; (2) Scalability: The approach
shall scale to a high number of peers; (3) Robustness: The
system must tolerate churn; (4) Full decentralization: No
central entities shall be present – except peercasters; and
(5) Low overhead: Network overhead introduced must be low.
A. Segments and Blocks
Users have the possibility of switching channels and
time shifting. LiveShift adopts the mesh-pull approach [4],
which adapts better to dynamic network conditions and churn
when compared to tree protocols [5], by dividing the stream
into chunks that are exchanged between peers with no fixed
structure. Two levels of chunking are used – a segment is an
addressing entity, which is made up of several smaller blocks.
Each segment is uniquely identified by a SegmentIdentifier,
which is a pair (channelId, startTime) announced on the
tracker by peers which offer video blocks within a segment.
Blocks are small-sized, fixed-time video chunks, and are the
video unit exchanged by peers.
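The two-level chunking can be illustrated with a short sketch. The field and function names below are hypothetical (the paper does not fix a concrete encoding), but the constants are the values actually used in Section III-A: 10-minute segments composed of 1-second blocks, so any playback position maps to a SegmentIdentifier plus a block index within it.

```python
from dataclasses import dataclass

SEGMENT_LENGTH_S = 600  # 10-minute segments (value from Sec. III-A)
BLOCK_LENGTH_S = 1      # 1-second blocks (value from Sec. III-A)

@dataclass(frozen=True)
class SegmentIdentifier:
    channel_id: int
    start_time: int  # segment start, in seconds since channel start

def locate(channel_id: int, play_time_s: int) -> tuple[SegmentIdentifier, int]:
    """Map a playback position to (segment, block index within that segment)."""
    seg_start = (play_time_s // SEGMENT_LENGTH_S) * SEGMENT_LENGTH_S
    block_idx = (play_time_s - seg_start) // BLOCK_LENGTH_S
    return SegmentIdentifier(channel_id, seg_start), block_idx

# e.g., 12 min 34 s into channel 7 falls in the segment starting at 600 s
seg, idx = locate(7, 12 * 60 + 34)
```

With these values, each segment addresses 600 blocks, which keeps tracker announcements coarse-grained while block exchange stays fine-grained.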
B. Distributed Hash Table and Distributed Tracker
LiveShift uses a distributed hash table (DHT) to store the
channel list and individual channel information. The DHT is
responsible for storing the channel list. The distributed tracker
(DT) is responsible for mapping segments to a set of providers
– peers that hold at least one block in the segment. Both DHT
and DT are provided by TomP2P [1].
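The role of the distributed tracker can be mimicked with a plain mapping from segment identifiers to provider sets. This is a toy in-memory stand-in for the TomP2P-based DT, with hypothetical function names; in LiveShift the mapping is of course stored across peers, not in a local dictionary.

```python
from collections import defaultdict

# segment identifier (channelId, startTime) -> peers holding >= 1 block of it
tracker: dict[tuple[int, int], set[str]] = defaultdict(set)

def announce(channel_id: int, start_time: int, peer: str) -> None:
    """A peer advertises that it offers video blocks within the segment."""
    tracker[(channel_id, start_time)].add(peer)

def providers(channel_id: int, start_time: int) -> set[str]:
    """Return the candidate provider set for a segment (the basis of C_r)."""
    return set(tracker.get((channel_id, start_time), set()))

announce(7, 600, "peer-a")
announce(7, 600, "peer-b")
```

A joining peer would query `providers` for the segment containing its chosen (channelId, startTime) to seed its candidate set.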
C. Protocol Overview
The protocol is designed to allow for the implementation of different policies.
A peer r, when entering the system, retrieves the channel list from the DHT.
After having chosen a channelId and a startTime to tune into, r consults the
DT to retrieve a set C_r of candidate peers that advertised blocks in the
corresponding segment. Additional candidates are obtained via
PeerSuggestion messages. Peer r then contacts a number of candidates
p ∈ C_r by sending each a SubscribeRequest message, containing the
SegmentIdentifier and a declared upload capacity.
When a peer p ∈ C_r receives a SubscribeRequest from a peer r, it attempts
to place r in its subscribers set S_p. If S_p is not yet full, peer r is sent a
Subscribed message, with a block map indicating which blocks in the
requested segment p holds and a timeout value T_S. r will then be subscribed
to receive updates to the corresponding block map via Have messages. If S_p
is full, p checks if there is another peer q ∈ S_p that has lower priority than r
(according to the policy used, cf. III-A). If so, q will be preempted and
removed from the set. Thus, either q or r will receive a NotSubscribed
message. Limiting the size of S_p is important to control the number of Have
messages sent.
When r receives Subscribed, it adds p to the neighbor set N_r and needs to
verify interest periodically by computing the intersection between scheduled
blocks and blocks announced by p. If the intersection is not empty, r sends p
an Interested message, which makes p add r to Q_p ⊂ S_p, the queue of peers
waiting for an upload slot, and reply with a Queued message, with a timeout
value T_Q. Conversely, when p holds no more interesting blocks, r sends it
NotInterested to be removed from Q_p.
Peer p has a number of upload slots U_p, each of which is granted to an
interested peer r ∈ Q_p. When peer r is granted an upload slot, it receives a
Granted message. Similarly to what happens in S_p, peers with higher
priority may preempt peers from upload slots.
When r is granted an upload slot from p, it is allowed to send BlockRequest
messages to p and receive video blocks in BlockReply messages. This
happens until either r sends a NotInterested message, p sends a Queued
message (when preempted), or either sends a Disconnect message. Each
upload slot accepts up to two BlockRequests at a time, to fully utilize its
upload capacity, with no delays between sending a BlockReply and receiving
the next BlockRequest message.
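The subscription handling with preemption can be sketched as a minimal model. The class and bound below are illustrative, not LiveShift's actual implementation; priority here is simply the declared upload capacity, which is the first criterion of the membership ordering listed in Section III-A.

```python
MAX_SUBSCRIBERS = 4  # illustrative bound on the size of S_p

class Peer:
    def __init__(self):
        # subscribers set S_p: requesting peer -> declared upload capacity
        self.subscribers: dict[str, int] = {}

    def on_subscribe_request(self, r: str, upload_capacity: int) -> str:
        """Handle a SubscribeRequest from r; return the reply message sent."""
        if len(self.subscribers) < MAX_SUBSCRIBERS:
            self.subscribers[r] = upload_capacity
            return "Subscribed"
        # S_p is full: preempt the lowest-priority subscriber if r beats it
        q = min(self.subscribers, key=self.subscribers.get)
        if self.subscribers[q] < upload_capacity:
            del self.subscribers[q]              # q receives NotSubscribed
            self.subscribers[r] = upload_capacity
            return "Subscribed"
        return "NotSubscribed"

p = Peer()
for i in range(4):
    p.on_subscribe_request(f"r{i}", 10 + i)  # fill S_p with capacities 10..13
```

A subsequent request from a peer declaring capacity 20 would preempt the weakest subscriber, while one declaring capacity 1 would be rejected.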
D. Peer Departure and Failure
Three mechanisms are present to react quickly to peers
leaving unexpectedly or failing. When DHT routing errors
exceed a threshold, the failing peer is removed from all sets,
leaving space for other peers. Also, PingRequest messages
may be used to test if peers are on-line. Peers must reply
with a PingReply whenever they receive a PingRequest,
otherwise they are considered failed. Finally, a peer p, when
leaving, should send PeerSuggestion messages to all peers in S_p, containing
all peers in N_p as suggestions.
III. EVALUATIONS
Evaluation results were obtained not from simulations, but
by running full implementations of LiveShift. The defined
LiveShift protocol is flexible, and may be used with differ-
ent policies [3]. Evaluations include both channel browsing
behavior and churn to produce realistic results.
A. Policies Used
The following policies have been used in the evaluations and
have produced good results, but further work is required to
study them individually.
• Segment length: 10 minutes.
• Block length: 1 second.
• Block selection: next 15 missing blocks, at most 30 ahead
of playback position.
• Block rescheduling: if BlockRequest takes more than
twice the peer average response time.
• Candidate selection: initially 40 random peers from the
DT + PeerSuggestion + senders of Subscribe.
• Neighbor selection: maximum 15, order is: (1) least
amount of Subscribe sent, (2) highest amount of
blocks received, (3) random.
• No more peers are added to C_r, N_r, or I_r when receiving video blocks at
a rate sufficient to keep up with normal playback.
TABLE I
EVALUATION SCENARIOS

Scenario  Number PC  Number HU  Number LU  Churn
s2        6          15         90         0
s3        6          15         120        0
s2c30     6          15         90         30%
• Members of S_p and U_p are chosen in the following order: (1) highest
upload capacity, (2) highest number of blocks provided, (3) longest in queue.
• Number of upload slots: dynamically adjusted so that peer upload capacity
is fully used and each slot can upload at most at the full stream bitrate.
• Timeout values: T_S is set to 5 s, and T_Q to 10 s. Inactivity timeout for
peers in U_p is 4 s.
• Playback policy: skip n contiguous missing blocks if
and only if the peer holds at least 2n contiguous blocks
immediately afterward.
• Storage policy: storing all received blocks until the maxi-
mum capacity – 2 hours of video – is reached, then blocks
with oldest download time are removed.
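The playback policy above (skip n contiguous missing blocks if and only if at least 2n contiguous blocks are held immediately afterward) can be sketched as a predicate over the local buffer. The representation is hypothetical: `held` is the set of block indices already downloaded, and the horizon bound is an assumption to keep the search finite.

```python
def should_skip(held: set[int], position: int, horizon: int = 100) -> int:
    """Return n > 0 if playback should skip the n contiguous missing blocks
    starting at `position`, or 0 to keep waiting for them."""
    if position in held:
        return 0  # current block is present; nothing to skip
    n = 0
    while n < horizon and position + n not in held:
        n += 1  # measure the gap of contiguous missing blocks
    if n == horizon:
        return 0  # no held block found nearby; keep waiting
    # require at least 2n contiguous held blocks right after the gap
    following = 0
    while position + n + following in held:
        following += 1
    return n if following >= 2 * n else 0
```

For example, with blocks 5–8 held and playback stalled at block 3, the 2-block gap is skipped because 4 (= 2·2) contiguous blocks follow it; with only 5–7 held, the peer keeps waiting.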
B. Evaluation Scenarios and Peer Behavior
Table I lists the three scenarios considered in this summary. For complete
evaluation results, including Scenario s1, please refer to [3]. Peers were
divided into classes according to their maximum upload capacities. While
Peercasters (PC) and
High Upload (HU) peers are capable of uploading at 500% the
bitrate of the video stream, Low Upload (LU) peers can only
upload at 50%. Peers are not limited in download capacities.
All results obtained are averaged over 10 runs of 1 hour each.
The peer behavior was modeled using traces from a real
IPTV system [2]. Peers are created with an inter-arrival time
of 1 s and loop through the following two steps: (1) choose
a channel and starting time, (2) hold on to the channel, locating
and downloading content from other peers. In the scenario with
churn, peers, before step (1), have a certain chance of going
offline for an amount of time given by the channel holding time
distribution, before having again the same chance of remaining
offline or going back on-line.
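The peer behavior loop can be sketched as follows, under stated assumptions: a synthetic uniform channel choice and a coin-flip churn step stand in for the IPTV traces and the channel holding time distribution, which are not reproduced here.

```python
import random

CHURN_PROB = 0.30  # per-iteration chance of going (or staying) offline, as in s2c30

def peer_behavior(rng: random.Random, steps: int) -> list[str]:
    """Simulate one peer's looped behavior; returns the sequence of events."""
    events = []
    for _ in range(steps):
        # before step (1), a churning peer may repeatedly stay offline,
        # each offline period drawn from the holding-time distribution
        while rng.random() < CHURN_PROB:
            events.append("offline")
        channel = rng.randrange(6)           # step (1): pick one of 6 peercasters
        events.append(f"watch ch{channel}")  # step (2): hold, downloading blocks
    return events

trace = peer_behavior(random.Random(42), steps=5)
```

Every iteration ends in exactly one watching period, so a run of `steps` iterations always yields `steps` watch events interleaved with a random number of offline periods.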
C. Quality-of-Experience and Scalability
The main Quality-of-Experience (QoE) metric used is the
playback lag experienced by users, during holding time,
from the point a tuple (channelId, startTime) was selected.
Figs. 1-3 show the playback lag experienced as users hold on (watch) a
channel in the different proposed scenarios. The playback lag is the
difference between the time of the video block that is playing and the time of
the block that should be playing if there were no interruptions in the
playback. A lower playback lag means a lower start-up delay, fewer
interruptions, and playback closer to what the user initially intended to
watch, thus a better user experience. It can be seen that:
a. The lag increases slowly as users continue to view a
channel – this shows that users do not experience frequent
stalling.
[Figs. 1-3: playback lag (s) vs. holding time (min), plotting the 50%, 80%,
and 95% percentiles for LU and HU peers. Fig. 1: Playback lag in
Scenario s2; Fig. 2: Playback lag in s2c30; Fig. 3: Playback lag in
Scenario s3.]
b. Even in the worst case scenarios investigated, 95% of
HU peers experience lags of less than 10 s, which is an
acceptable performance.
c. LU peers are more susceptible to high lag, especially
in scenarios with churn or less available bandwidth. For
example Fig. 2 shows 5% of LU peers with a playback lag
above 25 s after watching for long periods of time, while in
Fig. 3 the worst 5% of LU peers have playback lag above
50 s after watching a channel for more than 40 min.
In Scenario s3, the average distance to the peercaster
increases and peers take longer on average to obtain upload
slots, which are more disputed, thus, the system shows signs
of being saturated, as the playback lag for several LU peers
surpasses 30 s. Average playback lag is 7.70 s in s2, 14.31 s
in s3, and 8.93 s in s2c30, showing that 30% churn increases
the average playback lag by about 15% in s2.
D. Skipped Blocks, Failed Playback, and Overhead
According to the playback policy defined in Section III-A,
some blocks may be skipped, not causing the playback lag
to increase, but in fact to decrease. The share of skipped
blocks is, on average, 2.41% in s2, and 1.86% in both s3
and s2c30. Interestingly, relatively fewer blocks are skipped in more
bandwidth-constrained scenarios, due to fewer concurrent downloads taking
place.
The availability of content is affected by the fact that peers
change their interest frequently. In the worst case, a peer may
not be granted an upload slot from any of the peers which
hold the blocks sought after. This may happen even when the
system has spare bandwidth, due to the imbalance in content
popularity – peers may have unused upload capacity due to
holding only unpopular content. If the playback stalls for a
long time, it is not realistic to assume that the user will wait
forever. Thus, when a peer, in a sliding window of the last
30 s of playback, is able to play less than half the blocks it
should, playback is considered failed, that is, the user gave
up and switched to another (channelId, startTime). Failed
playbacks are 0.60% in s2, 1.87% in s2c30, and 4.64% in s3.
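The failure criterion (fewer than half the blocks played within a 30-second sliding window) can be sketched as a predicate. The representation is hypothetical: `played[i]` records whether the 1-second block due at second i of the session actually played.

```python
WINDOW_S = 30  # sliding window over the last 30 s of playback

def playback_failed(played: list[bool]) -> bool:
    """True if, within the last WINDOW_S seconds of playback, fewer than
    half of the blocks that should have played actually did (i.e., the
    user is assumed to give up and switch elsewhere)."""
    window = played[-WINDOW_S:]
    if len(window) < WINDOW_S:
        return False  # not enough playback history to judge yet
    return sum(window) < len(window) / 2
```

For instance, a session that played only 10 of the last 30 due blocks counts as a failed playback, while a short session is never declared failed before a full window has elapsed.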
The overhead for a stream of 500 kbit/s is 3.00% on s2,
2.92% on s3, and 2.74% on s2c30, including DHT and DT
traffic.
IV. CONCLUSIONS AND FUTURE WORK
This summary presented a flexible and fully-decentralized
mesh-pull P2P protocol for locating and distributing both live
and time-shifted video streams in an integrated manner. It also sketched
policies that can be used with the LiveShift protocol, revealing important
evaluation results. These show that LiveShift supports a low playback lag
for users with high
bandwidth, even in the presence of churn (in form of channel
switching, time shifting, and peers disconnecting). For users
with more restricted bandwidth, high churn or low bandwidth
scenarios negatively affect the playback, but it remains within
60 s of transmission in the investigated scenarios.
While LiveShift defines an important first step towards the proposed use
case of supporting both live and time-shifted video streaming in a
fully-decentralized environment,
future work includes finding optimal policies and developing
an effective incentive mechanism to verify the upload capacity
of peers that may be applied in the proposed use case.
ACKNOWLEDGMENT
This work has been performed partially in the framework of the European
FP7 STREP SmoothIT (FP7-2008-ICT-
216259) and was backed partially by the FP7 CSA SESERV
(FP7-2009-ICT-258138) coordination inputs on incentives.
REFERENCES
[1] T. Bocek, “TomP2P - A Distributed Multi Map,” http://tomp2p.net/, 2011.
[2] M. Cha, P. Rodriguez, J. Crowcroft, S. Moon, and X. Amatriain, “Watch-
ing Television over an IP Network,” in Proceedings of the 8th ACM
SIGCOMM Conference on Internet Measurement, New York, NY, USA,
2008, pp. 71–84.
[3] F. Hecht, T. Bocek, R. G. Clegg, R. Landa, D. Hausheer, and B. Stiller,
“LiveShift: mesh-pull P2P live and time-shifted video streaming,” Uni-
versity of Zurich, Department of Informatics, Tech. Rep. IFI-2010-0009,
2010.
[4] X. Hei, C. Liang, J. Liang, Y. Liu, and K. Ross, “A Measurement Study
of a Large-Scale P2P IPTV System,” IEEE Transactions on Multimedia,
vol. 9, no. 8, pp. 1672–1687, December 2007.
[5] N. Magharei and R. Rejaie, “Mesh or Multiple-Tree: A Comparative
Study of Live P2P Streaming Approaches,” in Proceedings of IEEE
INFOCOM 2007, 2007, pp. 1424–1432.

C.4 Recommendation ITU-T Y.3001

[Full text of approved Recommendation Y.3001 follows subsequently.]






International Telecommunication Union


ITU-T Y.3001
TELECOMMUNICATION
STANDARDIZATION SECTOR
OF ITU
(05/2011)

SERIES Y: GLOBAL INFORMATION
INFRASTRUCTURE, INTERNET PROTOCOL ASPECTS
AND NEXT-GENERATION NETWORKS
Next Generation Networks – Future networks


Future networks: Objectives and design goals

Recommendation ITU-T Y.3001



ITU-T Y-SERIES RECOMMENDATIONS
GLOBAL INFORMATION INFRASTRUCTURE, INTERNET PROTOCOL ASPECTS AND NEXT-
GENERATION NETWORKS

GLOBAL INFORMATION INFRASTRUCTURE
General Y.100–Y.199
Services, applications and middleware Y.200–Y.299
Network aspects Y.300–Y.399
Interfaces and protocols Y.400–Y.499
Numbering, addressing and naming Y.500–Y.599
Operation, administration and maintenance Y.600–Y.699
Security Y.700–Y.799
Performances Y.800–Y.899
INTERNET PROTOCOL ASPECTS
General Y.1000–Y.1099
Services and applications Y.1100–Y.1199
Architecture, access, network capabilities and resource management Y.1200–Y.1299
Transport Y.1300–Y.1399
Interworking Y.1400–Y.1499
Quality of service and network performance Y.1500–Y.1599
Signalling Y.1600–Y.1699
Operation, administration and maintenance Y.1700–Y.1799
Charging Y.1800–Y.1899
IPTV over NGN Y.1900–Y.1999
NEXT GENERATION NETWORKS
Frameworks and functional architecture models Y.2000–Y.2099
Quality of Service and performance Y.2100–Y.2199
Service aspects: Service capabilities and service architecture Y.2200–Y.2249
Service aspects: Interoperability of services and networks in NGN Y.2250–Y.2299
Numbering, naming and addressing Y.2300–Y.2399
Network management Y.2400–Y.2499
Network control architectures and protocols Y.2500–Y.2599
Smart ubiquitous networks Y.2600–Y.2699
Security Y.2700–Y.2799
Generalized mobility Y.2800–Y.2899
Carrier grade open environment Y.2900–Y.2999
Future networks Y.3000–Y.3099

For further details, please refer to the list of ITU-T Recommendations.



Rec. ITU-T Y.3001 (05/2011) i
Recommendation ITU-T Y.3001
Future networks: Objectives and design goals



Summary
Recommendation ITU-T Y.3001 describes objectives and design goals for future networks (FNs).
In order to differentiate FNs from existing networks, four objectives have been identified: service
awareness, data awareness, environmental awareness, and social and economic awareness. In order
to realize these objectives, twelve design goals have been identified: service diversity, functional
flexibility, virtualization of resources, data access, energy consumption, service universalization,
economic incentives, network management, mobility, optimization, identification, reliability and
security. This Recommendation assumes that the target timeframe for FNs falls approximately
between 2015 and 2020. Appendix I describes technologies elaborated in recent research efforts that
are likely to be used as an enabling technology for each design goal.


History
Edition Recommendation Approval Study Group
1.0 ITU-T Y.3001 2011-05-20 13





ii Rec. ITU-T Y.3001 (05/2011)
FOREWORD
The International Telecommunication Union (ITU) is the United Nations specialized agency in the field of
telecommunications, information and communication technologies (ICTs). The ITU Telecommunication
Standardization Sector (ITU-T) is a permanent organ of ITU. ITU-T is responsible for studying technical,
operating and tariff questions and issuing Recommendations on them with a view to standardizing
telecommunications on a worldwide basis.
The World Telecommunication Standardization Assembly (WTSA), which meets every four years,
establishes the topics for study by the ITU-T study groups which, in turn, produce Recommendations on
these topics.
The approval of ITU-T Recommendations is covered by the procedure laid down in WTSA Resolution 1.
In some areas of information technology which fall within ITU-T's purview, the necessary standards are
prepared on a collaborative basis with ISO and IEC.



NOTE
In this Recommendation, the expression "Administration" is used for conciseness to indicate both a
telecommunication administration and a recognized operating agency.
Compliance with this Recommendation is voluntary. However, the Recommendation may contain certain
mandatory provisions (to ensure, e.g., interoperability or applicability) and compliance with the
Recommendation is achieved when all of these mandatory provisions are met. The words "shall" or some
other obligatory language such as "must" and the negative equivalents are used to express requirements. The
use of such words does not suggest that compliance with the Recommendation is required of any party.




INTELLECTUAL PROPERTY RIGHTS
ITU draws attention to the possibility that the practice or
implementation of this Recommendation may involve the use of a claimed Intellectual Property Right. ITU
takes no position concerning the evidence, validity or applicability of claimed Intellectual Property Rights,
whether asserted by ITU members or others outside of the Recommendation development process.
As of the date of approval of this Recommendation, ITU had not received notice of intellectual property,
protected by patents, which may be required to implement this Recommendation. However, implementers
are cautioned that this may not represent the latest information and are therefore strongly urged to consult the
TSB patent database at http://www.itu.int/ITU-T/ipr/.



© ITU 2012
All rights reserved. No part of this publication may be reproduced, by any means whatsoever, without the
prior written permission of ITU.

Rec. ITU-T Y.3001 (05/2011) iii
Table of Contents
Page
1 Scope ............................................................................................................................ 1
2 References..................................................................................................................... 1
3 Definitions .................................................................................................................... 1
3.1 Terms defined elsewhere ................................................................................ 1
3.2 Terms defined in this Recommendation ......................................................... 2
4 Abbreviations and acronyms ........................................................................................ 2
5 Conventions .................................................................................................................. 2
6 Introduction .................................................................................................................. 3
7 Objectives ..................................................................................................................... 3
7.1 Service awareness ........................................................................................... 3
7.2 Data awareness ............................................................................................... 4
7.3 Environmental awareness ............................................................................... 4
7.4 Social and economic awareness ..................................................................... 4
8 Design goals ................................................................................................................. 4
8.1 Service diversity ............................................................................................. 5
8.2 Functional flexibility ...................................................................................... 5
8.3 Virtualization of resources ............................................................................. 6
8.4 Data access ..................................................................................................... 6
8.5 Energy consumption ....................................................................................... 7
8.6 Service universalization .................................................................................. 7
8.7 Economic incentives ....................................................................................... 8
8.8 Network management ..................................................................................... 8
8.9 Mobility .......................................................................................................... 9
8.10 Optimization ................................................................................................... 9
8.11 Identification ................................................................................................... 10
8.12 Reliability and security ................................................................................... 10
9 Target date and migration ............................................................................................. 11
Appendix I – Technologies for achieving the design goals ..................................................... 12
I.1 Network virtualization (virtualization of resources) ...................................... 12
I.2 Data/content-oriented networking (data access) ............................................ 12
I.3 Energy-saving of networks (energy consumption) ......................................... 13
I.4 In-system network management (network management) ............................... 13
I.5 Network optimization (optimization) ............................................................. 14
I.6 Distributed mobile networking (mobility) ...................................................... 16
Bibliography............................................................................................................................. 17



Rec. ITU-T Y.3001 (05/2011) 1
Recommendation ITU-T Y.3001
Future networks: Objectives and design goals
1 Scope
This Recommendation describes objectives and design goals for future networks (FNs). The scope
of this Recommendation covers:
• Fundamental issues to which insufficient attention was paid in designing current networks,
and which are recommended to be the objectives of future networks (FNs)
• High-level capabilities and characteristics that are recommended to be supported by future
networks (FNs)
• Target timeframe for future networks (FNs).
Ideas and research topics of future networks (FNs) that are important and may be relevant to future
ITU-T standardization are included in Appendix I.
2 References
The following ITU-T Recommendations and other references contain provisions which, through
reference in this text, constitute provisions of this Recommendation. At the time of publication, the
editions indicated were valid. All Recommendations and other references are subject to revision;
users of this Recommendation are therefore encouraged to investigate the possibility of applying the
most recent edition of the Recommendations and other references listed below. A list of the
currently valid ITU-T Recommendations is regularly published. The reference to a document within
this Recommendation does not give it, as a stand-alone document, the status of a Recommendation.
[ITU-T F.851] Recommendation ITU-T F.851 (1995), Universal Personal
Telecommunication (UPT) – Service description (service set 1).
[ITU-T Y.2001] Recommendation ITU-T Y.2001 (2004), General overview of NGN.
[ITU-T Y.2019] Recommendation ITU-T Y.2019 (2010), Content delivery functional
architecture in NGN.
[ITU-T Y.2091] Recommendation ITU-T Y.2091 (2008), Terms and definitions for Next
Generation Networks.
[ITU-T Y.2205] Recommendation ITU-T Y.2205 (2011), Next Generation Networks –
Emergency telecommunications – Technical considerations.
[ITU-T Y.2221] Recommendation ITU-T Y.2221 (2010), Requirements for support of
ubiquitous sensor network (USN) applications and services in the NGN
environment.
[ITU-T Y.2701] Recommendation ITU-T Y.2701 (2007), Security Requirements for NGN
release 1.
3 Definitions
3.1 Terms defined elsewhere
This Recommendation uses the following term defined elsewhere:
3.1.1 identifier [ITU-T Y.2091]: An identifier is a series of digits, characters and symbols or any
other form of data used to identify subscriber(s), user(s), network element(s), function(s), network
entity(ies) providing services/applications, or other entities (e.g., physical or logical objects).

2 Rec. ITU-T Y.3001 (05/2011)
3.2 Terms defined in this Recommendation
This Recommendation defines the following terms.
3.2.1 component network: A single homogeneous network which, by itself, may not provide a
single end-to-end global telecommunication infrastructure.
3.2.2 future network (FN): A network able to provide services, capabilities, and facilities
difficult to provide using existing network technologies. A future network is either:
a) A new component network or an enhanced version of an existing one, or
b) A heterogeneous collection of new component networks or of new and existing component
networks that is operated as a single network.
NOTE 1 – The plural form "Future Networks" (FNs) is used to show that there may be more than one
network that fits the definition of a future network.
NOTE 2 – A network of type b may also include networks of type a.
NOTE 3 – The label assigned to the final federation may, or may not, include the word "future", depending
on its nature relative to any preceding network and similarities thereto.
NOTE 4 – The word "difficult" does not preclude some current technologies from being used in future
networks.
NOTE 5 – In the context of this Recommendation, the word "new" applied to a component network means
that the component network is able to provide services, capabilities, and facilities that are difficult or
impossible to provide using existing network technologies.
3.2.3 service universalization: A process to provide telecommunication services to every
individual or group of people irrespective of social, geographical, and economical status.
4 Abbreviations and acronyms
This Recommendation uses the following abbreviations and acronyms:
CDN Content Distribution Network
ET Emergency Telecommunications
FN Future Network
ICT Information and Communication Technology
IC Integrated Circuit
ID Identifier
IP Internet Protocol
OCDM Optical Code Division Multiplexing
P2P Peer-to-Peer
QoE Quality of Experience
QoS Quality of Service
SoA Service-oriented Architecture
5 Conventions
This Recommendation uses "is recommended" to indicate the main points to be taken into account
in the standardization of FNs. Detailed requirements and their degree ("required", "recommended",
or "optional") need further study.

Rec. ITU-T Y.3001 (05/2011) 3
6 Introduction
While some requirements for networks do not change, a number of requirements are evolving and
changing and new requirements arise, causing networks and their architecture to evolve.
For future networks, traditional requirements, such as promoting fair competition [ITU-T Y.2001],
which reflect society's values, remain important.
At the same time, new requirements are emerging. Numerous research projects have proposed
requirements pertaining to future society [b-NICT Vision] and [b-EC FI], and although there is still
a lack of consensus, it is clear that sustainability and environmental issues will be vitally important
considerations over the long term. New application areas such as Internet of Things, smart grids,
and cloud computing are also emerging. In addition, new implementation technologies, such as advanced
silicon and optical technologies, enable support of requirements that were conventionally considered
unrealistic, for example, by substantially reducing the production cost of equipment. All these
new factors introduce new requirements for networks.
The basic architecture of large-scale public networks, such as telecommunication networks, is
difficult to change due to the enormous amount of resources needed to build, operate, and maintain
them. Their architecture is therefore carefully designed to be flexible enough to satisfy continually
changing requirements. For instance, Internet Protocol (IP) absorbs and hides the different protocols
and implementations of underlying layers and, with its simple addressing and other features, it has
succeeded in adapting to the enormous changes in scalability, as well as factors such as quality of
service (QoS) and security.
However, it is not known if current networks can continue to fulfil changing requirements in the
future. Nor is it known whether the growing market of new application areas will have the potential
to finance the enormous investment required to change the networks, if the new architecture is to be
sufficiently attentive to backward compatibility and migration costs. Research communities have
been working on various architectures and supporting technologies, such as network virtualization
[b-Anderson], [b-ITU-T FG-FN NWvirt], energy-saving of networks [b-ITU-T FG-FN Energy],
and content-centric networks [b-Jacobson].
It is, therefore, reasonable to expect that some requirements can be realized by the new network
architectures and supporting technologies described in recent research activities, and that these
could be the foundation of the networks of the future, whose trial services and phased deployment
are estimated to fall approximately between 2015 and 2020. In this Recommendation, networks based
on such new architectures are named "Future Networks" (FNs).
This Recommendation describes objectives that may differentiate FNs from existing networks,
design goals that FNs should satisfy, target dates and migration issues, and technologies for
achieving the design goals.
7 Objectives
FNs are recommended to fulfil the following objectives, which reflect the new requirements that are
emerging. These objectives are either not considered primary or not realized to a satisfactory
extent in current networks; they are the candidate characteristics that clearly differentiate FNs
from existing networks.
7.1 Service awareness
FNs are recommended to provide services whose functions are designed to be appropriate to the
needs of applications and users. The number and range of services is expected to explode in the
future. FNs are recommended to accommodate these services without drastic increases in, for
instance, deployment and operational costs.

7.2 Data awareness
FNs are recommended to have architecture optimized to handle enormous amounts of data in a
distributed environment, and are recommended to enable users to access desired data safely, easily,
quickly, and accurately, regardless of their location. In the context of this Recommendation, "data"
are not limited to specific data types like audio or video content, but describe all information
accessible on a network.
7.3 Environmental awareness
FNs are recommended to be environmentally friendly. The architecture design, resulting
implementation, and operation of FNs are recommended to minimize their environmental impact, for
example, by reducing the consumption of materials and energy and by reducing greenhouse gas
emissions. FNs are also recommended to be designed and implemented so that they can be used to
reduce the environmental impact of other sectors.
7.4 Social and economic awareness
FNs are recommended to consider social and economic issues to reduce barriers to entry of the
various actors involved in the network ecosystem. FNs are also recommended to consider the need
to reduce their lifecycle costs in order for them to be deployable and sustainable. These factors will
help to universalize services, and allow appropriate competition and an appropriate return for all
actors.
8 Design goals
Design goals are high-level capabilities and characteristics that are recommended to be supported
by FNs. FNs are recommended to support the following design goals in order to realize the
objectives mentioned in clause 7. It should be noted that some of these design goals may be
extremely difficult to support in a particular FN, and that not every design goal will be implemented
in all FNs. Whether the support of each of these design goals in a specific FN will be required,
recommended, or optional, is a topic for further study.
Figure 1 below shows the relationships between the four objectives described in clause 7 and the
twelve design goals described in this clause. It should be noted that some design goals, such as
network management, mobility, identification, and reliability and security, may relate to multiple
objectives. Figure 1 shows only the relationships between a design goal and its most relevant
objective.

[Figure: the four objectives – service awareness, data awareness, environmental awareness, and social and economic awareness – and the twelve design goals: service diversity, functional flexibility, virtualization of resources, network management, mobility, reliability and security, data access, identification, energy consumption, optimization, service universalization, and economic incentives.]

Figure 1 – Four objectives and twelve design goals of future networks
8.1 Service diversity
FNs are recommended to support diversified services accommodating a wide variety of traffic
characteristics and behaviours. FNs are recommended to support a huge number and wide variety of
communication objects, such as sensors and terminal devices.
Rationale: In the future, services will become diversified with the appearance of various new
services and applications that have quite different traffic characteristics (such as bandwidth and
latency) and traffic behaviours (such as security, reliability, and mobility). This requires FNs to
support services that existing networks do not handle in an efficient manner. For example, FNs will
have to support services that require only occasional transmission of a few bytes of data, services
that require bandwidth in order of Gbit/s, Terabit/s, and beyond, or services that require end-to-end
delay that is close to the speed-of-light delay, or services that allow intermittent data transmission
and result in very large delay.
In addition, FNs will need to support a huge number and a wide variety of terminal devices to
achieve an all-encompassing communication environment. On the one hand, in the field of
ubiquitous sensor networks, there will be a huge number of networked devices, such as sensors and
integrated circuit (IC) tag readers, that will communicate using a very small bandwidth. On the
other hand, there will be some high-end applications, such as high-quality videoconferencing
applications with highly realistic sensation. Although the related terminal devices will not
necessarily be very numerous, huge bandwidths will nevertheless be required to support these
applications.
8.2 Functional flexibility
FNs are recommended to offer functional flexibility to support and sustain new services derived
from user demands. FNs are recommended to support agile deployment of new services that keep
pace with the rapid growth and change of user demands.
Rationale: It is extremely difficult to foresee all the user demands that may arise in the long-term
future. Current networks are designed to be versatile, supporting basic functions that are
expected to accommodate most future user demands in a sufficiently efficient manner.
However, the current network design approach does not always provide sufficient flexibility,

e.g., when the basic functions are not optimal for the support of some new services, thus requiring
changes in these same functions. Each addition or modification of functions to an already deployed
network infrastructure usually results in complex deployment tasks that need to be carefully
planned; otherwise, this may have an impact on other services that are running on the same network
infrastructure.
On the other hand, FNs are expected to enable dynamic modifications of network functions in order
to operate various network services that have specific demands. For example, video transcoding
and/or aggregation of sensor data inside the network (i.e., in-network processing) should be
possible. It should also be possible to implement new protocols for new types of services in FNs.
Services should coexist on a single network infrastructure without interfering with each other, in
particular, when a network function is added or modified to support a certain service. FNs should be
able to accommodate experimental services for testing and evaluation purposes, and they should
also enable a graceful migration from experimental services to deployed services in order to lessen
obstacles to new service deployment.
8.3 Virtualization of resources
FNs are recommended to support virtualization of the resources associated with networks, in order to
support partitioning of resources so that a single physical resource can be shared concurrently
among multiple virtual resources. FNs are recommended to support isolation of any virtual resource
from all others. FNs are recommended to support abstraction, whereby a given virtual resource need
not directly correspond to its physical characteristics.
Rationale: For virtual networks, virtualization of resources can allow networks to operate without
interfering with the operation of other virtual networks while sharing network resources among
virtual networks. Since multiple virtual networks can simultaneously coexist, different virtual
networks can use different network technologies without interfering with each other, thereby
allowing better utilization of physical resources. The abstraction property makes it possible to
provide standard interfaces for accessing and managing the virtual networks and resources, and
helps to support updating of virtual networks' capabilities.
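As an illustration only (this sketch is not part of the Recommendation, and all class and attribute names are hypothetical), the partitioning, isolation, and abstraction properties described above can be expressed in a few lines of Python:

```python
# Hypothetical sketch: one physical resource partitioned into isolated
# virtual slices, each presenting only an abstract view of its capacity.

class PhysicalLink:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.allocated = 0

class VirtualSlice:
    """A virtual resource: its holder sees only its own abstract capacity."""
    def __init__(self, link, share_mbps):
        if link.allocated + share_mbps > link.capacity:
            raise ValueError("partitioning violated: physical capacity exhausted")
        link.allocated += share_mbps
        self._link = link           # hidden: abstraction over the physical link
        self.capacity = share_mbps  # what the virtual network operator sees
        self.used = 0

    def send(self, mbps):
        # Isolation: a slice can never exceed its own share, so traffic in
        # one virtual network cannot starve another sharing the same link.
        if self.used + mbps > self.capacity:
            raise ValueError("slice budget exceeded")
        self.used += mbps

link = PhysicalLink(100)
slice_a = VirtualSlice(link, 60)   # virtual network A
slice_b = VirtualSlice(link, 40)   # virtual network B
slice_a.send(50)                   # A's usage is invisible to B
```

In this sketch, isolation is enforced by per-slice budgets, and the hidden `_link` attribute stands in for the physical characteristics that the abstraction conceals from the slice's operator.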
8.4 Data access
FNs are recommended to be designed and implemented for optimal and efficient handling of huge
amounts of data. FNs are recommended to have mechanisms for promptly retrieving data regardless
of their location.
Rationale: The main purpose of existing telephone networks has been to connect two or more
subscribers, enabling them to communicate. IP networks were designed for transmitting data
between specified terminals. Currently, users search for data on the networks using data-oriented
keywords, and access them without knowing their actual location. From a user's standpoint, networks
are used mainly as a tool for accessing the required data. Since the importance of data access will be
sustained in the future, it is essential for FNs to provide users with the means to access appropriate
data easily and without time-consuming procedures, while providing accurate and correct data.
The amount and properties of digital data in networks are changing. Consumer-generated media are
growing in an explosive manner: social networking services are creating huge volumes of blog
articles instantaneously; ubiquitous sensor networks [ITU-T Y.2221] are generating massive
amounts of digital data every second, and some applications called "micro-blogs" generate quasi-
real-time communication that includes multimedia data. These data are produced, stored, and
processed in networks in a distributed manner. In current IP networks, users access these data in the
network via conventional procedures, i.e., identifying the address and port number of the host that
provides the target data. Some data contain private information or digital assets, but there are no
built-in security mechanisms. Simpler, more efficient, and safer networking technology, dedicated to
handling huge volumes of data, will therefore be necessary in the future.

The traffic characteristics of such data communication are also changing. Traffic trends in FNs will
depend mainly on the location of data, rather than the distribution of subscribers. Because of cloud
computing, information and communication technology (ICT) resources, such as computing power
and stored data in data centres, are increasing. Combined with the proliferation of mobile devices
having insufficient ICT resources, this trend is shifting data processing from user terminals to data
centres. FN designers therefore need to pay close attention to these changes, e.g., the growing
importance of communications in data centres, and the huge number of transactions in and between
data centres to fulfil user requests.
8.5 Energy consumption
FNs are recommended to use device-level, equipment-level, and network-level technologies to
improve energy efficiency and to satisfy customers' demands with minimum traffic. FN
device-level, equipment-level, and network-level technologies are recommended not to work
independently, but rather to cooperate with each other in achieving a solution for network energy
savings.
Rationale: The lifecycle of a product includes phases such as raw material production,
manufacturing, use, and disposal, and all these phases need consideration in order to reduce the
environmental impact. However, energy consumption in the use-phase is usually the major issue for
equipment operating 24 hours a day, as is often the case in networks. Among the various types of
energy consumption, electric power consumption is usually predominant. Energy saving therefore
plays a primary role in reducing the environmental impact of networks.
Energy saving is also important for network operations. Necessary bandwidth usually increases as
new services and applications are added. However, energy consumption and its resulting heat may
become a significant physical limitation in the future, along with other physical limitations such as
optical fibre capacity or operational frequency of electrical devices. These issues may become a
major operational obstacle and, in the worst case, may prevent new services and applications from
being offered.
Traditionally, energy reduction has been achieved mostly by a device-level approach, i.e., by
miniaturization of semiconductor processing rules and the process integration of electrical devices.
However, this approach is facing difficulties such as high standby power and the physical limits of
operational frequency. Therefore, not only device-level approaches, such as power reduction of
electrical and optical devices, but also equipment-level and network-level approaches are essential
in the future.
Switching in the optical domain uses less power than switching in the electronic domain, but packet
queues are not easy to implement without electronic memory. Also, circuit switching uses less
power than connectionless packet switching.
Networking nodes, such as switches and routers, should be designed considering smart sleep mode
mechanisms, as with existing cell phones; this is an equipment-level approach. For network-level
approaches, power-effective traffic control should be considered. A typical example is the use of
routing methods that reduce the peak amount of traffic. Another example is caching and filtering,
which reduce the amount of data that needs to be transmitted.
Device-level, equipment-level, and network-level energy-saving approaches that consider both
improving energy efficiency and reducing inessential traffic, are key factors of energy saving in
FNs.
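As an illustration only (not part of the Recommendation), the equipment-level smart sleep mechanism mentioned above can be sketched as follows; the utilization threshold and per-state power figures are assumed values chosen purely for the example:

```python
# Hypothetical sketch: a networking node enters a low-power state whenever
# its utilization in an interval falls below a threshold, and is active
# otherwise. Compare planned energy against an always-active baseline.

SLEEP_THRESHOLD = 0.05           # assumed: sleep below 5% utilization
ACTIVE_W, SLEEP_W = 300.0, 30.0  # assumed power draw per state, in watts

def plan_states(utilization_samples, threshold=SLEEP_THRESHOLD):
    """Return a per-interval state plan ('active' or 'sleep')."""
    return ["sleep" if u < threshold else "active" for u in utilization_samples]

def energy_wh(states, interval_hours=1.0):
    """Energy for the plan versus an always-active baseline."""
    draw = {"active": ACTIVE_W, "sleep": SLEEP_W}
    planned = sum(draw[s] for s in states) * interval_hours
    baseline = ACTIVE_W * len(states) * interval_hours
    return planned, baseline

states = plan_states([0.40, 0.02, 0.01, 0.30])
planned, baseline = energy_wh(states)
```

A real equipment-level policy would also account for wake-up latency and standby power, which this sketch deliberately omits.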
8.6 Service universalization
FNs are recommended to facilitate and accelerate the provision of facilities in different areas, such
as towns or countryside, and developed or developing countries, by reducing the lifecycle costs of
the network and through open network principles.

Rationale: Existing network environments still impose high entry barriers, both for manufacturers to
develop equipment, and for operators to offer services. In this sense, FNs should enhance
universalization of telecommunication services, facilitating the development and deployment of
networks and provision of services.
To that purpose, FNs should support openness through global standards and simple design
principles in order to reduce the lifecycle costs of the network, particularly development,
deployment, operation, and management costs, thereby reducing the so-called digital divide.
8.7 Economic incentives
FNs are recommended to be designed to provide a sustainable competitive environment for solving
tussles among the range of participants in the ICT/telecommunication ecosystem – such as users,
various providers, governments, and IPR holders – by providing a proper economic incentive.
Rationale: Many technologies have failed to be deployed, to flourish, or to be sustainable because
of inadequate or inappropriate decisions by the architect concerning intrinsic economic or social
aspects (e.g., contention among participants), or because of a lack of attention to surrounding
conditions (e.g., competing technologies) or incentives (e.g., open interfaces). Such failures have
sometimes occurred because the technologies did not provide mechanisms to stimulate fair
competition.
One example of this is the lack, in the initial IP network implementation, of the QoS mechanisms
needed by real-time services such as video streaming. The IP layer did not provide a means for its
upper layer to know whether QoS was guaranteed from end to end. Initial IP network implementations
also lacked proper economic incentives for network providers to implement such mechanisms. These
are some of the reasons that have created obstacles to the introduction of QoS guarantee mechanisms
and streaming services in IP networks, even when telecommunications ecosystem participants have
tried to customize networks, or have asked others to provide customized networks, in order to start
a new service and share its benefits.
Sufficient attention, therefore, needs to be given to economic and social aspects, such as economic
incentives in designing and implementing the requirements, architecture, and protocol of FNs, in
order to provide the various participants with a sustainable, competitive environment.
Ways of resolving economic conflicts, including tussles in cyberspace, that provide economic
reward for each participant's contribution are becoming increasingly important [b-Clark]. The use of
networks is considered to be a means of producing economic incentives in various fields, as the
Internet grows and brings together diverse social functionalities. Different Internet participants
often pursue conflicting interests, which has led to conflicts over the Internet and to controversy
in international and domestic regulation issues.
8.8 Network management
FNs are recommended to be able to efficiently operate, maintain, and provision the increasing
number of services and entities. In particular, FNs are recommended to be able to process massive
amounts of management data and information, and to then efficiently and effectively transform
these data into relevant information and knowledge for the operator.
Rationale: The number of services and entities that the network must handle is increasing. Mobility
and wireless technology have become essential aspects of networks. Requirements on security and
privacy need to adjust to expanding applications, and regulations are becoming increasingly
complicated. Moreover, the integration of data collection and processing capabilities driven by the
Internet of Things, smart grids, cloud computing, and other developments is introducing
non-traditional network equipment into networks, causing a proliferation of network management
objectives that further complicates evaluation criteria. Thus, effective support for operators is
essential in future networks.

One problem facing current networks is that economic considerations have caused operation and
management systems to be designed specifically for each network component. Because the
proliferation of unorganized, disorderly management functionality increases complexity and
operational costs, FNs should provide highly efficient operation and management systems through
more integrated management interfaces.
The other problem is that current network operation and management systems largely depend on
network operators' skills. A major difficulty therefore lies in making network management tasks
easier and in preserving workers' knowledge. In the process of network management and operation,
tasks that require human skill will remain, such as high-level decisions based on years of
accumulated experience. For these tasks, it is important that even a novice operator without special
skills can manage large-scale and complicated networks easily, with the support of automation. At
the same time, effective transfer of knowledge and know-how between generations should also be
considered.
8.9 Mobility
FNs are recommended to provide mobility that facilitates high-speed and large-scale networks in an
environment where a huge number of nodes can dynamically move across heterogeneous networks.
FNs are recommended to support mobile services irrespective of node mobility capability.
Rationale: Mobile networks are continuously evolving by incorporating new technologies. Future
mobile networks are therefore expected to include various heterogeneous networks, ranging from
macrocells to micro-, pico-, and even femtocells, and diverse types of nodes equipped with a variety
of access technologies, because a single access network cannot provide ubiquitous coverage and
continuously high-quality communications for a huge number of nodes. On the other hand, existing
mobile networks, such as cellular networks, have been designed from a centralized perspective, and
the main signalling functionalities regarding mobility are located in the core network. This
approach may limit operational efficiency, because all signalling traffic is handled by centralized
systems, giving rise to scalability and performance issues. From this perspective, a highly scalable
architecture for distributed access nodes, mechanisms for operators to manage distributed mobile
networks, and optimized routes for application data and signalling data should be supported in
future networks.
Since a distributed mobile network architecture eases the deployment of new access technologies,
by flexibly locating mobility functionalities at the access level, and optimizes mobility through
short-distance backhauling and high-speed networks, it is the key to providing mobility in future
networks.
Technologies providing mobility services irrespective of a node's capability exist. However, such
services are not easy to provide when the node has limited capability, such as a sensor. How to
provide mobility universally should therefore be considered in FNs.
8.10 Optimization
FNs are recommended to provide sufficient performance by optimizing network equipment capacity
based on service requirement and user demand. FNs are recommended to perform various
optimizations within the network with consideration of various physical limitations of network
equipment.

Rationale: The spread of broadband access will encourage the appearance of various services with
different characteristics and will further widen the variety of requirements, such as bandwidth and
delay, among services. Current networks have been designed to meet the highest level of requirement
for the services with the maximum number of users, so the transmission capacity of the equipment
provisioned for these services is usually over-specified for most users and services. If this model
is sustained while user demand increases, network equipment will in the future face various physical
limitations, such as the transmission capacity of optical fibre, the operational frequency of
electrical devices, etc.
For this reason, FNs should optimize the capacity of network equipment, and also perform
optimizations within the network with consideration of the various physical limitations of network
equipment.
8.11 Identification
FNs are recommended to provide a new identification structure that can effectively support mobility
and data access in a scalable manner.
Rationale: Mobility and data access are design goals of FNs. Both features require a provision for
efficient and scalable identification (and naming) [ITU-T F.851] of a great number of network
communication objects (hosts and data). Current IP networks use IP addresses for host
identification. These are in fact host locators that depend on the points of attachment with the
network. As the host moves, its identifier (ID) [ITU-T Y.2091] changes, resulting in broken
communication sessions. Cell phones conceal this problem by managing the mobility issues in
lower layers, but when the lower layer fails to handle this, e.g., because of the access networks'
heterogeneity, this problem re-emerges. Similarly, there are no widely used IDs for the
identification of data. FNs should therefore solve these issues by defining a new identification
structure for efficient networking among hosts and data. They should provide dynamic mapping
between data and host IDs, as well as dynamic mapping of these IDs to host locators.
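As an illustration only (hypothetical structure, not defined by the Recommendation), the ID/locator split and dynamic mappings described above can be sketched as two small tables: sessions bind to stable host IDs, and a mapping system resolves an ID to the host's current locator, so a host can move without breaking the session:

```python
# Hypothetical sketch of the two dynamic mappings: data ID -> host ID,
# and host ID -> current locator. Only the second changes when a host moves.

data_to_host = {"movie-42": "host-A"}          # data ID -> host ID
host_to_locator = {"host-A": "198.51.100.7"}   # host ID -> current locator

def resolve(data_id):
    """Resolve a data ID to the hosting ID and its current locator."""
    host_id = data_to_host[data_id]
    return host_id, host_to_locator[host_id]

session_peer, loc1 = resolve("movie-42")   # session bound to the stable host ID
host_to_locator["host-A"] = "203.0.113.9"  # host moves: only the mapping changes
peer_after_move, loc2 = resolve("movie-42")
```

Because the session is bound to `host-A` rather than to a locator, the identifier seen by the communication session is unchanged after the move, while the locator returned by resolution is updated.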
8.12 Reliability and security
FNs are recommended to be designed, operated, and evolved with reliability and resilience,
considering challenging conditions. FNs are recommended to be designed for safety and privacy of
their users.
Rationale: Since FNs should serve as essential infrastructures supporting human social activity, they
should also support any type of mission critical services, such as intelligent traffic management
(road-, rail-, air-, marine- and space traffic), smart-grids, e-health, e-security, and emergency
telecommunications (ET) [ITU-T Y.2205] with integrity and reliability. Communication devices are
used to ensure human safety and support automation of human activities (driving, flying, office-
home control, medical inspection and supervision, etc.). This becomes extremely important in
disaster situations (natural disasters, e.g., earthquake, tsunamis, hurricanes, military or other
confrontations, large traffic accidents, etc.). Certain emergency response services (e.g.,
individual-to-authority) may also require priority access for authorized users, priority treatment
of emergency traffic, network device identification, and time and location stamping, including the
associated accuracy information, which would dramatically improve the quality of service.
All users have to be able to place justifiable trust in FNs to provide an acceptable level of
service, even in the face of various faults and challenges to normal operation. This ability of an
FN is called resilience, which is characterized by two features: trustworthiness (how readily trust
can be placed in a system) and challenge tolerance. Trust can be gained from the assurance that the
FNs will perform as expected with respect to dependability and security. The trustworthiness of a system
is threatened by a large set of challenges, including natural faults (e.g., aging of hardware),
large-scale disasters (natural or man-made), attacks (real-world or cyber-based), mis-configurations,
unusual but legitimate traffic, and environmental challenges (especially in wireless networks).

Challenge tolerance disciplines deal with the design and engineering of FNs so that they can
continue to provide service in the face of challenges. Its sub-disciplines are survivability,
disruption tolerance, and traffic tolerance, which together denote the capability of a system to
fulfil its mission, in a timely manner, in the presence of such challenges.
FNs are characterized by virtualization and mobility, and also by extensive data and services.
Security for networks with these characteristics requires multi-level access control (assurance of
user identification, authentication, and authorization), in addition to existing security
requirements such as [ITU-T Y.2701]. This includes protecting users' online identity and reputation,
as well as providing users with the ability to control unsolicited communications. FNs should
provide a safe online environment for everyone, in particular for children, disabled people, and
minority groups.
9 Target date and migration
In this Recommendation, the description of FNs is based on the assumption that trial services and
phased deployment of future networks supporting the above objectives and design goals will fall
approximately between 2015 and 2020. This estimate is based on two factors: the first is the
status of current and evolving technologies that would be employed in the experimentation and
development of FNs; the second is that any novel development that might take place well beyond
that estimated date is speculative.
This target date does not mean that an entire network will change within that timeframe, but that
parts of a network are expected to evolve. Evolution and migration strategies may be employed to
accommodate emerging and future network technologies. Such evolution and migration scenarios
are topics for further study.


Appendix I

Technologies for achieving the design goals
(This appendix does not form an integral part of this Recommendation.)
This appendix describes some of the technologies emerging from recent research efforts. These
technologies are likely to be used as enabling technologies for FNs and may play an important role
in their development. The title of each clause shows the technology name and the design goal that is
most relevant to the technology, to show the relevance to the main body of this Recommendation. It
should be noted that a technology may relate to multiple design goals. For example, network
virtualization relates deeply not only to virtualization of resources, but also to service
diversity, functional flexibility, network management, and reliability and security. The clause
title shows the most relevant design goal.
I.1 Network virtualization (virtualization of resources)
FNs should provide a broad range of applications, services, and network architectures. Network
virtualization is a key technology supporting this. Network virtualization enables the creation of
logically isolated network partitions over a shared physical network infrastructure so that multiple
heterogeneous virtual networks can simultaneously coexist over the infrastructure. It also allows
aggregation of multiple resources and makes the aggregated resources appear as a single resource.
The detailed definition and framework of network virtualization are described in
[b-ITU-T FG-FN NWvirt].
Users of logically isolated network partitions can programme network elements by leveraging
programmability that enables users to dynamically import and reconfigure newly invented
technologies into virtualized equipment (e.g., routers/switches) in the network. Network
virtualization also supports federation of networks, so that multiple network infrastructures can be
operated as part of a single network, even though they are geographically dispersed and managed by
different providers. Supporting programmability and federation requires support of the dynamic
movement of logical network elements, services, and capabilities among the logically isolated
network partitions. In other words, it is possible to remove a service or element from one network
partition and re-offer it in a different, logically isolated partition, in order to provide a continued
service or connection to the end users or other providers. By doing so, the end users or other
providers can locate and access such remote services and elements.
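The federation property above can be sketched as follows. This is an illustration only, with entirely hypothetical class and method names; it shows partitions hosted by different providers being operated as one logical network, with a service removed from one partition and re-offered in another:

```python
# Hypothetical sketch: geographically dispersed partitions, managed by
# different providers, operated as a single federated network.

class Partition:
    def __init__(self, provider):
        self.provider = provider
        self.services = set()

class FederatedNetwork:
    """Operates multiple partitions as one logical network."""
    def __init__(self, partitions):
        self.partitions = partitions

    def locate(self, service):
        # End users or other providers can locate a remote service
        # without knowing which partition currently hosts it.
        for p in self.partitions:
            if service in p.services:
                return p.provider
        return None

    def move(self, service, dst_provider):
        # Remove the service from its current partition and re-offer it
        # in a different, logically isolated partition.
        src = next(p for p in self.partitions if service in p.services)
        dst = next(p for p in self.partitions if p.provider == dst_provider)
        src.services.discard(service)
        dst.services.add(service)

fed = FederatedNetwork([Partition("provider-A"), Partition("provider-B")])
fed.partitions[0].services.add("video-cache")
fed.move("video-cache", "provider-B")   # continued service from a new partition
```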
I.2 Data/content-oriented networking (data access)
The explosive growth of the world wide web has led to the distribution of large volumes of digital content such as text, pictures, audio, and video. A large portion of Internet traffic is derived from this content. Therefore, several networking methods focusing on
content distribution have been proposed. These include the so-called content distribution networks
(CDNs) [ITU-T Y.2019] and peer-to-peer (P2P) networking for content sharing.
In addition, some novel approaches specializing in data content handling have been proposed from
the perspective of network usage [b-CCNX], [b-Jacobson] and [b-NAMED DATA]. They are
distinguished from existing networks in the concepts of addressing, routing, security mechanism,
and so on. While the routing mechanism of current networks depends on 'location' (IP address or
host name), the new routing method is based on the name of data/content and the data/content may
be stored in multiple physical locations with a network-wide caching mechanism. As for security
issues, there have been proposals in which all data/contents carry a public-key signature and can thus prove their authenticity. Other research emphasizes naming and name resolution of data in the network [b-Koponen]. Some approaches assume overlay implementation using existing IP networks, and
others assume a new implementation base in a clean-slate manner.
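A minimal sketch (not part of the Recommendation; names are hypothetical) of routing by content name with on-path caching, so that the same named object ends up stored in multiple physical locations:

```python
class ContentRouter:
    """Routes requests by content name, not by host location."""
    def __init__(self, name, next_hop=None):
        self.name = name
        self.cache = {}           # content name -> data (network-wide caching)
        self.next_hop = next_hop  # towards the content origin

    def get(self, content_name):
        # Serve locally if the named object is stored here...
        if content_name in self.cache:
            return self.cache[content_name], self.name
        if self.next_hop is None:
            raise KeyError(content_name)
        # ...otherwise forward by name and cache the reply on the way back,
        # so subsequent requests are served from a closer physical location.
        data, served_by = self.next_hop.get(content_name)
        self.cache[content_name] = data
        return data, served_by

origin = ContentRouter("origin")
origin.cache["/movies/trailer"] = b"mpeg-data"
edge = ContentRouter("edge", next_hop=origin)

data1, from1 = edge.get("/movies/trailer")   # first request reaches the origin
data2, from2 = edge.get("/movies/trailer")   # now served from the edge cache
```

Note that the requester names only the content ("/movies/trailer"), never an IP address or host name; where the bits come from is decided inside the network.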

Rec. ITU-T Y.3001 (05/2011) 13
There are a couple of research projects that propose a new paradigm called "publish/subscribe
(pub/sub) networking" [b-Sarela] and [b-PSIRP]. In pub/sub networking, data senders "publish"
what they want to send and receivers "subscribe" to the publications that they want to receive.
There are other research activities which are trying to create new network architectures for contents/data, based on new information and information management models; see [b-NETINF] and [b-Dannewitz].
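The pub/sub model above can be sketched as follows (illustrative only; class and topic names are invented): senders publish to a topic, and data is delivered to whoever has subscribed, decoupling senders from receiver addresses.

```python
class PubSubBroker:
    """Rendezvous between publications and subscriptions by topic."""
    def __init__(self):
        self.subscriptions = {}   # topic -> list of receiver callbacks

    def subscribe(self, topic, receiver):
        # Receivers declare what they want to receive.
        self.subscriptions.setdefault(topic, []).append(receiver)

    def publish(self, topic, data):
        # Senders declare what they want to send; delivery is decoupled
        # from any receiver address or location.
        for receiver in self.subscriptions.get(topic, []):
            receiver(data)

broker = PubSubBroker()
inbox = []
broker.subscribe("weather/zurich", inbox.append)
broker.publish("weather/zurich", "sunny")
broker.publish("weather/geneva", "rain")   # no subscriber: not delivered
```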
I.3 Energy-saving of networks (energy consumption)
Reduction of energy consumption is extremely important with regard to environmental awareness
and network operation. This includes a variety of device-level, equipment-level, and network-level
technologies [b-Gupa]. Each technology, whether at the same or different levels, should not work
independently, but should cooperate with the others and provide a total solution that minimizes total
energy consumption.
Energy-saving of networks has the following three promising areas:
– Forward traffic with less power
Existing data transmission is usually carried out with power-consuming devices and
equipment, and their energy consumption depends mainly on their transmission rate.
Energy-saving technologies make it possible to achieve the same rate with less power, using low-power devices/equipment, photonic switching, lightweight protocols, and so on [b-Baliga2007], thus reducing the energy consumed per bit transmitted.
– Control device/equipment operation for traffic dynamics
Existing network devices or systems continually operate at full specification and full speed.
In contrast, networks with energy-saving technologies will control operations based on
the traffic, using methods such as sleep mode control, dynamic voltage scaling, and
dynamic clock operation technique [b-Chabarek]. This reduces the total energy
consumption needed.
– Satisfy customer requests with minimum traffic
Existing networks typically have not paid attention to the total amount of traffic needed to satisfy customer requests. Networks with energy-saving technologies, however, will satisfy
requests with minimum traffic. That is, they can reduce inessential or invalid traffic such as
excessive keep-alive messages or duplicated user messages, by using multicasting, filtering,
caching, redirecting, and so on. They reduce traffic and hence reduce the total energy
consumption needed.
Based on these characteristics, energy-saving of networks can reduce total power consumption, and
serve as a solution to environmental issues from a network perspective. A newly implemented
service may increase energy consumption, but networks with energy-saving technologies can
mitigate this increase. Compared with cases having no energy-saving technologies, overall energy
consumption may even be reduced.
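The second area above, controlling device/equipment operation for traffic dynamics, can be sketched as a simple mode-selection rule in the spirit of sleep-mode and adaptive-link-rate control. The mode table and all figures below are purely illustrative:

```python
# Operating modes (rate in Mbit/s, power in watts), ordered by increasing
# power. "sleep" carries no traffic. All figures are invented.
MODES = [("sleep", 0, 1.0), ("low", 100, 4.0), ("full", 1000, 10.0)]

def select_mode(offered_mbps):
    """Pick the lowest-power mode whose rate still covers the offered load."""
    for name, rate, power in MODES:
        if rate >= offered_mbps:
            return name, power
    # Offered load exceeds even the full rate: run at full specification.
    return MODES[-1][0], MODES[-1][2]

mode, watts = select_mode(80)   # light traffic -> low-rate, low-power mode
```

The contrast with existing equipment is that the operating point follows the traffic: an idle device sleeps at 1 W instead of continually burning 10 W at full speed.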
I.4 In-system network management (network management)
Due to limitations of today's network management operations, a new decentralized network
management approach, called in-system management, is being developed [b-MANA], and
[b-UniverSELF]. In-system management employs decentralization, self-organization, autonomy,
and autonomicity as its basic enabling concepts. The idea is that, contrary to the legacy approach,
the management tasks are embedded in the network and, as such, empower the network to control
complexity. The FN as a managed system now executes management functions on its own. The
following are features of in-system management for FNs.
In the future, networks will be large-scale and complicated in order to support various services with different characteristics, such as bandwidth and QoS, so network infrastructure and network service management will become more complicated and difficult. Various approaches have previously been proposed for standardizing the network management system by defining a common interface for the operation system, such as the service-oriented architecture (SOA) concept, but these have not been put into operation due to problems such as cost. The situation will worsen in the future due to the proliferation of different management systems caused by increasing services, so high-efficiency operations and management technologies are needed. Also, because current network operations and management depend mainly on the skills of the network manager, facilitating network management tasks and passing on workers' knowledge are significant problems.
There are two candidate functions to achieve these goals. The first is a unified operation and management system, from the perspective of highly efficient management; the second is a sophisticated control interface together with a system for inheriting operator knowledge and know-how, so that networks can be operated and managed by lower-skilled operators. Candidates for FNs to achieve these goals are listed below:
a) Common interface for operation and management [b-TMF NGOSS] and [b-Nishikawa]
This provides highly efficient operation and management that adapts to all network systems providing different services. Database technology is the key to automatically migrating old system data, containing user and infrastructure information, to the new system.
b) Sophisticated control interface and inheritance system of operator knowledge and
know-how [b-Kipler] and [b-Kubo].
In order to make network control and management of various network systems and services easier
for operators without special skills, FN operation systems should have autonomous control and self-
stabilizing mechanisms. Sophisticated and friendly control interfaces will also help in some network
operation and management tasks. One viable approach is "visualization" of various network
statuses, as follows:
– Visualization of system management (software-level technology)
Network visualization technology supports the work of the system administrator and
improves work efficiency by easily visualizing the state of the network. Visualization
technology includes monitoring of networks, fault localization, and network system
automation.
– Visualization of infrastructure management (hardware-level technology)
Hardware-based visualization technology is also efficient for supporting field engineers.
This includes monitoring of fibre and states of communications, fault localization, and fibre
identification. It also makes it easy to identify the location of the failure, particularly if it is
on the network side or in user devices, which reduces maintenance costs.
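The autonomous, self-stabilizing behaviour described in this clause can be sketched as a minimal monitor-analyse-act loop embedded in a node itself, rather than in a central operations centre. The class, thresholds, and actions below are purely illustrative:

```python
class AutonomicNode:
    """A node that manages itself: monitor -> analyse -> act, locally."""
    def __init__(self, name, overload_threshold=0.8):
        self.name = name
        self.threshold = overload_threshold
        self.load = 0.0
        self.log = []   # visualization: machine-readable status history

    def observe(self, load):
        # Monitor: the management task is embedded in the node itself.
        self.load = load
        self._analyse_and_act()

    def _analyse_and_act(self):
        # Act: self-stabilize without waiting for a human operator.
        if self.load > self.threshold:
            self.log.append(f"{self.name}: overload, shedding traffic")
            self.load = self.threshold
        else:
            self.log.append(f"{self.name}: nominal")

node = AutonomicNode("edge-1")
node.observe(0.5)    # within limits: no intervention needed
node.observe(0.95)   # overload: the node corrects itself locally
```

The status log doubles as a simple form of the "visualization" discussed above: a lower-skilled operator sees what the network decided, without having to make the decision.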
I.5 Network optimization (optimization)
The appearance of new services will increase the bandwidth required by many users, while others
will remain satisfied with the current bandwidth, which widens the variety of bandwidth
requirements among users. Current networks have been designed to meet maximum user needs and
the capacity of the equipment is over-specified for most services. Network equipment in the future
will face various physical limitations such as capacity of optical fibre, operation frequency of
optical and electrical devices, and power consumption. Future networks should therefore be designed to improve effectiveness of use by providing optimal (i.e., not over-abundant) capabilities for user needs.
Three promising areas can address the above issues: device-level optimization, system-level
optimization, and network-level optimization.

a) Device-level optimization [b-Kimura]
This operation rate optimization technique, composed of an optical layer, electrical layer,
and hybrid optical/electrical layer, provides the minimum needed bandwidth for services
and applications.
b) System-level optimization [b-Gunaratne]
Though encrypting all data in networks is the ultimate solution against security threats, data
are currently selectively encrypted via higher layer functions, and higher layers are too slow
to encrypt everything. Optimizing security mechanisms, i.e., concentrating encryption
functions in lower-layer processing (physical layer processing technique such as optical
code division multiplexing (OCDM) transmission technology), and stopping higher-layer
encryption, would enable high security to be achieved at the same time as low latency and
power efficiency.
c) Network-level optimization [b-Iiyama]
This form of optimization tackles problems such as the physical limitation of optical fibre
capacity and operation frequency of electrical devices by changing the traffic flows
themselves. The technique also offers potentially higher utilization of network resources
such as network paths or equipment.
– Path optimization
Current networks, which carry services such as text or voice, cannot evolve into high-speed, large-capacity, low-latency end-to-end (E2E) all-optical networks due to economic, technical, and other such problems. The path optimization technique provides an optimized path that considers service characteristics and the traffic conditions of the transmission route. It can also synchronize data sent over different paths, enabling information consisting of multiple data streams with different characteristics to be sent over different paths. Combined with operation rate optimization, low- to very-high-speed data transmission can be achieved in a single network, enabling easy operation and improved effectiveness at the same time.
– Network topology optimization
This technology optimizes upper-layer (e.g., packet layer) network topology using not
only upper-layer information, such as geographical distribution of users' traffic
demands, but also topology information of underlying lower-layer (e.g., optical layer)
networks.
– Accommodation point optimization
In current networks, every service is transmitted on the same access line; therefore, an
access point accommodates all services for a user. This decreases accommodation
efficiency because each service has different characteristics such as bandwidth, latency,
and usability. The accommodation point optimization technique provides high
accommodation efficiency and flexible accommodation that enables optimization of the
accommodation point considering, for instance, the possible transmission distance for
each service, which fully uses the advantage of optical technologies and long-distance
transmission.
– Cache and storage optimization
Distributing diverse contents in an efficient manner, improving QoS at lower cost, is a challenge for future networks. The use of storage and caching capabilities allows contents to be distributed and delivered as close as possible to the end users, thus optimizing network performance and improving the end users' quality of experience (QoE).

– Computing optimization
The computing capabilities provided by the network allow the end users (principally
enterprises) to deploy and run computing tasks (software applications, including
optimization aspects). Distributed computing capabilities inside the network allow
more flexible use of the network and improve both service and network performance.
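As a rough illustration of network-level optimization (not part of the Recommendation), the cache and storage optimization above can be framed as choosing the node that minimizes the total demand-weighted distance to end users. The topology and all figures below are invented for the example:

```python
# Hop distance from each candidate cache location to each user site,
# and the request rate (demand) at each site. All figures are invented.
DIST = {
    "core":   {"site-a": 3, "site-b": 3},
    "edge-a": {"site-a": 1, "site-b": 4},
    "edge-b": {"site-a": 4, "site-b": 1},
}
DEMAND = {"site-a": 100, "site-b": 20}   # requests per second

def best_cache_node(dist, demand):
    """Place the cache where the total demand-weighted distance is lowest."""
    def cost(node):
        return sum(demand[site] * dist[node][site] for site in demand)
    return min(dist, key=cost)

# Most requests come from site-a, so content is cached at its edge node.
placement = best_cache_node(DIST, DEMAND)
```

The same cost-minimization shape fits the other optimizations in this clause (path, topology, accommodation point): enumerate the feasible placements or routes, score each against measured demand, and pick the minimum.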
I.6 Distributed mobile networking (mobility)
In current networks, main functions, such as physical mobility management, authentication, and application servers, are installed in centralized systems or the mobile core network. This causes problems with scalability and performance, and creates single points of failure and bottlenecks.
A small and portable wireless access node with distribution of network functions, including
mobility functions, has been attracting broad attention as an alternative access method, especially
for residential and enterprise deployment [b-Chiba]. In this distributed architecture, the mobility
events and data paths can be managed and anchored as closely as possible to the terminals to
prevent scalability and performance issues. Single-point-of-failure and bottleneck issues can also be contained, since only a small number of terminals are managed at each access node at the network edge.
By flexibly locating functionalities, which have conventionally resided in the mobile core network,
at any part of the network in a distributed fashion, a highly efficient and scalable mobile network
can be realized. Thus, unlike the current mobile network, distributed mobile networking can:
– localize and optimize the signalling and data paths;
– enable the network administrator to control the signalling and data path;
– locate the functional entities (e.g., mobility management) anywhere in the network (both in
the mobile core and access networks);
– provide the discovery function (network resources and devices) of the connected devices in
both centralized and distributed fashions;
– connect devices not fully capable of mobility and/or security without degrading those
features.
By supporting the above functionalities, distributed mobile networking can provide always-on,
always-best connected access with guaranteed end-to-end services.
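A minimal sketch of the distributed anchoring idea (not part of the Recommendation; names and the one-dimensional geometry are invented): mobility state follows the terminal to the nearest access node, localizing signalling and data paths instead of hauling them through a central core.

```python
class AccessNode:
    def __init__(self, name, position):
        self.name = name
        self.position = position       # 1-D position, for illustration only
        self.anchored = set()          # terminals anchored at this node

def anchor_terminal(terminal, position, access_nodes):
    """Anchor mobility state at the access node nearest the terminal."""
    nearest = min(access_nodes, key=lambda n: abs(n.position - position))
    for node in access_nodes:          # a terminal has at most one anchor
        node.anchored.discard(terminal)
    nearest.anchored.add(terminal)
    return nearest.name

nodes = [AccessNode("ap-1", 0.0), AccessNode("ap-2", 10.0)]
first = anchor_terminal("phone", 2.0, nodes)    # anchored at nearby ap-1
second = anchor_terminal("phone", 9.0, nodes)   # moved: re-anchored at ap-2
```

Because each node anchors only the terminals currently near it, a failure or overload affects a small population rather than every subscriber behind one core gateway.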


Bibliography

[b-ITU-T FG-FN Energy] ITU-T Focus Group on Future Networks FG-FN-OD-74 (2010),
Overview of Energy-Saving of Networks, December.
[b-ITU-T FG-FN NWvirt] ITU-T Focus Group on Future Networks FG-FN-OD-73 (2010),
Framework of Network Virtualization, December.
[b-Anderson] Anderson, T., Peterson, L., Shenker, S., and Turner, J. (2005),
Overcoming the Internet impasse through virtualization, Computer,
IEEE Computer Society, Vol. 38, No. 4, pp. 34-41.
[b-Baliga2007] Baliga, J., et al. (2007), Photonic Switching and the Energy
Bottleneck, Proc. IEEE Photonics in Switching, August.
[b-Bohl] Bohl, O., Manouchehri, S., and Winand, U. (2007), Mobile
information systems for the private everyday life, Mobile Information
Systems, December.
[b-CCNX] Project CCNx (Content-Centric Networking).
<http://www.ccnx.org/>
[b-Chabarek] Chabarek, J., et al. (2008), Power Awareness in Network Design and
Routing, in Proc. IEEE INFOCOM'08, April.
[b-Chiba] Chiba, T., and Yokota H. (2009), Efficient Route Optimization
Methods for Femtocell-based All IP Networks, WiMob'09, October.
[b-Clark] Clark, D., Wroclawski, J., Sollins, K., and Braden, R. (2005), Tussle
in Cyberspace: Defining Tomorrow's Internet, IEEE/ACM
Transactions on Networking, Vol. 13, No. 3, June.
[b-Dannewitz] Dannewitz, C. (2009), NetInf: An Information-Centric Design for the
Future Internet, in Proc. 3rd GI/ITG KuVS Workshop on The Future
Internet, May.
[b-EC FI] European Commission, Information Society and Media Directorate-
General (2009), Future Internet 2020: Visions of an Industry Expert
Group, May.
<http://www.future-internet.eu/fileadmin/documents/reports/FI_Panel_Report_v3.1_Final.pdf>
[b-Gunaratne] Gunaratne, C. et al. (2008), Reducing the energy consumption of
Ethernet with adaptive link rate (ALR), IEEE Trans. Computers,
Vol. 57, No. 4, pp. 448-461, April.
[b-Gupa] Gupta, M., and Singh, S. (2003), Greening of the Internet, Proc. ACM SIGCOMM'03, August.
[b-HIP] IETF Host Identity Protocol (HIP) Working Group.
<http://datatracker.ietf.org/wg/hip/>
[b-Iiyama] Iiyama, N., et al. (2010), A Novel WDM-based Optical Access
Network with High Energy Efficiency Using Elastic OLT, in Proc.
ONDM'2010, 2.2, February.
[b-Jacobson] Jacobson, V., et al. (2009), Networking Named Content,
CoNEXT 2009, Rome, December.

[b-Kafle] Kafle, V. P., and Inoue, M. (2010), HIMALIS: Heterogeneous
Inclusion and Mobility Adaption through Locator ID Separation in
New Generation Network, IEICE Transactions on Communications,
Vol. E93-B No. 3, pp.478-489, March.
[b-Kimura] Kimura, H., et al. (2010), A Dynamic Clock Operation Technique for
Drastic Power Reduction in WDM-based Dynamic Optical Network
Architecture, in Proc. S07-3, World Telecommunication Congress
(WTC).
[b-Kipler] Kilper, D. C., et al. (2004), Optical Performance Monitoring,
J. Lightwave Technol., Vol. 22, pp. 294-304.
[b-Koponen] Koponen, T., Chawla, M., Chun, B., et al. (2007), A data-oriented
(and beyond) network architecture, ACM SIGCOMM Computer
Communication Review, Vol. 37, No. 4, pp. 181-192, October.
[b-Kubo] Kubo, T., et al. (2010), In-line monitoring technique with visible light
from 1.3µm-band SHG module for optical access systems, Optics
Express, Vol. 18, No. 3.
[b-LISP] IETF Locator/ID Separation Protocol (LISP) Working Group.
<http://datatracker.ietf.org/wg/lisp/>
[b-MANA] Galis, A., et al. (2008), Management and Service-aware Networking
Architectures (MANA) for Future Internet – Position Paper: System
Functions, Capabilities and Requirements, University of Twente,
December.
[b-NAMED DATA] Named Data Networking. <http://www.named-data.net/>
[b-NETINF] Network of Information (NetInf). <http://www.netinf.org/>
[b-NICT Vision] National Institute of Information and Communications Technology,
Strategic Headquarters for New Generation Network R&D (2009),
Diversity & Inclusion: Networking the Future Vision and Technology
Requirements for a New-generation Network, February.
[b-Nishikawa] Nishikawa, K., et al. (2009), Scenario Editing Method for Automatic
Client Manipulation System, Asia-Pacific Network Operations and
Management Symposium.
[b-PSIRP] Publish-subscribe Internet Routing Paradigm (PSIRP).
<http://www.psirp.org/>
[b-Sarela] Särelä, M., Rinta-aho, T., and Tarkoma, S. (2008), RTFM: Publish/Subscribe
Internetworking Architecture, ICT-Mobile Summit 2008 Conference
Proceedings, Paul Cunningham and Miriam Cunningham (Eds), IIMC
International Information Management Corporation.
[b-TMF NGOSS] Tele Management Forum GB930, The NGOSS approach to Business
Solutions (2005), Release 1.0.
[b-UniverSELF] UniverSelf, realizing autonomics for Future Networks.
<http://www.univerself-project.eu/>






Printed in Switzerland
Geneva, 2012
