
COMSATS UNIVERSITY ISLAMABAD

ABBOTTABAD CAMPUS

DEPARTMENT OF COMPUTER SCIENCE

SECTION: BCS-1B

LAB ASSIGNMENT – 1 (Introduction To ICT)

 PREPARED BY:
 FAHAD HASSAN
 FA22-BCS-078

 SUBMITTED TO:
 MA'AM FAIZA QAZI
LAB ASSIGNMENT NO. 1

By

Fahad Hassan
CIIT/FA22-BCS-078 /ATD

COMSATS University Islamabad


Abbottabad Campus - Pakistan
FALL-2022

TASK 1

1.1. Create Table of Contents for the following document.

Contents
CHAPTER 1............................................................................................................................ 5
INTRODUCTION..................................................................................................................... 5
1.1 Information Centric Networks (ICNs)..................................................................................5
1.2 Content Centric Networks (CCN).........................................................................................6
1.3 Difference between TCP/IP and CCN Communication Model.............................................6
1.4 CCNx Protocol...................................................................................................................... 7
1.4.1 CCNx Definitions............................................................................................................8
1.5 CCN Routing....................................................................................................................... 10
1.6 CCN Transport...................................................................................................................10
1.7 CCN Security......................................................................................................................10
1.8 CCN Caching......................................................................................................................11
1.9 Caching in Information Centric Fog-Computing.................................................................12
1.10 In-network caching Challenges........................................................................................12
1.11 Challenges in existing CCN content caching....................................................................12
1.12 Problem Statement.........................................................................................................13
1.13 Motivation.......................................................................................................................13
1.14 Thesis Organization.........................................................................................................14
CHAPTER 2.......................................................................................................................... 16
RELATED WORK................................................................................................................... 16
CHAPTER 3.......................................................................................................................... 20

SYSTEM ARCHITECTURE....................................................................................................... 20
3.1 Major Components............................................................................................................21
3.1.1 CCN-Fog Routers.........................................................................................................22
3.1.2 End Nodes: Source......................................................................................................22
3.2 CCN Node Architecture.....................................................................................................22
3.2.1 PIT............................................................................................................................... 22
3.2.2 FIB............................................................................................................................... 23
3.2.3 Content Store (CS).......................................................................................................23
CHAPTER 4.......................................................................................................................... 25
SYSTEM MODEL................................................................................................................... 25
4.1 Optimization Model for in-network caching......................................................................25
4.2 On-path Caching................................................................................................................27
CHAPTER 5.......................................................................................................................... 30
PERFORMANCE EVALUATION.............................................................................................. 30
5.1 Experimental Setup...........................................................................................................30
5.2 Performance parameters..................................................................................................32
5.2.1 Hit Ratio......................................................................................................................32
5.2.2 Latency........................................................................................................................ 33
5.2.3 Path Stretch................................................................................................................. 33
5.2.4 Link load...................................................................................................................... 33
5.3 Discussion on Latency performance..................................................................................33
5.3.2 Latency Performance for GEANT network topology using different content..............36
5.4 Discussion on Cache hit rate..............................................................................................36
5.5 Discussion on path stretch performance...........................................................................39
5.6 Link Load Performance Evaluation....................................................................................39
CHAPTER 6.......................................................................................................................... 43
CONCLUSION AND FUTURE WORK....................................................................................... 43

6.1 Conclusions discussion......................................................................................................44
6.2 Future Work......................................................................................................................44
CHAPTER 7.......................................................................................................................... 46
REFERENCES......................................................................................................................... 46
References.......................................................................................................................... 47

TASK 2

1.2. Create Table of Figures for the following document.

Figure 1: Interest and Data packets in CCN................................................................................................9


Figure 2: CCN based FOG architecture.....................................................................................................21
Figure 3: CCN based FOG node architecture...........................................................................................23
Figure 4: Latency performance of the Optimized Cache scheme using different popularity models.............35
Figure 5: Latency performance on the GEANT network using different content universes.......................37
Figure 6: Hit rate performance of the Optimized Cache at different popularity models.............................38
Figure 7: Path stretch performance of the Optimized Cache at different cache sizes and popularity models....40
Figure 8......................................................................................................................................41
Figure 9: Link load performance of the proposed scheme at low and high popularity models....................42
Figure 10: Link load performance of the proposed scheme at different popularity models with fixed cache size.............................................................................................................................................43

TASK 3

1.3. Create Table of Tables for the following document.

Table 1: Notations and their meanings......................................................................................................25


Table 2: Simulation setup..........................................................................................................................31
TASK 4

1.4. Customize the following equations.

a.

f(b) − f(a) = ∫_a^b [ (x + x³)/(4x + 2x³) + (ax + 4)/(3x − x²) ] dx

b.

∫_{−∞}^{+∞} √(αxⁿ + 1) / (β + γ) dx

TASK 5

1.5. Create the following table.

A1
C1 | D1 | E1
B1 | 0 | 1 | 3 | 4
B1 | 4 | 6 | 7 | 8 | F1
3 | 7 | 8 | 9
G1 | H1 | I1 | J1 | K1
L1 | M1 | N1 | O1 | P1

TABLE:

A1
C1 | D1 | E1
B1 | B1 | 0 | 1 | 2 | 3 | F1
4 | 5 | 6 | 5
7 | 8 | 9 | 0
G1 | H1 | I1 | J1 | K1
L1 | M1 | N1 | O1 | P1

TASK 6

1.6. Create and customize any of your own choice Graphs.

[Bar chart: "Weekly Report" — Theory, Lab, and Viva scores (scale 0 to 6) for Adnan, Awais, Fahad, and Kaif]

Note: Your graph should be properly labeled with genuine data.

CHAPTER 1
INTRODUCTION
This chapter introduces the key concepts and terms discussed in the thesis. It gives a brief overview
of Content Centric Networks (CCNs) and then discusses the terminology used in CCNs. Finally, the
problem addressed in our work is presented. The thesis outline and organization are also part of this
chapter.

1.1 Information Centric Networks (ICNs)


The current host-centric internet architecture was designed in the 1960s. Host-centric development
describes the sharing of physical resources: all conversations in the current framework are based on
point-to-point communication between named hosts. Today, people are more interested in content than
in its location. To get the requested content, however, it must be mapped to the machine hosting it: a
user has to visit a particular server to retrieve the content. In practice, multiple users interested in the
same content create load on the server, and the end-to-end communication established for each
requested content results in redundant traffic in the network.

ICNs put content first and try to solve the problems of the host-centric architecture. The host-centric
problem is resolved by assigning contents globally unique and location-independent names. A user
interested in a content item only specifies its name, and the network retrieves it from anywhere,
regardless of location. In this regard, the ICN approach differs from today's internet framework. To
cope with the evolution of the internet towards massive content distribution, many ICN architectures
have been introduced: TRIAD [1], DONA [2], PURSUIT [3], PSIRP [4], 4WARD [5], SAIL [6], CCN
[7] and NDN [8] are all ICN-based architectures.

1.2 Content Centric Networks (CCN)


CCN is a novel architecture shifting end-to-end communication to a content-centric infrastructure. The
new idea of routing named content is basically derived from the Internet Protocol (IP). CCN acquires
contents by name and decouples location from identity and access [7, 9]. The CCN architecture is
similar to the publish/subscribe service model.

CCN makes TCP/IP simpler, more durable, and more scalable. The Internet provides communication
between exactly two hosts: one asking for a resource and the other providing it. Both hosts have
identifiers (IP addresses) in IP packets, one for the source and the other for the destination. In the 50
years of packet networking, computing and storage have become cheap and ubiquitous commodities.
Internet connectivity and cheap storage enable access to an astonishing amount of data: in 2008 alone,
nearly 500 exabytes of data were created [1]. People value the internet for what it contains rather than
where content lies; hence, named data is a much better abstraction than named hosts. CCN is a novel
infrastructure that provides content delivery as a basic network characteristic [10].

Storage management and request routing are tightly coupled in the CCN transport protocol, providing
efficient use of resources in contrast to traditional Content Delivery Network (CDN) infrastructures.
CCN is a communication infrastructure built on named data; it aims at switching the address-based
internet infrastructure to a named-content-based one [11]. Content names, instead of network
addresses, are used to carry the information. The contents can reside in any caching node in the
network, and the requested data can be delivered by any caching node; therefore, data is not
necessarily connected with the content publisher.

The CCN architecture differs from traditional host-based communication frameworks in many aspects.
Network addresses are replaced with content names in CCN. To reduce bandwidth utilization, the
concept of in-network caching is introduced in CCN, and nodes are enabled with caching capabilities.
Therefore, depending on the caching algorithm, the requested content can be delivered by any CCN
caching node rather than the original source.

1.3 Difference between TCP/IP and CCN Communication Model


Rapid advances in mobile technology have changed people's perception of mobile phones. In the
beginning, mobile phones were only used to make calls. However, with high-speed, large-bandwidth
network technologies such as 3G, WiFi, and 4G, users can access the web from their mobile phones.
These devices have become an essential part of everyone's life and a primary source of digital content
generation. Users are now capable of creating and sharing their own content.

The usage pattern of the internet has become content oriented, while today the only way to retrieve
content is end-to-end communication. Consumers are interested in content rather than its location.
CCN is a networking architecture based on the principle that a communication framework should
allow a user to concentrate on data retrieval rather than the data's physical location. To improve
delivery speed and decrease content latency, CCN enables in-network caching. Configuration of a
CCN node is simple, and security is built into the network at the data level. Compared to the TCP/IP
communication framework, CCN has the following distinct features.

1. CCN is a receiver-centric communication framework where the receiver asks for information by
sending an Interest packet. In response to an Interest packet, at most one Data message is delivered.
2. The CCN model uses a hierarchical content naming scheme. Similar to URLs, content is given
hierarchical names instead of addressing specific hosts. Longest-prefix matching and forwarding
decisions are used to forward Interest packets.
3. CCN routers are cache-enabled and use their caches to serve future content requests.

With the above-mentioned features, CCN is expected to resolve the current internet's security,
mobility, and multi-path support issues. A minimal sketch of the two packet types follows.
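As an illustration of this receiver-driven exchange, here is a minimal Python sketch of the two packet types; the field names and types are our own assumptions for exposition, not the CCNx wire format.

```python
from dataclasses import dataclass

@dataclass
class Interest:
    name: str            # hierarchical content name, e.g. "/ccnx/videos/lecture1"
    nonce: int           # identifies individual Interest packets
    hop_count: int = 0   # incremented by each router en route to the source

@dataclass
class Data:
    name: str            # must satisfy the Interest name (longest-prefix rules)
    payload: bytes       # the content chunk itself
    signature: bytes     # publisher's cryptographic signature over the content
```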

1.4 CCNx Protocol


In CCN, content sharing is done through the CCNx protocol, which is the transport protocol for CCN.
Instead of connecting hosts to other hosts, CCNx efficiently delivers content. When delivering content
back to the user, the data packets may be cached at each CCNx router. Broadcast or multicast data
packet delivery makes efficient use of the network when many people are interested in the same
content. CCNx provides location-independent delivery services for named data packets. Applications
run CCNx over some lower-level communication service capable of transmitting data; the lower-layer
service may be a physical transport or another network or transport protocol. The CCNx protocol
supports a wide range of network applications: not only video and document files but also real-time
communication and delivery protocols. It carries conversations between hosts just like TCP. Multiple
applications are supported by leaving naming conventions to applications. CCNx provides end-to-end
communication between hosts; therefore, rather than being implemented as a separate layer, it is
integrated into application processing [12].

1.4.1 CCNx Definitions


Following are some definitions used in the CCNx protocol.

1.4.1.1 Message
A message is the CCNx packet. The term "message" is used to avoid confusion with the lower-layer
packet that may carry a CCNx message; a single lower-layer packet (e.g., a UDP packet) may contain
more than one CCNx message. CCNx message fields do not have fixed-length values. CCNx data
formats are defined by XML schemas and encoded with explicitly identified field boundaries. The
CCNx protocol is based on Interest and Data packets, as shown in Figure 1. The Interest message
requests content by name, and the Data packet supplies data in response to the corresponding Interest
packet. CCNx is a receiver-oriented communication protocol.

Figure 1: Interest and Data packets in CCN

A user sends an Interest packet for the desired data over any available connectivity. Any party
receiving the Interest and having data that matches it may transmit the matching content to the user.
Data can only be transmitted in response to an Interest that matches that data.

Interest messages can be multicast or broadcast in order to reach multiple potential sources of data
with minimal bandwidth cost. In response to a single received Interest message, at most one Content
message is transmitted. This one-to-one matching between Interest and Data messages avoids
consuming bandwidth to send data anywhere it is not wanted. Figure 1 describes Interest and Data
packets in CCN.

1.4.1.2 Party
A party is any entity in the network that uses the CCNx protocol for communication. Parties include
both machines and applications using the protocol.

1.4.1.3 Content Identification


Irrespective of the location of entities or machines, CCNx undertakes the transfer of contents by their
names. CCNx uses a hierarchical naming scheme, where names consist of a number of components.
The hierarchical structure of CCNx content names is similar to IP addresses, with arbitrary length of
names and name components. Unlike the classless framework of IP addresses, component division is
explicitly defined in CCNx. The CCNx protocol depends only on the hierarchical name structure, so
names may contain encrypted data or arbitrary binary data.

A CCNx name occasionally identifies a single data chunk, but typically a CCNx name describes a
collection of data by naming a point in the name tree under which there may be multiple data pieces.
Similar to a network address in the host addressing structure of the IP framework, where the network
address identifies the collection of hosts attached to that network, a name in CCNx identifies a
collection of data. Just as the IPv4 addressing scheme assigns a prefix of the IP address, the CCNx
name is the prefix of the name of every piece of content in the collection. For these reasons, a CCNx
name may be referred to simply as a prefix or name prefix.

1.4.1.4 URL Identification


URL representation is usually used to represent CCNx names for convenience. For example, the
HTTP URL https://www.ccnx.org/name/work/class/presentation.pdf can be represented as a
ccnx URL: ccnx://name/work/class/presentation.pdf.

1.4.1.5 XML Representation


In the CCNx protocol, there is no restriction on component byte sequences; therefore, representation
as XML hexBinary or base64Binary encoding may be needed. The name of a chunk of data contains,
as its final and most specific component, a digest component: a value derived from the data itself. The
digest component is redundant and is not transmitted, as it can be derived from the data.

1.5 CCN Routing


Name-based routing between source and destination is used in CCN to forward packets. For required
contents, the requester broadcasts an Interest packet into the network. At each intermediate router,
using longest-prefix match on the FIB, this Interest is forwarded towards the name prefix of the
destination. The PIT stores each incoming Interest's information, and multiple requests for the same
content are aggregated. If a copy of the requested content is found at any intermediate router, the
requested data is sent back along the reverse path to the requester. A minimal longest-prefix-match
lookup is sketched below.
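The following is a minimal sketch of longest-prefix matching over '/'-delimited hierarchical names; the FIB layout (a plain name-to-face dictionary) is an assumption for illustration, not the data structure of any particular CCNx implementation.

```python
# Minimal longest-prefix-match FIB lookup over '/'-delimited names.
fib = {
    "/ccnx": "face0",
    "/ccnx/videos": "face1",
}

def fib_lookup(fib, name):
    """Return the outgoing face for the longest matching name prefix."""
    components = name.strip("/").split("/")
    for i in range(len(components), 0, -1):     # try the longest prefix first
        prefix = "/" + "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return None

print(fib_lookup(fib, "/ccnx/videos/lecture1.mp4"))  # -> face1
```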

1.6 CCN Transport


No transport-layer functionality is provided by the CCN architecture. Applications, supporting
libraries, and the forwarding algorithm provide transport-layer functionality in CCN. The information
needed for transport is included in the content through the hierarchical naming scheme. Such an
infrastructure avoids the need for transport-layer features such as ports and sequence numbers. The
state of each outstanding Interest in the PIT is observed by the application itself, and retransmission is
initiated after a specific timeout. Each Interest packet has a limited lifetime to cope with congestion in
the network. Moreover, caching contents at intermediate nodes eases congestion damage, because
retransmitted packets can be fulfilled by an intermediate node from its cache.

1.7 CCN Security


Content publishers provide security in CCN by cryptographically signing each data packet. Better
routing scalability is provided by the hierarchical naming scheme. Data integrity is achieved by
signing content with the publisher's secret key; however, trust in the signing key is established through
external means. The publisher's key (PK) in CCN is not embedded in the content name, so
self-certification is not possible, although this helps with the human readability of names. Verification
of a key is done through multiple methods, including a global PKI, direct information, information
through a friend, or a trusted third party.

1.8 CCN Caching


The most striking feature of CCN that differentiates it from the current internet architecture is the
in-network caching mechanism. Although research on content caching has grown since the
introduction of the web, in-network caching further reduces response delay by embedding cache space
into the network. In-network caching has transformed the centralized caching technique into an
uncoordinated and decentralized environment [13]. Each CCN router encompasses a built-in memory
module to cache chunks passing by. CCN network equipment such as routers and gateways is
cache-enriched, instead of providing storage at the edge of the network as in P2P mechanisms or
stand-alone web cache proxies. The CCN architecture is dependent on in-network caching strategies,
and the efficiency of CCN nodes relies heavily on the performance of the caching strategy used [14].

The main advantage of caching in CCN is the reduction of cost in upstreaming and downstreaming of
data, contents, and Interests. Further caching benefits include traffic reduction, reduction of data
redundancy, and limiting bottleneck queuing. The advantages of in-place caching include efficient
bandwidth utilization, reduction of information waste, and minimization of information misuse.

The CS in every router caches content; this caching is analogous to buffer memory in IP routers, but IP
routers cannot reuse data packets after forwarding them. CCN, however, caches contents on
intermediate nodes, allowing a node to satisfy future requests for a particular content. The user is also
more secure, as a content name does not reveal any information about the user.

Beside all these potentials, the question of how and when to cache is very important [15]. Hence, a lot
of research on cache placement and replacement has been conducted. A content placement scheme
decides where an object should be cached along the delivery path across the routers.

1.9 Caching in Information Centric Fog-Computing


The use of CCN with fog computing provides not only processing but also caching at edge nodes.
Hence, the cloud can be better managed with paradigms such as Content Centric Networks and fog
computing. A CCN-fog infrastructure caches at edge nodes through fog computing, identifying objects
by names instead of IP addresses. Such a combination of ICN with fog will place everything residing
on the cloud closer to the user. The challenge of assigning IP addresses to all devices is minimized
through ICN naming, and information from the cloud is retrieved through in-network and off-network
caching. A promising feature of fog computing is the provision of processing at the leaf nodes (smart
devices, mobile devices) of the cloud. Fog is interoperable, providing off-network processing and
in-network caching through CCN. Fog-enabled CCN caching makes information dissemination faster,
with lower latency, less excessive bandwidth consumption, and reduced streaming time. CCN-Fog
takes a step further towards shorter latency, better mobility, and higher data communication efficiency
for fog computing.

1.10 In-network caching Challenges


In-network caching also encompasses various challenges, including the network cache model, cache
placement and replacement policies, request-caching routing, and content placement. All these issues
affect the performance of in-network caching [16]. A content placement scheme decides where and
what objects should be cached along the delivery path across the routers. Determining which contents
to cache at a node is essential for good in-network caching performance. In CCN, the deployment of
network-wide caches is expensive, and with a predefined cache size at each node it is not possible to
cache all contents passing through a node. Therefore, to minimize content duplication, an efficient
caching mechanism is essential. Efficient caching policies maximize cache utilization at both the core
and edge networks and minimize content redundancy.

1.11 Challenges in existing CCN content caching


Existing works on in-network caching in CCN mainly focus on reducing cache redundancy to improve
cache hits. Leave Copy Everywhere (LCE) [17] is the default CCN caching scheme: every router stores
all delivered content and replaces contents in least-recently-used order. LCE assumes that all nodes
have large storage space; hence it is a costly and sub-optimal scheme. The ProbCache scheme [13]
caches contents probabilistically; its basic purpose is to manage cache resources efficiently by
reducing cache redundancy. The main objective of [18] is to reduce cache redundancy using an
off-path central router; the proposed scheme improves network performance in terms of duplication
and transmission delay. However, ICN routers require excessive bandwidth for each caching update
(cached, replaced, or dropped) to update the ICN Manager. The works in [9], [19] find that caching
only at a subset of nodes along the content delivery path can improve in-network caching performance
in terms of cache and server hits. Latency-aware caching (LAC) is proposed in [20] to reduce the
average latency to retrieve content from any router.

In all of the above policies, limited attention has been given to reducing cache frequency and hence
saving bandwidth consumption and minimizing content retrieval latency. Keeping popular content in
the cache for longer increases the hit ratio and reduces server access. Getting contents from a cache
rather than the server also saves bandwidth and latency. Although some existing caching policies, such
as [9], [13], [18], [20], consider content popularity and hop reduction while caching content in CCN,
the bandwidth consumption is still not optimal.

1.12 Problem Statement


Existing work focuses on achieving a better hit ratio and ignores minimizing the total content retrieval
latency over all routers, and hence saving bandwidth utilization.

Consider a fog-enabled network having Vn routers with CCN-based caching and limited cache size Ci.
Given a large number of contents in the network, the problem is to place the contents on the routers
while satisfying the constraints of latency and bandwidth utilization, with the objective of improving
1) hit ratio, 2) path stretch, 3) latency, and 4) link load.

1.13 Motivation
As discussed in Section 1.11, existing policies give limited attention to reducing cache operations and
hence to saving bandwidth and minimizing content retrieval latency. Keeping popular content in the
cache for longer increases the hit ratio and reduces server access, yet even policies that consider
content popularity and hop reduction, such as [9], [13], [18], [20], do not achieve optimal bandwidth
consumption.

To this end, we propose an optimal, real-time CCN in-network caching scheme for fog environments.
Through fog computing, caching at edge nodes and uniquely identifying contents with names rather
than IP addresses brings information residing on the cloud closer to the user. Our scheme determines
an optimized set of contents to cache at each node towards the edge, based on content popularity and
the content's distance from its source. We consider the following perspectives of in-network caching
using the idea of fog: (a) placing content near the user, (b) lower bandwidth consumption, (c)
efficiently managing cache resources by reducing cache redundancy, and (d) lower latency in
information dissemination.

The main contributions of this work are as follows:

We propose an optimized caching policy that reduces content retrieval latency by caching content near
the user at the edge.

We design an in-network caching management policy that jointly considers content popularity and hop
reduction to reduce bandwidth consumption.

Our caching policy keeps popular content cached for longer, thereby decreasing cache operations.

To demonstrate the effectiveness of our optimized caching policy, we perform simulations in Icarus
[21], an ICN simulator specifically designed for analyzing caching and routing policies in ICN. We
compare the optimal caching policy against existing ICN caching policies, LCE [14], LCD [14],
CL4M [19], and ProbCache [13], using GARR, a real-world internet topology. We study the impact of
various parameters such as cache size, content popularity model, and stretch on our policy. We find a
significant improvement in latency, cache hit ratio, and stretch compared to the state of the art.

1.14 Thesis Organization


This thesis is organized as follows. Chapter 1 contains the introduction to content centric networks,
the problem statement, and motivations. Chapter 2 consists of a detailed description of related work in
the area of in-network caching in CCNs. In Chapter 3, the system architecture along with the
definitions used in our work is discussed. Chapter 4 presents our proposed system model, Chapter 5
evaluates our proposed system, and Chapter 6 concludes.

CHAPTER 2
RELATED WORK

This chapter presents the literature review of caching in content centric networks.

In-network caching policies in CCN have attracted the attention of many researchers in recent years.
Yusung Kim et al. [14] proposed LCE and LCD. LCE is the default caching scheme in the ICN
infrastructure: every router stores all delivered content and replaces contents in least-recently-used
order. LCE assumes that all nodes have large storage space, making it a costly and sub-optimal
scheme; it is only efficient with sufficient space to meet the demanded hit ratio, and data redundancy is
its main issue. The LCD [14] caching scheme copies content to the direct neighbor of the requesting
node after a cache hit occurs, which minimizes the data redundancy encountered in LCE. The
algorithm aims at keeping contents as close to the user as possible; content must be popular enough to
cause a cache hit before it is evicted from the cache.

To overcome the limitations of LCE, a native caching policy, Cache Less for More, was proposed by
Wei Koong Chai et al. [19]. The scheme caches at only one chosen intermediate node for each request
along the delivery path, using the concept of betweenness centrality: the number of times a specific
node lies on the delivery path over all pairs of nodes in a network topology. Caching at such a node not
only minimizes cache replacement but also increases the cache hit rate by caching where a hit is most
probable to happen.

Ioannis Psaras et al. [13] focus on the distribution of content in router caches using in-network
caching. The ProbCache scheme caches contents probabilistically; its basic purpose is efficient
management of cache resources by reducing cache redundancy. The scheme leaves caching space to
other traffic sharing the same path and prefers keeping larger caches at the edge. However, the content
popularity distribution is not considered, and the approach incurs high computational cost compared to
simple schemes such as random caching.

Jing Ren et al. [22] proposed a distributed caching scheme for ICNs that reduces bandwidth
consumption and limits cache operations by considering the input/output operations of cache storage.
Max-Gain In-Network (MAGIC) is a distributed caching scheme that jointly considers content
popularity and hop reduction along the content delivery path to reduce bandwidth consumption. To
reduce the number of caching operations, the cache penalty is also accounted for when making
placement decisions.

The paper [18] proposed a cooperative caching policy where on-path caching is facilitated by off-path
routers strategically placed by the service provider. While ICN basically supports on-path caching,
off-path caching can reduce duplication of contents and improve overall system performance.
However, the bandwidth utilization of the proposed scheme is very high, because all edge routers have
to send their state information to the ICN manager.

APC [23] supports energy-efficient content distribution. Using available cache space, APC caches
frequently requested contents in the routers; energy consumption is minimized by minimizing hop
count. The scheme is compared with LCE only, lacks theoretical details, and is evaluated only
numerically.

CachinMobile [24] minimizes energy consumption by using D2D communication. However, various
potential issues related to dynamic mobility, bandwidth resource scheduling, and interference
management are ignored.

Most of these works do not consider the effect of user mobility on cache placement policies. The
paper [25] utilizes user mobility and trajectory information for proactive and adaptive caching at base
stations and user terminals. Knowing the future position of the user enables seamless handover and
content download in proactive caching. However, the scheme may face issues when collecting user
data, putting user security at risk. Furthermore, as user terminals grow enormously in number,
collecting data about all users for adaptive caching can be challenging, as is motivating users to use
D2D communication.

The paper [26] proposes an ICN infrastructure that caches at edge nodes through fog computing (as an
off-network cache) by identifying objects by names instead of IP addresses. Adding an ICN off-path
cache combined with fog computing lowers bandwidth utilization. However, the proposed scheme is
only a conceptual infrastructure and does not detail a model.

A latency-aware caching strategy for ICN (greedy caching), which determines the set of contents to
cache at each node in the network, is proposed in [20]. Based on the request rate of a content from
users, the scheme starts caching the most popular contents at the network edge. After caching contents
at the edge, the algorithm recalculates the relative popularity of the contents based on request misses
from downstream, in order to cache contents in the core.

The authors in [27] proposed a "Cache Strategy in Content Centric Networks based on Node's
Importance". The most popular contents are placed at the most important node in the network, and
vice versa. The importance of a node is calculated from its flow rate, defined by the number of users
accessing the node, the request rate of contents at the node, and the distance of the node from the rest
of the nodes in the network. However, the proposed scheme focuses only on the geographical
connectivity of nodes and ignores content distribution in the network.

Contrary to [27], the authors in [28] argue that a node with higher graph connectivity does not
necessarily cache contents optimally; rather, a node that can serve a large number of contents is a
better choice. They propose a content-based centrality metric that takes into account how well a node
is connected to the contents the network delivers. The most frequently accessed contents are placed at
the node with the highest Content-Based Centrality (CBC). The CBC of a node is defined as the ratio
between the sum of the shortest paths between all users and all contents that pass through that node,
and the sum of the shortest paths between all users and all contents. However, the proposed scheme
focuses only on content distribution and ignores the geographical location of nodes.

In all of the above policies, limited attention has been given to reducing cache frequency and hence
saving bandwidth consumption and minimizing content retrieval latency. Keeping popular content in
the cache for longer increases the hit ratio and reduces server access, and getting contents from a cache
rather than the server also saves bandwidth and latency. Although some existing caching policies, such
as [9], [14], [22], [18], consider content popularity and hop reduction while caching content in CCN,
the bandwidth consumption is still not optimal.

To this end, as outlined in Chapter 1, we propose an optimal, real-time CCN in-network caching
scheme for fog environments that determines an optimized set of contents to cache at each node
towards the edge, based on content popularity and the content's distance from its source.

CHAPTER 3
SYSTEM ARCHITECTURE
This chapter discusses the architecture of our proposed system in detail. Each module of the
architecture is described briefly along with the definitions of terms used.

3.1 Major Components


In this work, we realize a fog architecture empowered with the novel CCN principle, as shown in
Figure 2. We model the network as a connected graph G(V, E), consisting of nodes V = {V1, V2, V3,
..., Vn} and links E = {e1, e2, e3, ..., em}. Let F = {f1, f2, f3, ..., fk} be the set of available contents and
S = {S1, S2, S3, ..., Sn} be the set of servers in the network. Initially, all contents are distributed
among the network servers, which are directly connected to routers. The following are the major
components of the proposed CCN-based fog architecture, which Figure 2 depicts.

Figure 2: CCN based FOG architecture

End Nodes: Users

The CCN user is the request originator: users initiate requests and forward them to the edge routers
linked to them. A user sends an Interest packet for the desired data over any available connectivity.
Any party receiving the Interest and having data that matches it may transmit the matching content to
the user.

3.1.1 CCN-Fog Routers


CCN-Fog routers are standard routers with added caching capabilities. Each CCN-Fog router is
provided with extra storage to cache contents passing through it and can therefore serve subsequent
requests for the same content from its local cache. Delivery of the requested content must be ensured
either by a CCN router or by the source server. In our proposed caching method, if the cache of the
CCN-Fog router is full, the Least Recently Used (LRU) policy is incorporated to make room for newly
arriving contents; a minimal LRU content store is sketched below.
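A minimal sketch of such an LRU content store follows; it assumes equal-size content units (the same assumption made later in this chapter) and measures capacity in content units.

```python
from collections import OrderedDict

class LRUContentStore:
    """LRU content store sketch; capacity counted in equal-size content units."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()              # content name -> content

    def get(self, name):
        if name not in self.store:
            return None                         # cache miss
        self.store.move_to_end(name)            # mark as most recently used
        return self.store[name]

    def put(self, name, content):
        if name in self.store:
            self.store.move_to_end(name)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)      # evict the least recently used
        self.store[name] = content
```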

3.1.2 End Nodes: Source


With respect to CCN content, there are two types of sources: CCN routers and CCN servers. CCN
servers are the content originators, or permanent sources of contents, whereas CCN routers store
contents temporarily in their local caches.

3.2 CCN Node Architecture


The Pending Interest Table (PIT), Forwarding Information Base (FIB), and Content Store (CS) are
three data structures included in each CCN-Fog node [29], as shown in Figure 3.

3.2.1 PIT
Pending and satisfied Interests are contained in the PIT. The entries in the PIT contain the interface of
incoming contents, the content name, a timer to manage PIT entries, and a NONCE value to identify
individual Interest packets. The PIT records the Interest name and the incoming face locally when an
Interest packet is forwarded out. When the Data message is returned, the PIT looks up the Data
message name in the table; if a matching Interest name exists, the CCN node sends the data packet
through the connected face(s) of the matched entries. The PIT also avoids forwarding multiple requests
for the same content: when an Interest message for the same content is received from multiple
incoming faces, only the first is forwarded, and the others are added to the PIT to wait for the Data
message. As soon as the CCN node receives the Data message in return for the Interest, all other
attached Interest messages can be satisfied [30].

Figure 3: CCN based FOG node architecture

3.2.2 FIB
Routing contents to the next node towards the content source is the responsibility of the FIB, as is
maintaining and managing outgoing interfaces and name prefixes. The CCN FIB is similar to an IP
router's FIB, but the IP FIB is filled by routing messages while the CCN FIB is filled by content
advertisements. A content provider sends out content advertisements in the network to publish
available contents. On receiving such an advertisement, every CCN router adds the content name or
prefix, together with the incoming interface, to its FIB. By default, CCN supports multicasting, i.e., an
entry of the CCN FIB can have multiple outgoing faces. The forwarding strategy is provided through
the forwarding algorithm.

3.2.3 Content Store (CS)


The CS provides cache space to store contents available on the current node and contents received
from other nodes, based on the node's cache policy. A CCN node caches content in the CS, providing
in-network caching to minimize network bandwidth, latency demands, and server load [23].

For simplicity, we assume that cached content units have the same size [6], [14]. To obtain the
popularity of requested contents, we also assume that each intermediate router counts the frequency of
the requested content [14]. Each caching node can obtain the hop count, i.e., the number of hops from
it to the original server, with the help of the TTL associated with the data packet if the Internet
Protocol (IP) stack is used. For the rest of this thesis we use the words "router" and "cache node"
interchangeably. A simplified sketch of the node's forwarding pipeline follows.
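The following simplified sketch shows how the three data structures cooperate when a node handles Interest and Data messages; the node and face objects and their method names are illustrative assumptions, not a real CCN API.

```python
# Simplified CS -> PIT -> FIB pipeline; node/face objects are illustrative.

def on_interest(node, interest, in_face):
    # 1. Content Store: serve from the local cache if possible.
    data = node.cs.get(interest.name)
    if data is not None:
        in_face.send(data)
        return
    # 2. PIT: aggregate duplicate requests for content already in flight.
    if interest.name in node.pit:
        node.pit[interest.name].add(in_face)    # wait for the same Data message
        return
    node.pit[interest.name] = {in_face}
    # 3. FIB: forward towards the source via longest-prefix match.
    out_face = node.fib_lookup(interest.name)
    if out_face is not None:
        out_face.send(interest)

def on_data(node, data):
    # Satisfy every face recorded in the PIT, then cache per the node's policy.
    for face in node.pit.pop(data.name, set()):
        face.send(data)
    node.cs.put(data.name, data)
```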

CHAPTER 4
SYSTEM MODEL

In this chapter, we discuss in detail the proposed heuristic Optimized Content Caching scheme. The
proposed scheme considers (a) the frequency of requested contents and (b) the distance of requested
contents from requester to source, in order to cache contents at the network edge.

4.1 Optimization Model for in-network caching


The main objective of this work is to achieve minimal latency in getting the requested contents. We
tackle the question of how a router with restricted caching capacity caches contents in the network so
that the latency to get contents is minimized and bandwidth savings are maximized. Latency is the
end-to-end packet transmission time. For the optimal decision, each router should cache the contents
that ensure minimum latency. Table 1 shows the notation and parameters used.

Table 1: Notations and their meanings

Parameter     Meaning
V             Set of routers in the network
F             Set of content items
λ(f, v)       Request rate of f ∈ F at v ∈ V
Ci(v)         Cache size of node v ∈ V
Size(f)       Size of content f ∈ F
Src(f)        The original source of f ∈ F
D(u, v)       Distance between requesting node and source node in hop count
Rh            Number of nodes the request traversed from requester to source
Ch            Number of hops the content traversed during content delivery
T(u, v)       Time consumed by node u to download contents from node v

Decision variables
X(u, f)       Takes value 1 if node u caches a copy of content f, and 0 otherwise
Y(f, u, v)    Takes value 1 if node v downloads a copy of content f from node u, and 0 otherwise

The optimal content caching problem can be formulated as follows:

min Σ_{v∈V} Σ_{f∈F} λ(f, v) × Σ_{u∈V} T(v, u) × Y(f, u, v)    (1)

s.t.

Σ_{f∈F} X(u, f) × Size(f) ≤ Ci(u),  ∀u ∈ V    (2)

Y(f, u, v) ≤ X(u, f),  ∀u, v, f    (3)

Σ_{u∈V} Y(f, u, v) = 1,  ∀v, f    (4)

X(u, f) ∈ {0, 1},  Y(f, u, v) ∈ {0, 1},  ∀f, u, v

With this caching policy, formula (1) is our objective function of minimizing the total latency for all
requests; formula (2) enforces the finite cache size at each router; formula (3) states that an
intermediate node u can deliver a content f only if it has stored that content in its cache; and formula
(4) requires that each request be served by exactly one node.
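For illustration, formulas (1)-(4) can be written directly against an off-the-shelf ILP solver. The sketch below uses PuLP on a tiny assumed instance (two routers, two contents, uniform rates, delays, and sizes); it demonstrates the formulation only and is not the solution method used in our evaluation.

```python
import pulp

# Tiny assumed instance: 2 routers, 2 contents, uniform rates/delays/sizes.
V = ["v1", "v2"]
F = ["f1", "f2"]
lam = {(f, v): 1.0 for f in F for v in V}    # request rate of f at v
T = {(v, u): 1.0 for v in V for u in V}      # time for v to download from u
size = {f: 1 for f in F}                     # unit-size contents
cap = {u: 1 for u in V}                      # cache capacity Ci

X = pulp.LpVariable.dicts("X", [(u, f) for u in V for f in F], cat="Binary")
Y = pulp.LpVariable.dicts("Y", [(f, u, v) for f in F for u in V for v in V],
                          cat="Binary")      # v downloads f from u

prob = pulp.LpProblem("optimal_caching", pulp.LpMinimize)
# (1) minimize total latency over all requests
prob += pulp.lpSum(lam[f, v] * T[v, u] * Y[f, u, v]
                   for f in F for u in V for v in V)
# (2) finite cache size at each router
for u in V:
    prob += pulp.lpSum(X[u, f] * size[f] for f in F) <= cap[u]
# (3) node u can deliver f only if u has cached f
for f in F:
    for u in V:
        for v in V:
            prob += Y[f, u, v] <= X[u, f]
# (4) each request is served by exactly one node
for f in F:
    for v in V:
        prob += pulp.lpSum(Y[f, u, v] for u in V) == 1

prob.solve()
```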

To achieve our objective of minimizing latency for all requests, we must decide whether or not to
cache the requested contents, subject to the large number of contents in the network and the finite
cache capacity of each node.

4.2 On-path Caching


With a limited cache size, each router, while serving requests, should intelligently decide whether or
not to cache the requested content, and what the best possible location to cache it is. We propose a
heuristic mechanism for optimal content placement to minimize latency. Each CCN router stores
frequently requested contents to increase the hit probability; further, to minimize transmission delay,
each router caches contents coming from distant sources. Based on this mechanism, each on-path
router computes the Cache Gain Probability (CGP) of a content f using the following formula, and
compares it with the scores of the existing cached contents in case the cache is full:

CGP(f) = (λ(f, v) / λ(f, v)max) × (D(v, u) / D(v, u)max)    (5)

where λ(f, v) is the request rate of content f at node v, and D(v, u) is the distance consumed by node v
to get the content from node u, i.e., the number of hops traversed by the request. D(v, u) can be
obtained with the help of the TTL associated with the data packet if the Internet Protocol (IP) stack is
used. λ(f, v) is normalized with the maximum request rate λ(f, v)max of cached contents; λ(f, v)max is
the maximum request rate known at the router at any instant of time, and D(v, u)max is the maximum
hop count in the network. If the cached content is served by the node adjacent to the requesting node,
then D(v, u) = 1. If a new (not cached) content is served, then λ(f, v) = 1 and D(v, u) is the normalized
hop count for this node. If a content is cached at a CCN router, then λ(f, v) increases linearly with
every new request for the same content; if a content that is not cached at the CCN router is served,
then λ(f, v) = 1.

To count the number of hops a request message traverses, the request message records the Rh value,
and the header of the data message includes both the Rh and Ch values. Every router increases the Rh
value of an Interest packet by one. The content source attaches the Rh value it sees on the Interest
message to the content message, and every router increases the Ch value of the content message by
one. Therefore, during the delivery of the data message back to the client, the Rh value in the content
message remains constant and represents the path length for this specific content, while the Ch value
denotes the number of hops the content message has travelled so far. Hence Rh − Ch decreases as the
content gets closer to the client.

Furthermore, every time a cache hit occurs we update the CGP of f, so that contents that are frequently
accessed have high CGP. The following equation is used to update the CGP, where Pinit ∈ (0, 1) is an
initialization constant:

CGP(f) = CGP(f) + (1 − CGP(f)) × Pinit    (6)

Similarly, the CGP of the remaining contents that are not accessed must be aged, with their values
being reduced in the process. The aging equation is shown in (7), where γ ∈ (0, 1) is the aging constant
and k is the number of time units elapsed since the last time the metric was aged:

CGP(j) = CGP(j) × γ^k    (7)
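The three CGP rules translate into a few lines of Python, as sketched below; the multiplicative form of (5) follows the reconstruction above, and the default values of Pinit and γ are illustrative assumptions.

```python
def cgp(rate, rate_max, hops, hops_max):
    """Formula (5): normalized request rate times normalized hop distance."""
    return (rate / rate_max) * (hops / hops_max)

def reinforce(cgp_f, p_init=0.3):
    """Formula (6): push the CGP of an accessed content towards 1."""
    return cgp_f + (1 - cgp_f) * p_init

def age(cgp_j, gamma=0.9, k=1):
    """Formula (7): decay the CGP of contents not accessed for k time units."""
    return cgp_j * gamma ** k
```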

We now present a heuristic algorithm for on-path caching, which runs on each CCN router
independently.

Algorithm 1: On-path Content Caching on a CCN router

Input: cache size Ci

1:  for each incoming request for content f do
2:      update the maximum request rate λ(f, v)max
3:      update the maximum possible hop count D(v, u)max
4:      if cache hit on this router then
5:          λ(f, v) ← λ(f, v) + 1
6:      else
7:          λ(f, v) ← 1
8:      end if
9:      update the CGP of the accessed content:
10:         CGP(f) ← CGP(f) + (1 − CGP(f)) × Pinit
11:     age the CGP of the rest of the contents:
12:         CGP(j) ← CGP(j) × γ^k
13:     if f is not cached and Ci > 0 then
14:         cache content f
15:     else if f is not cached and CGP(f) ≥ min{CGP(j), ∀j in cache} then
16:         evict the content with minimum CGP and cache f
17:     end if
18: end for
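A runnable sketch of Algorithm 1, reusing the CGP helpers sketched in the previous section, is given below; the router state (rate table, CGP table, cache set) and the eviction of the minimum-CGP content are assumptions consistent with the text.

```python
class Router:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = set()       # cached content names
        self.rate = {}           # lambda(f, v) per content
        self.cgp_table = {}      # CGP per content
        self.hops_max = 1        # D(v, u)_max seen so far

def on_request(router, f, hops, p_init=0.3, gamma=0.9, k=1):
    hit = f in router.cache                                      # lines 4-8
    router.rate[f] = router.rate.get(f, 0) + 1 if hit else 1
    router.hops_max = max(router.hops_max, hops)                 # line 3
    rate_max = max(router.rate.values())                         # line 2
    base = cgp(router.rate[f], rate_max, hops, router.hops_max)  # formula (5)
    router.cgp_table[f] = reinforce(base, p_init)                # lines 9-10
    for j in list(router.cgp_table):                             # lines 11-12
        if j != f:
            router.cgp_table[j] = age(router.cgp_table[j], gamma, k)
    if hit:
        return
    if len(router.cache) < router.capacity:                      # lines 13-14
        router.cache.add(f)
    else:                                                        # lines 15-16
        victim = min(router.cache, key=router.cgp_table.get)
        if router.cgp_table[f] >= router.cgp_table[victim]:
            router.cache.remove(victim)
            router.cache.add(f)
```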

CHAPTER 5
PERFORMANCE EVALUATION

5.1 Experimental Setup


This section describes our experimental setup and simulation results. The performance of our proposed
optimized content caching policy is compared against state-of-the-art caching policies, namely Cache
Less for More (CL4M) [19], ProbCache [13], Leave Copy Everywhere (LCE) [14], and Leave Copy
Down (LCD) [14]. We explain these caching strategies below.

LCE: In this content caching strategy, contents are cached at each router along the path as they are
downloaded, and are replaced in least-recently-used order [14].

LCD: The LCD caching scheme copies content to the direct neighbor of the requesting node after a
cache hit occurs. The algorithm aims at keeping contents as close to the user as possible [14].

ProbCache: The ProbCache scheme caches contents probabilistically. Its basic purpose is efficient
management of cache resources by reducing cache redundancy. The scheme leaves caching space to
other traffic sharing the same path and prefers keeping larger caches at the edge [13].

CL4M: To make cache decisions, this policy leverages the number of shortest paths traversing a cache
(i.e., the concept of betweenness centrality). To maximize the probability of a cache hit, this scheme
caches content at the node with the greatest betweenness centrality [19].

To route requests from requester to source, we assume the network uses Dijkstra's shortest weighted
path routing, where link delays correspond to weights. If the content is found at any intermediate
router en route to the source, it is served from that router's cache. To demonstrate the effectiveness of
our optimized caching policy, we perform simulations in Icarus [21], an ICN simulator specifically
designed for analyzing caching and routing policies in ICN. Scenario generation, experiment
orchestration, experiment execution, and results collection are the four basic building blocks of Icarus.
In the simulation, each content request is treated as an event; whenever an event occurs, the
corresponding timestamp, receiver, and source of the event are stored. The result collection block of
the simulator gathers the simulation results. Latency is calculated in Icarus as the sum of the delays on
each link traversed during content download.

Extensive experiments are performed on various real-world topologies: WIDE, the Japanese academic
network, consisting of 30 nodes and 33 edges; GEANT, the European academic network, consisting of
40 nodes and 60 edges; and GARR, the Italian national computer network for universities, with 61
nodes and 89 edges. We use the GEANT topology in this thesis, as the choice of topology had an
insignificant effect on the results. Table 2 describes our simulation setup.

Table 2: Simulation setup

No. of warm-up requests     40,000
No. of measured requests    40,000
Popularity model (α)        0.6, 0.8, 1.0
Content universe            10,000
Cache size                  4%-20% of content universe
Request rate                1.0 request/sec

In our simulation, the caches are initialized with the first 10,000 contents, and subsequent requests are
used for performance evaluation. A Zipfian distribution with skewness α ∈ [0, 1] is assumed as the
probability of requesting a content. Initially, contents are stored uniformly distributed across the
content sources (servers). Routers with degree one are considered users. Table 2 summarizes the
simulation parameters considered for result generation: the content universe is F = 10,000, the cache
size of each node varies from 4% to 20% of the total content universe, and the content popularity
skewness (α) varies from 0.6 to 1.0, where α = 0.6 refers to the low popularity model, α = 0.8 to the
normal popularity model, and α = 1.0 to the high popularity model. Results in the figures are averaged
over 10 experiments. A sketch of this request workload is given below.
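For illustration, the request workload described above can be generated with a simple Zipf sampler, as sketched below; this is an assumption-laden stand-in, not Icarus's internal workload generator.

```python
import random

def zipf_weights(n, alpha):
    """Zipf popularity: probability of rank i proportional to 1 / i**alpha."""
    raw = [1 / (i ** alpha) for i in range(1, n + 1)]
    total = sum(raw)
    return [w / total for w in raw]

F = 10_000                                    # content universe (Table 2)
alpha = 0.8                                   # normal popularity model
weights = zipf_weights(F, alpha)
requests = random.choices(range(F), weights=weights, k=40_000)  # measured reqs
```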

The primary purposes of caching contents in the network are to: (1) lower content retrieval latency,
since cached contents near the users can be retrieved faster than from the original server; (2) reduce
traffic and congestion, because fewer links are traversed in the network; and (3) reduce load on the
server, as each cache hit means one less request served by the server.

5.2 Performance parameters


To demonstrate the performance of our optimized content caching strategy, we use several metrics:
latency, cache hit ratio, path stretch, and link load. These metrics are calculated as follows.

5.2.1 Hit Ratio


When analyzing an individual router, if a content is found in the node's cache, we report a hit;
otherwise, a miss is reported. When a cache miss occurs, the content is retrieved from the server. The
cache hit ratio, i.e., the portion of requested content served from the node cache, measures the
efficiency of routers, as given in (8) [14]:

Hit ratio = cache hits / (cache hits + cache misses)    (8)

5.2.2 Latency
Latency is the number of time units (in milliseconds) a content takes to reach the user, i.e., the delay
taken to deliver a content, and is calculated as follows [21]:

Latency = sum of delays on request hops + sum of delays on content hops    (9)

5.2.3 Path Stretch


Stretch is defined as the ratio between the actual path length travelled to retrieve a content and the
shortest path length. The actual path length is the sum of the request and content path lengths in hop
count, while the shortest path length is the sum of the shortest request and content path lengths in hop
count. Path stretch is calculated using the following formula [14]:

Path stretch = (request path length + content path length) / (shortest request path length + shortest content path length)    (10)

5.2.4 Link load


Link load is the number of bytes traversing a link per unit time to retrieve the requested contents, and
can be calculated using the equations below [21]:

Link load = bytes traversing the link / Duration    (11)

where

Duration = content retrieval time − content request time    (12)
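The four metrics then reduce to simple arithmetic over the collected simulation events, as the sketch below shows; the argument names are illustrative.

```python
def hit_ratio(cache_hits, cache_misses):                       # formula (8)
    return cache_hits / (cache_hits + cache_misses)

def latency(request_hop_delays, content_hop_delays):          # formula (9)
    return sum(request_hop_delays) + sum(content_hop_delays)

def path_stretch(req_len, cont_len, req_sp_len, cont_sp_len):  # formula (10)
    return (req_len + cont_len) / (req_sp_len + cont_sp_len)

def link_load(bytes_on_link, retrieval_time, request_time):    # formulas (11)-(12)
    return bytes_on_link / (retrieval_time - request_time)
```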

We calculate the performance improvement of strategy X over strategy Y as the difference between
the two strategies divided by the performance of strategy Y, expressed as a percentage. Our
experimental results show that the optimized caching strategy outperforms state-of-the-art strategies
for a wide range of simulation parameters.

5.3 Discussion on Latency performance


This section describes the latency performance improvement of our proposed optimized caching
strategy for a variety of settings.

5.3.1 Performance for GEANT network topology

We perform simulations to demonstrate the latency results of the different caching strategies at popularity rates ranging from 0.6 to 1.0. From the derived results we observe that the optimized content cache policy is better than the state-of-the-art policies by 4%-18% for different cache sizes (ranging from 4% to 20% of the content universe). Better cache utilization is the key reason behind the superior performance of the optimized cache strategy. Figure 4 shows the superior latency performance of the proposed scheme at different popularity models and cache sizes.

Figure 4: Latency performance of the optimized cache scheme using different popularity models

5.3.2 Latency performance for GEANT network topology using different content universes

The performance improvement of the proposed scheme on GEANT when using different content universes is shown in Figure 5. Results are generated with a fixed cache size of about 5% of the content universe, while the content universe is varied from 10,000 to 50,000. Our simulation results show that the performance of the optimized cache is superior to the rest of the policies for the different content sizes as well. The results also show that latency decreases as the content universe grows: since the cache-to-content ratio is kept constant, a larger content universe implies a larger absolute cache size, and cache utilization increases because popular contents are readily available in the cache. The primary purpose of this simulation is to show the scalability of our algorithm; these results show that the optimized cache can work with a large content universe, which demonstrates the efficiency of our technique.

5.4 Discussion on Cache hit rate


Hit rate is an important performance parameter for CCN caching strategies. A cache hit occurs when a request is fulfilled by a cache router, while a miss occurs when the request must be served by the original content source. A high hit rate indicates that more requests are being served by intermediate cache routers, which reduces the load on the original source. We compare the cache hit ratio of the optimized cache strategy with state-of-the-art strategies in environments ranging from low to high popularity models (α = 0.6, α = 0.8, and α = 1.0), and we run the simulations over different cache sizes. Figure 6 shows that the cache hit ratio of the optimized cache at the different popularity models and cache sizes is superior to the state-of-the-art techniques; the optimized cache strategy improves the hit rate by 9%-35%. The primary reason for the improvement is the intelligent content placement adopted by the optimized cache, where every node caches content based on the number of requests it receives from downstream nodes, and contents from distinct routers are cached at the edge. We observe that the hit rate increases with the cache size ratio and the content popularity skewness: the bigger the cache size ratio, the greater the cache hit ratio, though our proposed strategy outperforms the rest of the strategies at small cache sizes as well. Similarly, a greater skewness value indicates that more popular contents are requested, resulting in a greater hit rate.

Figure 5: Latency performance on the GEANT network using different content universes

Figure 6: Hit rate performance of the optimized cache at different popularity models (α = 0.6, 0.8, 1.0). [Each panel plots hit ratio against cache size (0.04-0.2) for Probcache, CL4M, LCE, LCD, and Opt-cache.]

5.5 Discussion on path stretch performance

While the hit rate indicates the percentage of requests served within the network, path stretch captures the ratio between the actual path a content travels during retrieval and the shortest path length. Figure 7 shows the path stretch at different popularity models and cache size ratios. We observe that, similar to the hit ratio, the hop count of the optimized cache decreases across the different popularity models and cache sizes, and its performance is superior to the state-of-the-art techniques: the optimized cache strategy decreases the average hop count by 18%-51%. As with the hit rate, the improvement stems from the intelligent content placement of the optimized cache, where every node caches content based on the number of requests it receives from downstream nodes, and contents from distinct routers are cached at the edge.

5.6 Link Load Performance Evaluation


Link load is the number of bytes traversed over a link to retrieve content from the source. It is an important performance parameter for measuring the bandwidth utilization of content delivery in the network: lower bandwidth utilization indicates a lower traffic rate, which in turn minimizes content retrieval latency. A good caching algorithm traverses a minimum number of bytes to retrieve contents.

We evaluate the link load in two scenarios:

(1) Low popularity model (α = 0.6) and high popularity model (α = 1.0) with variable cache size (4%-20%). We find a significant improvement in link load for the proposed scheme; Figures 8 and 9 show this improvement.

(2) Different popularity models with a fixed cache size. In this scenario the performance of the proposed scheme is again superior to the state-of-the-art schemes, as shown in Figure 10.

The basic reason behind the improved performance is the better placement of popular contents at the edge nodes: popular contents are intelligently cached at edge routers near the users.

Figure 7: Path stretch performance of the optimized cache at different cache sizes and popularity models

Figures 8 and 9: Link load performance of the proposed scheme at the low (α = 0.6) and high (α = 1.0) popularity models. [Each panel plots link load (bytes/ms) against cache size (0.04-0.2) for Probcache, CL4M, LCE, LCD, and Opt-cache.]

Figure 10: Link load performance of the proposed scheme at different popularity models with fixed cache size. [Each panel plots link load (bytes/ms) against popularity model (0.6, 0.8, 1.0) at cache sizes of 5% and 10%, for Probcache, CL4M, LCE, LCD, and Opt-cache.]

CHAPTER 6
CONCLUSION AND FUTURE WORK
In this chapter we summarize the main contributions of the thesis and discuss future work that can be done as an extension of it.

6.1 Conclusions


In this work, we have analyzed, designed, and evaluated a content placement scheme for CCN with fog computing. Content caching involves conflicting objectives: increasing the cache size allows more popular contents to be cached but raises the deployment cost, while reducing the cache size lowers the deployment cost but allows fewer popular contents to be cached. With a limited cache size and a large number of contents in the network, intelligent content caching minimizes bandwidth utilization and content retrieval latency. Achieving a trade-off between these conflicting objectives is the major challenge addressed in our work. Our main contribution is a mathematical model that does not depend on any particular simulation framework. The proposed scheme determines an optimized set of contents to be cached at each node towards the edge, based on content popularity and the content's distance from its source.
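Purely as an illustration of this idea, and not the exact mathematical model of the thesis, a popularity-and-distance score for selecting a node's cached set could look like the following sketch, where all names and the simple multiplicative weighting are our own assumptions:

```python
def cache_score(popularity: float, hops_from_source: int) -> float:
    # Illustrative score only: favour contents that are popular and whose
    # source is far away, since caching those saves the most hops per hit.
    return popularity * hops_from_source

def contents_to_cache(catalogue: dict, cache_slots: int) -> set:
    # catalogue maps content-id -> (popularity, hops_from_source);
    # pick the top-scoring contents for this node's cache.
    ranked = sorted(catalogue, key=lambda c: cache_score(*catalogue[c]),
                    reverse=True)
    return set(ranked[:cache_slots])

# Toy example: 3 cache slots over a 5-content catalogue.
catalogue = {1: (0.40, 2), 2: (0.25, 5), 3: (0.15, 8), 4: (0.12, 3), 5: (0.08, 9)}
print(contents_to_cache(catalogue, 3))  # {1, 2, 3} under this illustrative score
```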

The simulation study shows that our proposed caching scheme has the potential to enhance system performance compared to the existing state-of-the-art. We verify the proposed scheme on a real-world network topology, GEANT, containing 40 nodes and 61 edges. The results show that our proposed scheme improves latency by 4%-18%; similarly, its hit ratio, stretch, and link load performance is superior to the other schemes. The proposed cache strategy improves the hit rate by 9%-35% and decreases the average hop count by 18%-51%. These results show that the proposed content placement technique intelligently caches contents on edge nodes.

6.2 Future Work


As part of future work, we plan to implement the proposed optimized scheme on different topologies.

We plan to apply our proposed scheme to a content-based centrality, where instead of caching content at the edge, the nodes with a high flow rate cache the popular contents.

We also plan to extend our scheme to dynamic topologies and study the impact of mobility on the proposed scheme.

Our work can also be extended by using a genetic algorithm.

CHAPTER 7
REFERENCES

[1] D. R. Cheriton, M. Gritter, "TRIAD: A new next-generation Internet architecture," 2000.

[2] T. Koponen, M. Chawla, B. Chun, A. Ermolinskiy, K. J. Kim, "A data-oriented (and beyond) network architecture," 2007.

[3] FP7 PURSUIT project. [Online]. Available: http://www.fp7-pursuit.eu/PursuitWeb/

[4] FP7 PSIRP project. [Online]. Available: http://www.psirp.org/

[5] FP7 4WARD project. [Online]. Available: http://www.4ward-project.eu/

[6] FP7 SAIL project. [Online]. Available: http://www.sail-project.eu/

[7] V. Jacobson, D. K. Smetters, J. D. Thornton, "Networking named content," in Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies, 2009.

[8] L. Zhang, A. Afanasyev, J. Burke, V. Jacobson, L. Wang, B. Zhang, "Named data networking," SIGCOMM Computer Communication Review 44(3):66-73, 2014.

[9] X. Yuemei, L. Yang, "A novel cache size optimization scheme based on manifold learning in content centric networking," 2013.

[10] G. Carofiglio, G. Morabito, "From content delivery today to information centric networking," 2013.

[11] V. Athanasios, L. Zhe, S. Gwenda, "Information centric networking research challenges and opportunities," 2015.

[12] D. D. Ahir, B. Prashant, "Content centric networking and its applications," Volume 3, No. 12, December 2012.

[13] I. Psaras, W. K. Chai, G. Pavlou, "Probabilistic in-network caching for information-centric networks," 2012.

[14] C. Bernardini, T. Silverston, O. Festor, "A comparison of caching strategies for content centric networking," 2016.

[15] I. Abdullah, S. Arif, S. Hassan, "Survey on caching approaches in information centric networking," 2015.

[16] T. H. Luan, L. Gao, Z. Li, "Fog computing: Focusing on mobile users at the edge," 2015.

[17] Y. Kim, I. Yeom, "Performance analysis of in-network caching for content centric networking," Vol. 57, 2013.

[18] H. K. Rath, B. Panigrahi, A. Simha, "On cooperative on-path and off-path caching policy for information centric networks," IEEE 30th International Conference on Advanced Information Networking and Applications, 2016.

[19] W. K. Chai, D. He, I. Psaras, G. Pavlou, "Cache 'less for more' in information-centric networks," 2013.

[20] B. Banerjee, A. Seetharam, C. Tellambura, "Greedy caching: A latency-aware caching strategy for information-centric networks," Department of Electrical and Computer Engineering, University of Alberta, Canada, 2016.

[21] L. Saino, I. Psaras, G. Pavlou, "Icarus: A caching simulator for information centric networking (ICN)," 2014.

[22] J. Ren, W. Qi, C. Westphal, J. Wang, K. Lu, S. Liu, S. Wang, "MAGIC: A distributed max-gain in-network caching strategy in information-centric networks," IEEE INFOCOM Workshop NOM, 2014.

[23] J. Li, B. Liu, H. Wu, "Energy-efficient in-network caching for content-centric networking," IEEE Communications Letters, Vol. 17, No. 4, April 2013.

[24] S. Wing, X. Huang, Y. Lui, "CachinMobile: An energy efficient user caching scheme for fog computing," 2016.

[25] R. Wang, X. Peng, "Mobility-aware caching for content centric wireless networks: Modelling and methodology," IEEE Communications Magazine, 2016.

[26] I. Abdullah, S. Arif, S. Hassan, "Ubiquitous shift with information centric network caching using fog computing," 2015.

[27] Y. He, D. Y. Zhu, "A cache strategy in content centric networks based on node's importance," Information Technology Journal 13(3):588-592, 2014.

[28] J. A. Khan, Y. Ahmed, "Content-based centrality metric for collaborative caching in information-centric fogs," May 2017.

[29] S. H. Ahmed, "Content-Centric Networks," Springer Briefs in Electrical and Computer Engineering, 2016.

[30] B. Mathieu, P. Truong, J. F. Peltier, W. You, "Media Networks: Architectures, Applications, and Standards," Chapter: "Information-Centric Networking: Current Research Activities and Challenges," 2016.
