Lab Assignment 1
ABBOTTABAD CAMPUS
SECTION: BCS-1B
PREPARED BY:
FAHAD HASSAN
FA22-BCS-078
SUBMITTED TO:
MAAM FAIZA QAZI
LAB ASSIGNMENT NO. 1
By
Fahad Hassan
CIIT/FA22-BCS-078/ATD
TASK 1
Contents

CHAPTER 1
INTRODUCTION
1.1 Information Centric Networks (ICNs)
1.2 Content Centric Networks (CCN)
1.3 Difference between TCP/IP and CCN Communication Model
1.4 CCNx Protocol
1.4.1 CCNx Definitions
1.5 CCN Routing
1.6 CCN Transport
1.7 CCN Security
1.8 CCN Caching
1.9 Caching in Information Centric Fog-Computing
1.10 In-network Caching Challenges
1.11 Challenges in Existing CCN Content Caching
1.12 Problem Statement
1.13 Motivation
1.14 Thesis Organization
CHAPTER 2
RELATED WORK
CHAPTER 3
SYSTEM ARCHITECTURE
3.1 Major Components
3.1.1 CCN-Fog Routers
3.1.2 End Nodes: Source
3.2 CCN Node Architecture
3.2.1 PIT
3.2.2 FIB
3.2.3 Content Store (CS)
CHAPTER 4
SYSTEM MODEL
4.1 Optimization Model for In-network Caching
4.2 On-path Caching
CHAPTER 5
PERFORMANCE EVALUATION
5.1 Experimental Setup
5.2 Performance Parameters
5.2.1 Hit Ratio
5.2.2 Latency
5.2.3 Path Stretch
5.2.4 Link Load
5.3 Discussion on Latency Performance
5.3.2 Latency Performance for GEANT Network Topology Using Different Content Universes
5.4 Discussion on Cache Hit Rate
5.5 Discussion on Path Stretch Performance
5.6 Link Load Performance Evaluation
CHAPTER 6
CONCLUSION AND FUTURE WORK
6.1 Conclusions Discussion
6.2 Future Work
CHAPTER 7
REFERENCES
TASK 2
TASK 3
a.

$$\int_a^b \left[ \frac{x + x^3}{ax + 4} + \frac{4x + 2x^3}{3x - x^2} \right] dx = f(b) - f(a)$$

b.

$$\int_{-\infty}^{\infty} \sqrt[n]{\alpha x + \frac{\gamma}{\beta} + 1}\; dx$$
TASK 5
A1 | C1 | D1 | E1 | F1
B1 | 0  | 1  | 3  | 4
B1 | 4  | 6  | 7  | 8
   | 3  | 7  | 8  | 9
G1 | H1 | I1 | J1 | K1
L1 | M1 | N1 | O1 | P1

TABLE:

A1 | C1 | D1 | E1 | F1
B1 | 0  | 1  | 2  | 3
B1 | 4  | 5  | 6  | 5
   | 7  | 8  | 9  | 0
G1 | H1 | I1 | J1 | K1
L1 | M1 | N1 | O1 | P1
TASK 6
1.6. Create and customize graphs of your own choice.
WEEKLY REPORT
[Bar chart: weekly values for ADNAN, AWAIS, FAHAD, and KAIF; vertical axis from 0 to 6]
CHAPTER 1
INTRODUCTION
This chapter introduces the key concepts and terms discussed in the thesis. It gives a brief overview
of Content Centric Networks (CCNs) and then discusses the different terms used in CCNs. Finally,
the problem addressed in our work is presented. The thesis outline and organization are also part of
this chapter.
CCN is simpler, more durable, and more scalable than TCP/IP. The Internet provides communication
between exactly two hosts: one asking for a resource and the other providing it. Both hosts are
identified by IP addresses carried in IP packets, one for the source and one for the destination. In the
50 years of packet networking, computing and storage have become cheap and ubiquitous commodities.
Internet connectivity and cheap storage enable access to an astonishing amount of data; in 2008 alone,
nearly 500 exabytes of data were created [1]. People value the Internet for what it contains rather than
for where the content lies; hence, named data is a much better abstraction than named hosts. CCN is a
novel infrastructure that provides content delivery as a basic network characteristic [10].
Storage management and request routing are tightly coupled in the CCN transport protocol, providing
more efficient use of resources than traditional Content Delivery Network (CDN) infrastructures.
CCN is a communication infrastructure built on named data: it aims at switching the address-based
Internet infrastructure to a named-content-based one [11]. Content names, instead of network addresses,
are used to carry the information. The contents can reside in any caching node in the network, and the
requested data can be delivered by any caching node; therefore, data is not necessarily tied to the
content publisher.
The CCN architecture differs from traditional host-based communication frameworks in many aspects.
Network addresses are replaced with content names in CCN. To reduce bandwidth utilization, the
concept of in-network caching is introduced in CCN, and nodes are equipped with caching capabilities.
Therefore, depending on the caching algorithm, the requested content can be delivered by any CCN
caching node rather than by the original source.
The usage pattern of the Internet has become content-oriented, while today the only way to retrieve
content is end-to-end communication. Consumers are interested only in contents, not in their location.
CCN is a networking architecture based on the principle that a communication framework should
allow a user to concentrate on data retrieval rather than on the data's physical location. To improve
delivery speed and decrease content latency, CCN enables in-network caching. Configuration of a
CCN node is simple, and security is built into the network at the data level. Compared to the TCP/IP
communication framework, CCN has the following distinct features.
1.4.1.1 Message
A message is the CCNx packet. The term message is used to avoid confusion with the lower-layer
packet that may be carrying the CCNx message; a single lower-layer packet (e.g., a UDP packet) may
contain more than one CCNx message. CCNx message fields do not have fixed-length values. The data
formats of CCNx are defined by XML schemas and encoded with explicitly identified field boundaries.
The CCNx protocol is based on Interest and Data packets, as shown in Figure 1.1. An Interest message
requests contents by name, and a Data packet supplies the data corresponding to an Interest packet.
CCNx is a receiver-oriented communication protocol.
A user sends an Interest packet for the desired data over any available connectivity. Any party receiving
the Interest and having data that matches it may transmit the matching content to the user. Data can
only be transmitted in response to an Interest that matches that data.
Interest messages can be multicast or broadcast in order to reach multiple potential sources of data with
minimal bandwidth cost. In response to a single received Interest message, at most one Content message
is transmitted. This one-to-one matching between Interest and Data messages avoids consuming
bandwidth to send data anywhere it is not wanted. Figure 1.1 describes Interest and Data packets in ICN.
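As an illustration of this exchange, the following is a minimal Python sketch; the types and the matching rule here are illustrative assumptions, not the CCNx wire format:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Interest:
        name: str            # hierarchical content name, e.g. "/videos/clip1"

    @dataclass
    class Content:
        name: str            # full name of the data chunk
        payload: bytes

    def matches(interest: Interest, content: Content) -> bool:
        # A Content matches an Interest whose name equals the Content name
        # or is a prefix of it in the name tree (components delimited by "/").
        return (content.name == interest.name
                or content.name.startswith(interest.name.rstrip("/") + "/"))

    def respond(interest: Interest, store: List[Content]) -> Optional[Content]:
        # At most one Content message is transmitted per received Interest,
        # and only if it matches; otherwise the party stays silent.
        for content in store:
            if matches(interest, content):
                return content
        return None

    store = [Content("/videos/clip1/seg0", b"..."), Content("/docs/readme", b"...")]
    print(respond(Interest("/videos/clip1"), store).name)   # -> /videos/clip1/seg0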
1.4.1.2 Party
A party is any object in the network that uses the CCNx protocol for communication. Parties include
both machines and applications using the protocol.
A CCNx name occasionally identifies a single data chunk, but more typically it describes a collection
of data by naming a point in the name tree under which there may be multiple data pieces. Similar to a
network address in the host addressing structure of the IP framework, where the network address
identifies the collection of hosts attached to that network, a name in CCNx identifies a collection of
data. Just as the IPv4 addressing scheme assigns a prefix of the IP address, the CCNx name is the prefix
of the name of every piece of content in the collection. For these reasons, a CCNx name may be referred
to simply as a prefix or name prefix.
The most specific component, known as the digest component, is a value derived from the data. The
digest component is redundant and is not transmitted, since it can be recomputed from the data itself.
Since the introduction of the web, in-network caching mechanisms have further reduced response delay
by embedding cache space into the network. In-network caching has transformed the centralized caching
technique into an uncoordinated and decentralized one [13]. Each CCN router has a built-in memory
module to cache the chunks passing by. CCN network equipment such as routers and gateways is
cache-enabled, instead of providing storage at the edge of the network as in P2P mechanisms or
stand-alone web cache proxies. The CCN architecture depends on in-network caching strategies, and
the efficiency of CCN nodes relies heavily on the performance of the caching strategy used [14].
The main advantage of caching in CCN is the reduction of cost in upstreaming and downstreaming of
data, contents, and Interests. Further benefits of caching include traffic reduction, reduction in data
redundancy, and limiting bottleneck queuing. The advantages of in-place caching include efficient
bandwidth utilization, reduction of information waste, and minimization of information misuse.
The CS in every router caches the content; this caching is analogous to the buffer memory of IP routers,
but IP routers cannot reuse data packets after forwarding them. CCN, in contrast, caches contents on
intermediate nodes, allowing a node to satisfy future requests for a particular content. The user is also
more secure, as the content name does not reveal any information about the user.
However, besides all these potentials, the question of how and when to cache is very important [15];
hence, a lot of research on cache placement and replacement has been conducted. A content placement
scheme decides where along the delivery path across the routers an object should be cached.
Fog computing places caches at the edge of the network, identifying objects by names instead of IP
addresses. Such a combination of ICN with fog will place everything residing in the cloud and the
Internet closer to the user. The challenge of assigning IP addresses to all devices is minimized through
ICN naming, and information from the cloud is retrieved through in-network and off-network caching.
A promising feature of fog computing is the provision of processing at the leaf nodes (smart devices,
mobile devices) of the cloud. Fog is interoperable, providing off-network processing and in-network
caching through CCN. Fog-enabled CCN caching would make information dissemination faster, with
lower latency, less excessive bandwidth consumption, and reduced streaming time. CCN-Fog takes a
step further towards shorter latency, better mobility, and higher data communication efficiency for fog
computing.
Information about every content event (cached, replaced, or dropped) is required by ICN routers to
update the ICN Manager. The work proposed in [9], [19] shows that caching only at a subset of nodes
along the content delivery path can improve in-network caching performance in terms of cache and
server hits. Latency-Aware Caching (LAC) is proposed in [20] to reduce the average latency of
retrieving content from any router.
In all of the above policies, limited attention has been given to reducing caching frequency and hence
to saving bandwidth and minimizing content retrieval latency. Keeping popular content in the cache for
a longer time increases the hit ratio and reduces server access. Getting contents from a cache rather than
from the server also saves bandwidth and latency. Although content popularity and hop reduction while
caching content in CCN are discussed in some existing caching policies, such as [9], [13], [18], [20],
the bandwidth consumption is still not optimal.
Consider a Fog-enabled network having V_n routers with CCN-based caching and limited cache size
C_i. Given a large number of contents in the network, the problem is to place the contents on the routers
while satisfying the constraints of latency and bandwidth utilization, with the objective of improving
(1) hit ratio, (2) path stretch, (3) latency, and (4) link load.
1.13 Motivation
As discussed above, existing policies give limited attention to reducing caching frequency, saving
bandwidth, and minimizing content retrieval latency; even the policies that do consider content
popularity and hop reduction, such as [9], [13], [18], [20], leave bandwidth consumption suboptimal.
To this end, we propose an optimal and real-time CCN in-network caching scheme for Fog
environments. Through Fog computing, caching at edge nodes and uniquely identifying contents with
names rather than IP addresses bring information residing in the cloud closer to the user. Our scheme
determines an optimized set of contents to be cached at each node towards the edge, based on content
popularity and content distance from the content source. We consider the following perspectives of
in-network caching using the idea of Fog: (a) placing the content near the user, (b) lowering bandwidth
consumption, (c) efficiently managing cache resources by reducing cache redundancy, and (d) lowering
the latency of information dissemination.
We propose an optimized caching policy that reduces content retrieval latency by caching content near
the user, at the edge.
We design an in-network cache management policy that jointly considers content popularity and hop
reduction to reduce bandwidth consumption.
Our caching policy caches popular content for a longer time, thereby reducing cache operations.
To demonstrate the effectiveness of our optimized caching policy, we perform simulations in Icarus [21],
an ICN simulator specifically designed for analyzing caching and routing policies in ICN. We compare
the optimal caching policy against existing ICN caching policies, LCE [14], LCD [14], CL4M [19],
and ProbCache [13], using GARR, a real-world Internet topology. We study the impact of various
parameters such as cache size, content popularity model, and stretch on our policy. We find a significant
improvement in latency, cache hit ratio, and stretch compared to the state of the art.
CHAPTER 2
RELATED WORK
Network caching policies in CCN have attracted the attention of many researchers in recent years.
Yusung Kim et al. [14] describe LCE and LCD. LCE is the default caching scheme in the ICN
infrastructure: every router stores all the contents it delivers and replaces them in least-recently-used
order. LCE assumes that all nodes possess large storage space; hence, it is a costly and suboptimal
scheme. This cache policy is only efficient if there is sufficient space to meet the demanded hit ratio,
and data redundancy is its main issue. The LCD [14] caching scheme copies the content to the direct
neighbor of the requesting node after a cache hit occurs, which minimizes the data redundancy
encountered in LCE. The algorithm aims to keep contents as close to the user as possible; the contents
must be popular enough to cause a cache hit before being evicted from the cache.
To overcome the limitations of LCE, a caching policy called Cache Less for More was proposed by
Wei Koong Chai et al. [19]. The proposed scheme caches at only one chosen intermediate node for each
request along the delivery path, using the concept of betweenness centrality: the number of times a
specific node lies on the delivery paths between all pairs of nodes in a network topology. Caching at
such a node not only minimizes cache replacement but also increases the cache hit rate by caching
where a cache hit is most probable.
Ioannis Psaras et al. [13] focus on the distribution of content in router caches using the in-network
caching concept. The ProbCache scheme caches contents probabilistically, with the basic purpose of
managing cache resources efficiently by reducing cache redundancy. The scheme leaves caching space
for other traffic sharing the same path and prefers keeping large caches at the edge. However, the
content popularity distribution is not considered by the scheme, and the approach incurs a high
computational cost compared to simple schemes such as random caching.
Jing Ren et al. [22] proposed a distributed caching scheme for ICNs that reduces bandwidth consumption
and limits cache operations by considering the input/output operations of cache storage. Max-Gain
In-network Caching (MAGIC) is a distributed caching scheme that jointly considers content popularity
and hop reduction along the delivery path to reduce bandwidth consumption. To reduce the number of
caching operations, the cache penalty is also taken into account when making placement decisions.
The paper [18] proposed a cooperative caching policy where on-path caching is facilitated by off-path
routers strategically placed by the service provider. While ICN basically supports on-path caching,
off-path caching can reduce duplication of contents and improve overall system performance. However,
the bandwidth utilization of the proposed scheme is very high, because all the edge routers have to send
their state information to the ICN Manager.
APC [23] supports energy-efficient content distribution: using the available cache space, APC caches
frequently requested contents in the routers, and energy consumption is minimized by minimizing the
hop count. The scheme is compared with LCE only, lacks theoretical details, and performs only a
numerical evaluation.
CachinMobile [24] minimizes energy consumption by using D2D communication. However, various
potential issues related to dynamic mobility, bandwidth resource scheduling, and interference
management are ignored.
Most of these works do not consider the effect of user mobility on cache placement policies. The paper
[25] utilizes user mobility and trajectory information for proactive and adaptive caching at base stations
and user terminals. Knowing the future position of the user can enable seamless handover and content
download in the case of proactive caching. However, the scheme may face issues when collecting user
data, and user security may be at risk. Furthermore, as the number of user terminals is growing
enormously, collecting data for all users for adaptive caching can be challenging, as is motivating users
to use D2D communication.
The paper [26] proposes an ICN infrastructure that places caches at edge nodes through fog computing
(as off-network caches), identifying objects by names instead of IP addresses. Adding ICN off-path
caches combined with fog computing lowers bandwidth utilization. However, the proposed scheme is
only a conceptual infrastructure and does not detail the underlying model.
A latency-aware caching strategy for ICN (greedy caching), which determines the set of contents to be
cached at each node in the network, is proposed in [20]. Based on the request rate of contents from
users, the scheme starts caching the most popular contents at the network edge. After caching contents
at the edge, the algorithm recalculates the relative popularity of the contents, based on request misses
from downstream, to cache contents in the core.
The authors in [27] proposed a cache strategy for Content Centric Networks based on a node's
importance: the most popular contents are placed at the most important nodes in the network, and vice
versa. The importance of a node is calculated from its flow rate, defined by the number of users
accessing the node, the request rate of contents at the node, and the distance of the node from the rest
of the nodes in the network. However, the proposed scheme focuses only on the geographical
connectivity of the node and ignores the distribution of contents in the network.
Contrary to [27], the authors in [28] argued that a node with higher graph connectivity cannot
necessarily cache contents optimally; rather, a node that can cache a large number of contents is a
better choice. They proposed a content-based centrality metric that takes into account how well a node
is connected to the contents the network delivers. The most frequently accessed contents are placed at
the node with the highest Content-Based Centrality (CBC). The CBC of a node is defined as the ratio
of the number of shortest paths between all users and all contents that pass through that node to the
total number of shortest paths between all users and all contents. However, the proposed scheme
focuses only on content distribution and ignores the geographical location of the node.
In all of the above policies, limited attention has been given to reducing caching frequency and hence
to saving bandwidth and minimizing content retrieval latency. Keeping popular content in the cache for
a longer time increases the hit ratio and reduces server access, and getting contents from a cache rather
than from the server also saves bandwidth and latency. Although content popularity and hop reduction
while caching content in CCN are discussed in some existing caching policies, such as [9], [14], [22],
[18], the bandwidth consumption is still not optimal.
To this end, as introduced in Chapter 1, we propose an optimal and real-time CCN in-network caching
scheme for Fog environments. Our scheme determines an optimized set of contents to be cached at
each node towards the edge, based on content popularity and content distance from the content source,
with the goals of (a) placing the content near the user, (b) lowering bandwidth consumption,
(c) efficiently managing cache resources by reducing cache redundancy, and (d) lowering the latency
of information dissemination.
CHAPTER 3
SYSTEM ARCHITECTURE
This chapter discusses the architecture of our proposed system in detail. Each module of the
architecture is described briefly along with the definitions of terms used.
We model the network as a set of nodes V = {v1, v2, v3, ..., vn} and links E = {e1, e2, e3, ..., em}. Let
F = {f1, f2, f3, ..., fk} be the set of available contents and S = {S1, S2, S3, ..., Sn} be the set of servers
in the network. Initially, all contents are distributed among the network servers, which are directly
connected to routers. The following are the major components of the proposed CCN-based Fog
architecture; Figure 3.1 depicts the architecture.
End Nodes: Users
The CCN user is the originator of requests: it initiates Interest packets and forwards them to the edge
routers linked to it. A user sends an Interest packet for the desired data over any available connectivity,
and any party receiving the Interest and having matching data may transmit that content to the user.
3.2.1 PIT
Pending, not-yet-satisfied Interests are contained in the PIT. The entries of the PIT contain the incoming
interface of contents, the content name, a timer to manage PIT entries, and a NONCE value for the
identification of individual Interest packets. The PIT records the Interest name and incoming face
locally when an Interest packet is forwarded out. When the data message is returned, the PIT looks up
the data message name in the table; if a matching Interest name exists, the CCN node sends the data
packet through the connected face(s) of the matched entries. The PIT also avoids forwarding multiple
requests for the same content: when Interest messages for the same content are received from multiple
incoming faces, only the first one is forwarded, and the others are added to the PIT to wait for the data
message. As soon as the CCN node receives the data message in return, all the other attached Interest
messages can be satisfied with it [30].
3.2.2 FIB
Routing the contents to the next node towards the content source is the responsibility of the FIB, as is
maintaining and managing the outgoing interfaces and name prefixes. The CCN FIB is similar to an
IP router's FIB, but an IP FIB is populated by routing messages while the CCN FIB is populated by
content advertisements. A content provider sends out a content advertisement in the network to publish
available contents; on receiving such an advertisement, every CCN router adds the content name or
prefix, together with the incoming interface, to its FIB. By default, CCN supports multicasting, i.e., an
entry of the CCN FIB can have multiple outgoing faces. The forwarding strategy is provided through
the forwarding algorithm.
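A minimal sketch of how the CS, PIT, and FIB interact when packets arrive, assuming simplified Python structures (this is a simplification of the pipeline described above, not any particular CCN codebase):

    class CCNNode:
        def __init__(self):
            self.cs = {}    # Content Store: name -> data
            self.pit = {}   # PIT: name -> set of incoming faces awaiting the data
            self.fib = {}   # FIB: name prefix -> list of outgoing faces

        def on_interest(self, name, in_face):
            # 1. Content Store: if the content is cached, satisfy the Interest locally.
            if name in self.cs:
                self.send_data(name, self.cs[name], {in_face})
                return
            # 2. PIT: aggregate duplicate Interests; only the first is forwarded.
            if name in self.pit:
                self.pit[name].add(in_face)
                return
            self.pit[name] = {in_face}
            # 3. FIB: longest-prefix match selects the upstream face(s).
            prefix = max((p for p in self.fib if name.startswith(p)), key=len, default=None)
            if prefix is not None:
                for out_face in self.fib[prefix]:    # multiple faces: multicast
                    self.forward_interest(name, out_face)

        def on_data(self, name, data):
            # Returning data consumes the PIT entry and feeds every waiting face;
            # the caching decision (Content Store insertion) is applied here.
            self.send_data(name, data, self.pit.pop(name, set()))
            self.cs[name] = data

        def forward_interest(self, name, face):
            print(f"Interest {name} -> face {face}")

        def send_data(self, name, data, faces):
            print(f"Data {name} -> faces {sorted(faces)}")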
For simplicity, we assume that cached content units have the same size [6], [14]. To obtain the
popularity of the requested contents, we also assume that each intermediate router counts the frequency
of requests for each content [14]. Each caching node can obtain the hop count, i.e., the number of hops
from it to the original server, with the help of the TTL associated with the data packet if the Internet
Protocol (IP) stack is used. For the rest of the thesis, we use the words "router" and "cache node"
interchangeably.
CHAPTER 4
SYSTEM MODEL
In this section, we discuss in detail the proposed heuristic Optimized Content Caching scheme. The
proposed scheme considers (a) the frequency of requested contents and (b) the distance of requested
contents from the content source. Table 1 summarizes the notation.
Table 1: Model parameters and decision variable
Parameter | Meaning
D(u, v)   | Distance between the requesting node and the source node, in hop count
X(v, f)   | Decision variable: takes value 1 if node v caches a copy of content f, and 0 otherwise
With this caching policy, formula (1) represents our objective function of minimizing the total latency
for all requests; formula (2) enforces the finite cache size at each router; formula (3) states that an
intermediate node can deliver a content f_k if and only if it has stored the content in its cache; and
formula (4) requires that each request be served by at least one router.
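One consistent LaTeX formulation of (1)-(4), reconstructed from their descriptions using the notation of Table 1 together with an assumed auxiliary variable Y(u, f, v) that equals 1 if a request for content f issued at node u is served from node v, and 0 otherwise, is the following sketch:

\begin{align}
\min_{X,\,Y}\;\; & \sum_{u \in V} \sum_{f \in F} \lambda(f, u) \sum_{v \in V} D(u, v)\, Y(u, f, v) \tag{1} \\
\text{s.t.}\;\; & \sum_{f \in F} X(v, f) \le C_v \qquad \forall v \in V \tag{2} \\
& Y(u, f, v) \le X(v, f) \qquad \forall u, v \in V,\; f \in F \tag{3} \\
& \sum_{v \in V} Y(u, f, v) \ge 1 \qquad \forall u \in V,\; f \in F \tag{4}
\end{align}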
To achieve our objective of minimizing latency for all requests, we must decide whether or not to cache
the requested contents, subject to the large number of contents in the network and the finite cache
capacity of each node.
Here λ(f, v) is the request rate of content f at node v, and D(v, u) is the distance covered by node v to
get the content from node u, i.e., the number of hops traversed by the request; D(v, u) can be obtained
with the help of the TTL associated with the data packet if the Internet Protocol (IP) stack is used.
λ(f, v) is normalized with the maximum request rate λ(f, v)_max of cached contents, where λ(f, v)_max
is the maximum request rate known at the router at any instant of time; similarly, D(v, u) is normalized
with D(v, u)_max, the maximum hop count in the network. If a content is cached at a CCN router, then
its λ(f, v) increases linearly with every new request for the same content; if a content that is not cached
at the router is served, then its λ(f, v) = 1.
To count the number of hops that a request message has traversed, the request message records an R_h
value, and the header of the data message includes both R_h and C_h values. Every router increases
the R_h value of an Interest packet by one. The content source attaches the R_h value it sees on the
Interest message to the content message, and every router increases the C_h value of the content
message by one. Therefore, during the delivery of the data message back to the client, the R_h value in
the content message remains constant and represents the path length for this specific content, while the
C_h value denotes the number of hops the content message has travelled so far. The hop count between
any router and the content source can hence be derived from these two values.
Furthermore, every time a cache hit occurs, we update the CGP of f so that contents that are frequently
accessed have a high CGP. The following equation is used to update the CGP, where P_init ∈ (0, 1) is
an initialization constant. Similarly, the CGP of the remaining contents that are not accessed must be
aged, their value being reduced in the process. The aging equation is shown in (7), where the aging
constant lies in (0, 1) and k is the number of time units elapsed since the last time the metric was aged.
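A plausible form of these two rules, written with the aging constant as δ (both equations are reconstructions from the description above rather than the verbatim formulas, and the update rule is numbered (6) only for continuity):

\begin{align}
CGP_f &\leftarrow CGP_f + P_{init} && \text{on each cache hit for } f \tag{6} \\
CGP_f &\leftarrow CGP_f \cdot \delta^{k}, \quad \delta \in (0, 1) && \text{aging after } k \text{ idle time units} \tag{7}
\end{align}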
We now propose a heuristic algorithm for on-path caching, which runs on each CCN router
independently.
Cache size = Ci
 4: if cache hit on router then
 6: else
12:     if Ci > 0
16:     if Ci == 0
19:     end if
20: end if
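A minimal Python sketch of this on-path caching decision, assuming a CGP boost on each hit, caching while free space remains, and CGP-based replacement when the cache is full (the helper names, the eviction rule, and the P_INIT value are illustrative assumptions, not the original listing):

    P_INIT = 0.3  # assumed value for the initialization constant P_init in (0, 1)

    class Router:
        def __init__(self, capacity):
            self.capacity = capacity   # cache size Ci, in content units (assume Ci >= 1)
            self.cache = {}            # content name -> data
            self.cgp = {}              # content name -> CGP value

        def on_content(self, name, data, gain):
            # 'gain' is the content's popularity/hop-reduction score derived
            # from lambda(f, v) and D(v, u) as described in the text.
            if name in self.cache:                        # cache hit: boost CGP
                self.cgp[name] += P_INIT
                return
            if len(self.cache) < self.capacity:           # free space left
                self.cache[name] = data
                self.cgp[name] = gain
            else:                                         # cache full: CGP-based replacement
                victim = min(self.cache, key=lambda n: self.cgp[n])
                if gain > self.cgp[victim]:               # replace only a less valuable entry
                    del self.cache[victim], self.cgp[victim]
                    self.cache[name] = data
                    self.cgp[name] = gain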
CHAPTER 5
PERFORMANCE EVALUATION
LCE: In this content caching strategy, the contents are cached at each router along the path as they are
downloaded, and are replaced in least-recently-used order [14].
LCD: The LCD caching scheme copies the content to the direct neighbor of the requesting node after a
cache hit occurs. The algorithm aims at keeping contents as close to the user as possible [14].
ProbCache: The ProbCache scheme caches contents probabilistically. Its basic purpose is efficient
management of cache resources by reducing cache redundancy; the scheme leaves caching space for
other traffic sharing the same path and prefers keeping large caches at the edge [13].
CL4M: To make cache decisions, this policy leverages the number of shortest paths traversing a cache
(i.e., the concept of betweenness centrality). To maximize the probability of a cache hit, this scheme
caches content at the node with the greatest betweenness centrality [19].
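For comparison, the four baselines reduce to very different cache-decision rules; the following compact Python sketch restates each rule as described above (the function names and simplified signatures are our own, not the original implementations):

    import random

    def lce_should_cache(node, path):
        # LCE: every router on the delivery path caches the content.
        return True

    def lcd_should_cache(node, hit_node, path):
        # LCD: with the path ordered from the content source towards the user,
        # only the direct downstream neighbor of the node where the cache hit
        # occurred stores a copy.
        return path.index(node) == path.index(hit_node) + 1

    def probcache_should_cache(cache_weight, position_weight):
        # ProbCache: cache probabilistically, weighted so that contents tend
        # to be kept near the edge and space is left for other flows sharing
        # the path (the two weighting factors here are simplified stand-ins).
        return random.random() < min(1.0, cache_weight * position_weight)

    def cl4m_should_cache(node, path, betweenness):
        # CL4M: cache only at the on-path node with the greatest betweenness
        # centrality.
        return node == max(path, key=lambda n: betweenness[n])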
To route requests from requester to source, we assume the network uses Dijkstra's shortest weighted
path routing, where the delays on the links correspond to the weights. If the content is found at any
intermediate router en route to the source, it is served from that router's cache. To demonstrate the
effectiveness of our optimized caching policy, we perform simulations in Icarus [21], an ICN simulator
specifically designed for analyzing caching and routing policies in ICN. Scenario generation,
experiment orchestration, experiment execution, and results collection are the four basic building
blocks of Icarus. In the simulation, each content request is considered an event; whenever an event
occurs, the corresponding timestamp, receiver, and source of the event are stored, and the result
collection block of the simulator gathers the results of the simulation. Latency is calculated in Icarus
as the sum of the delays on each link traversed during the content download.
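As a simple illustration of this latency accounting (a generic sketch of the idea, not Icarus's internal API):

    from typing import Dict, Iterable, Tuple

    Link = Tuple[str, str]

    def download_latency(links_traversed: Iterable[Link], delay_ms: Dict[Link, float]) -> float:
        # Latency of one content download: the sum of the delays (in ms) on
        # each link traversed while fetching the content.
        return sum(delay_ms[link] for link in links_traversed)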
Extensive experiments were performed on various real-world topologies: WIDE, the Japanese academic
network, with 30 nodes and 33 edges; GEANT, the European academic network, with 40 nodes and
60 edges; and GARR, the Italian national computer network for universities, with 61 nodes and 89
edges. We report results for the GEANT topology, as the choice among these topologies had an
insignificant effect on the results. Table 2 describes our simulation setup.
Table 2: Simulation setup
Parameter                 | Value
No. of measured requests  | 40,000
Warm-up requests          | 10,000
Content universe (F)      | 1,000
Cache size per node       | 4%-20% of content universe
Popularity skewness (α)   | 0.6-1.0
In our simulation, the caches are initialized with the first 10,000 contents, and subsequent requests are
used for performance evaluation. A Zipfian distribution with skewness α ∈ [0, 1] is assumed as the
probability of requesting a content. Originally, contents are stored in and uniformly distributed across
the content sources (servers). Routers with degree one are considered users. Table 2 lists the simulation
parameters; the following configuration is considered for result generation: content universe F = 1000;
the cache size of each node varies from 4% to 20% of the total content universe; and the content
popularity skewness α varies from 0.6 to 1.0, where α = 0.6 refers to the low popularity model, α = 0.8
to the normal popularity model, and α = 1.0 to the high popularity model. Results in the figures are
averaged over 10 experiments.
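For concreteness, a minimal sketch of the Zipfian request model assumed above, where the probability of requesting the content of popularity rank i is proportional to 1/i^α (an illustration of the workload, not the simulator's code):

    import random

    def zipf_weights(n_contents, alpha):
        # P(rank i) is proportional to 1 / i^alpha; alpha controls the skewness.
        weights = [1.0 / (rank ** alpha) for rank in range(1, n_contents + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    def next_request(n_contents=1000, alpha=0.8):
        # Draw the rank of the next requested content (F = 1000, normal popularity).
        probs = zipf_weights(n_contents, alpha)
        return random.choices(range(1, n_contents + 1), weights=probs, k=1)[0]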
The primary purposes of caching contents in the network are to (1) lower the content retrieval latency,
since cached contents near the users can be retrieved faster than from the original server; (2) reduce
traffic and congestion, because fewer links are traversed in the network; and (3) reduce the load on the
server, as each cache hit means one less request served by the server.
5.2.1 Hit Ratio

$$\text{Hit ratio} = \frac{\text{cache hits}}{\text{total requests}} \qquad (8)$$
5.2.2 Latency
Latency is the number of time units (in milliseconds) a content takes to reach the user, i.e., the delay in
delivering a content; it is calculated as the sum of the delays on each link traversed during the content
download [21].
5.2.3 Path Stretch
The actual path length is the sum of the request and content path lengths, and the shortest path length
is the corresponding sum in hop counts along the shortest path. Path stretch is the ratio between the two
and is calculated using the following formula [14]:

$$\text{Path stretch} = \frac{\text{actual path length}}{\text{shortest path length}}$$
We calculate the performance improvement of strategy X over strategy Y as the difference between the
two strategies divided by the performance of strategy Y, expressed as a percentage. Our experimental
results show that the optimized cache strategy outperforms state-of-the-art strategies over a wide range
of simulation parameters.
We perform simulations to measure the latency of the different caching strategies at popularity rates
ranging from 0.6 to 1.0. From the derived results, we observe that the optimized content cache policy
outperforms state-of-the-art policies by 4%-18% for different cache sizes (ranging from 4% to 20% of
the content universe). Better cache utilization is the key reason behind the greater performance of the
optimized cache strategy. Figure 5.1 shows the superior latency performance of the proposed scheme
at different popularity models and cache sizes.
Figure 5.1: Latency performance of the Optimized Cache scheme using different popularity models
5.3.2 Latency Performance for the GEANT Network Topology Using Different Content Universes
The performance improvement of the proposed scheme on GEANT when using different content
universes is shown in Figure 5.2. Results are generated with a fixed cache size of about 5% of the
content size, while the content universe varies from 10,000 to 50,000. Our simulation results show that
the performance of the optimized cache is superior to the rest of the policies for different content sizes
as well. The results show that as the content universe grows, latency decreases: since the cache-to-content
ratio is kept constant, an increase in the content universe increases the absolute cache size, and cache
utilization improves because popular contents are readily available in the cache. The primary purpose
of this simulation is to show the scalability of our algorithm; these results show that the optimized
cache can work with large content universes, which demonstrates the efficiency of our technique.
Figure 5.2: Latency performance on the GEANT network using different content universes
Figure 5.3: Hit ratio of the caching schemes at cache sizes 0.04-0.2 under different popularity models (α = 0.6, 0.8, 1.0)
5.5 Discussion on Path Stretch Performance
While the hit rate indicates the percentage of requests served within the network, path stretch defines
the fraction of the path that a content travels to be retrieved, i.e., the ratio between the actual path length
and the shortest path length. Figure 5.4 shows the path stretch ratio at different popularity models and
cache size ratios. We observe that, similar to the hit ratio, the hop count of the optimized cache decreases
at different popularity models with different cache sizes, and its performance is superior to
state-of-the-art techniques: the optimized cache strategy decreases the average hop count by 18%-51%.
The primary reason for the improvement is the intelligent content placement adopted by the Optimized
Cache, where every node caches content based on the number of requests it receives from downstream
nodes, and contents from distinct routers are cached at the edge.
5.6 Link Load Performance Evaluation
For the low popularity model (α = 0.6) and the high popularity model (α = 1.0) at variable cache sizes
(4%-20%), we find a significant improvement in the link load of the proposed scheme, as shown in
Figure 5.5. With different popularity models at a fixed cache size, the performance of the proposed
scheme is likewise superior to the state-of-the-art schemes, as Figure 5.6 describes. The basic reason
behind the improved performance is the better placement of popular contents at the edge: popular
contents are intelligently cached at edge routers near the users.
Figure 5.4: Path stretch performance of the Optimized Cache at different cache sizes and popularity models
Figure 5.5: Link load (bytes/ms) of the proposed scheme versus cache size (0.04-0.2) at the low (α = 0.6) and high (α = 1.0) popularity models
Figure 5.6: Link load (bytes/ms) of the proposed scheme versus popularity model (α = 0.6, 0.8, 1.0) at fixed cache sizes of 5% and 10%
CHAPTER 6
CONCLUSION AND FUTURE WORK
In this chapter we summarize the main contributions of the thesis and discuss future work that can be
done as an extension of our work.
The simulation study shows that our proposed caching scheme has the potential to enhance system
performance compared to the existing state of the art. We verify the proposed scheme on the real-world
network topology GEANT, containing 40 nodes and 61 edges. The results show that our proposed
scheme improves latency by 4%-18%; similarly, the hit ratio, stretch, and link load performance of the
proposed scheme is superior to the other schemes. The proposed cache strategy improves the hit rate
by 9%-35% and decreases the average hop count by 18%-51%. The results show that the proposed
content placement technique intelligently caches contents at the edge nodes.
We plan to extend our proposed scheme with content-based centrality, where instead of caching content
at the edge, the nodes with a high flow rate cache popular contents.
We also plan to move our scheme towards dynamic topologies and study the impact of mobility on the
proposed scheme.
Our work can also be extended by using a genetic algorithm.
CHAPTER 7
REFERENCES
[1] D. R. Cheriton and M. Gritter, "TRIAD: A New Next-Generation Internet Architecture," 2000.
[10] G. Carofiglio and G. Morabito, "From Content Delivery Today to Information Centric Networking," 2013.
[11] V. Athanasios, L. Zhe, and S. Gwenda, "Information Centric Networking: Research Challenges and Opportunities," 2015.
[12] D. D. Ahir and B. Prashant, "Content Centric Networking and its Applications," Vol. 3, No. 12, December 2012.
[13] I. Psaras, W. K. Chai, and G. Pavlou, "Probabilistic In-Network Caching for Information-Centric Networks," 2012.
[14] C. Bernardini, T. Silverston, and O. Festor, "A Comparison of Caching Strategies for Content Centric Networking," 2016.
[16] T. H. Luan, L. Gao, and Z. Li, "Fog Computing: Focusing on Mobile Users at the Edge," 2015.
[17] Y. Kim and I. Yeom, "Performance Analysis of In-Network Caching for Content-Centric Networking," Computer Networks, Vol. 57, 2013.
[18] H. K. Rath, B. Panigrahi, and A. Simha, "On Cooperative On-Path and Off-Path Caching Policy for Information Centric Networks," IEEE 30th International Conference on Advanced Information Networking and Applications (AINA), 2016.
[19] W. K. Chai, D. He, I. Psaras, and G. Pavlou, "Cache 'Less for More' in Information-Centric Networks," 2013.
[21] L. Saino, I. Psaras, and G. Pavlou, "Icarus: A Caching Simulator for Information Centric Networking (ICN)," 2014.
[22] J. Ren, W. Qi, C. Westphal, J. Wang, K. Lu, S. Liu, and S. Wang, "MAGIC: A Distributed Max-Gain In-Network Caching Strategy in Information-Centric Networks," IEEE INFOCOM Workshop NOM, 2014.
[23] J. Li, B. Liu, and H. Wu, "Energy-Efficient In-Network Caching for Content-Centric Networking," IEEE Communications Letters, Vol. 17, No. 4, April 2013.
[25] R. Wang and X. Peng, "Mobility-Aware Caching for Content-Centric Wireless Networks: Modeling and Methodology," IEEE Communications Magazine, 2016.
[26] I. Abdullah, S. Arif, and S. Hassan, "Ubiquitous Shift with Information Centric Network Caching Using Fog Computing," 2015.
[30] B. Mathieu, P. Truong, J. F. Peltier, and W. You, "Media Networks; Architectures, and