Least fresh first cache replacement policy for NDN-based IoT networks
PII: S1574-1192(18)30088-9
DOI: https://doi.org/10.1016/j.pmcj.2018.12.002
Reference: PMCJ 983
Please cite this article as: M. Meddeb, A. Dhraief, A. Belghith et al., Least fresh first cache
replacement policy for NDN-based IoT networks, Pervasive and Mobile Computing (2018),
https://doi.org/10.1016/j.pmcj.2018.12.002
Least Fresh First Cache Replacement Policy for NDN-based IoT Networks

Maroua Meddeb (a,b), Amine Dhraief (a), Abdelfettah Belghith (c,*), Thierry Monteil (b,d), Khalil Drira (b), Hassan Mathkour (c)

(a) HANA Lab, University of Manouba, Tunisia
(b) LAAS-CNRS, Université de Toulouse, CNRS, Toulouse, France
(c) College of Computer and Information Sciences, King Saud University, Saudi Arabia
(d) Université de Toulouse, INSA, Toulouse, France
Abstract
* Corresponding author. Tel.: +966 535920540. Email address: abelghith@ksu.edu.sa (Abdelfettah Belghith)
1. Introduction
where the authors proposed a local storage service at the edge of the network
to temporarily buffer generated data. The produced data is then selectively
processed and/or synchronized with the cloud only when necessary depend-
ing on the application strategy and requirements. Such edge storage is intended to better shape the network traffic according to the network conditions. In this paper, we rather concentrate on in-network caching.
Some studies [4, 5, 6] have already identified the Named Data Networking
(NDN) architecture as the most suitable ICN architecture for IoT systems.
On the other hand, in an IoT context, data are transient and frequently
updated by the producer. As a consequence, copies stored in caching nodes
may become out of date after a certain period of time, and need to be evicted.
Caching mechanisms already use cache replacement policies in order to allow the storing of new items once the cache is full. However, existing policies are based on the frequency of incoming requests or on content popularity, and do not consider data validity.
In this work, we propose the Least Fresh First (LFF) cache replacement
policy. The rationale behind this policy is to predict sensors future events
based on their past behavior. To this end, we rely on the Autoregressive
Moving Average (ARMA) time series model [7]. To the best of our knowl-
edge, none of the studies that addressed the cache replacement policies have
considered the cache freshness requirement. To evaluate our proposal, we
carry out extensive simulations using the ccnSim simulator [8]. We compare
LFF to the different well-known cache replacement policies in an ICN-based
IoT environment with regard to various performance metrics. The obtained results show that our proposal significantly improves data freshness compared to other policies. In addition, it improves system performance in terms of server hit reduction ratio, hop reduction ratio and response latency.
The remainder of this paper is organized as follows: We give in section
2 an overview of ICN and its use in an IoT environment. We present in
section 3 the most cited caching policies. We detail our proposal in section
4. In section 5, we evaluate the performance of our proposal and analyze the
obtained results. We finally conclude the paper in section 6.
Caching policies address two main issues: where to cache, and which data item to evict once the cache is full.
can be used in any type of topology. The main objective of the consumer-cache strategy is, first, to reduce the caching cost while maintaining system performance in terms of hop reduction and server hit reduction ratios [12] and, second, to enhance the freshness percentage of the requested content. To the best of our knowledge, [12] is the only work that has investigated the cache coherency issue for ICN-based IoT [13, 14].
The betweenness centrality strategy [15] depends on the betweenness
parameter calculated for each node in the topology. The parameter measures
the number of times a node belongs to a path connecting any two nodes in
the topology. Nodes with the highest betweenness centrality parameter store
a copy of the data. Finally, the ProbCache caching strategy [16] selects the
cache node with a probability inversely proportional to the distance between
the consumer and the producer. That means this strategy privileges one or
more nodes which are close to the consumer who sent the request.
The Least Frequently Used (LFU) policy proposes to keep popular objects in the cache in order to satisfy a high number of requests. With LFU, the cache node keeps track of the number of times a data object satisfies a request, and replaces the item with the lowest frequency. The Random Replacement (RR) policy randomly chooses the data item to be evicted. Caches with complex data structures motivate the use of randomized cache replacement policies: the RR policy does not require any state information, so both memory and processing power can be saved.
Some other specific strategies have been recently proposed for ICN. In [18], the authors introduced a Universal Caching (UC) strategy where the replacement
decision depends on a parameter assigned to each incoming content. This
parameter is based on the distance from the source to the current node, the
reachability of the router and the frequency of the content access. Results
show that this policy performs better than LRU and FIFO in ICN networks
in terms of cache hits and the average number of hops required to get the
requested content. In [19], Al-Turjman et al. introduced the Least-Value
First (LVF) cache replacement policy which takes into account the delay for
retrieving a content as well as the popularity and age of the content. LVF
was shown to outperform FIFO and LRU in terms of time-to-hit, hit-rate,
in-network delay and data publisher load.
In an IoT context, data are transient and frequently updated by the
producer. As a consequence, copies stored in caching nodes may become out
of date after a certain short period of time. Facing this stringent freshness
requirement, it is necessary to consider the data freshness to privilege the
eviction of stale contents from the cache. To the best of our knowledge,
existing cache replacement policies are enhanced only by embedding a constant expiration delay (Time To Live) into cached contents. Such an expiration delay can support data freshness to a certain extent, but since it uses a fixed delay it may delete still-valid content or, conversely, keep out-of-date content in the cache for a long time.
period (e.g., 1 hour), a new value is recorded. The OnOff transmission mode stipulates that the content is updated as soon as a new event occurs. Consider the example of a presence sensor: the value 0 indicates the absence of persons, and once someone enters the room, the value is updated to 1. Finally, in the request-response transmission, as its name indicates, the consumer directly sends a request to get the current value of the sensor. In this latter case, sensors are considered passive, do not follow any particular behavior, and there is usually no need to cache their data. Furthermore, considering continuous transmission as periodic with a tiny period, we may classify IoT events into two classes: periodic and OnOff.
Concerning the periodic transmission, when the period T elapses, the
content is no longer valid. However, with the OnOff transmission, the sensor behavior cannot be predicted and value updates can occur at any time. For this reason, the prediction process is used with this mode. We differentiate in Algorithm 1 the two modes of transmission in the calculation of T_fresh. In the case of a periodic sensor, T_fresh is the remaining time of the period since the last update (Line 8). With an OnOff sensor, T_fresh is estimated using forecasting tools.
1: Input: Received request
2: Output: T_fresh
3: Data:
4:   Sensors will receive requests from different consumers to retrieve the sensor's value
5: Begin
6: for each received request do
7:   if Data.flow = "Periodic" then
8:     T_fresh = (update_time + T) - current_time
9:   else
10:    Estimate T_fresh
11:  end if
12: end for
Algorithm 1: Calculation of T_fresh
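As a minimal sketch of the periodic branch of Algorithm 1 (Line 8), the remaining freshness of a periodic sensor value can be computed as follows; function and parameter names are illustrative, not from the authors' implementation:

```python
def t_fresh_periodic(update_time, period, current_time):
    """Line 8 of Algorithm 1: remaining validity (update_time + T) - now."""
    return (update_time + period) - current_time

# A sensor updated at t=100 with period T=60, queried at t=130,
# still has 30 time units of validity left.
print(t_fresh_periodic(100, 60, 130))
```

For an OnOff sensor the `else` branch applies instead, and T_fresh must be estimated with the forecasting tools described next.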
approach can be regarded as a special case of the more general and more powerful Kalman filter algorithm [21]. Both apply only to processes that satisfy linear models with a finite number of parameters. The Kalman filter algorithm carries an additional strength over the Box and Jenkins approach, especially in handling missing data. In our case, ARMA is sufficient since all the events will be recorded. Machine learning techniques work better when a rather large amount of data and enough training are available, while ARMA is better suited for smaller data sets. Exponential smoothing is suitable for forecasting data that does not display any clear trend or seasonality, whereas the ARMA model is designed for stationary time series. In this study, we used real IoT data extracted from the ADREAM [22] building at the LAAS-CNRS laboratory, which is a smart building. The building hosts our smart
apartment equipped with different sensors (temperature, humidity, luminescence, presence, etc.) as well as actuators such as electric plugs attached to different elements: lamps, fans, humidifier, etc. We have noticed a very low variability of the data around their average value, which led us to choose the ARMA model [23] as a forecasting tool.
The ARMA model was introduced by Box and Jenkins in [24]. It covers a very large family of stationary processes and has two advantages. On the one hand, these processes are excellent and precise forecasting tools; the forecasting error is proved to be less than or equal to 5% [25]. On the other hand, well-developed methods exist to estimate their parameters. The prediction operation in ARMA is based not only on past events but also on some unexpected recent events. In fact, it consists in eliminating obvious trends, such as periodicity and growth, and then focusing on the residue; this residue is modeled and the forecast relies on such a model.
are respectively the numbers of past values $X_i$ and noise terms $\epsilon_i$ used in the calculation. $p$ and $q$ can be constant, and it was shown in [25] that this model needs a maximum of 30 values to obtain a good estimation. However, to optimize the calculation, $p$ and $q$ can be variable and computed according to the values of $X_i$ and $\epsilon_i$.

$$X_n + \phi_1 X_{n-1} + \phi_2 X_{n-2} + \cdots + \phi_p X_{n-p} = \epsilon_n + \theta_1 \epsilon_{n-1} + \theta_2 \epsilon_{n-2} + \cdots + \theta_q \epsilon_{n-q} \qquad (1)$$
The ARMA process is the convolution of two processes, starting with the AR (Autoregressive) process followed by the MA (Moving Average) process. $X_n$ is an autoregressive process of order $p$, denoted $AR_p$. It is therefore determined by the variance of the white noise $\epsilon_n$ as well as by its canonical polynomial (Eq. 2), where $\phi_1, \dots, \phi_p$ are the parameters of the model. The autoregressive model specifies that the estimated future value $X_n$ depends linearly on its own previous values $X_{n-1}, \dots, X_{n-p}$ and on a stochastic term $\epsilon_n$ representing the imperfectly predictable component. $\epsilon_n$ is the white noise, whose variance $\sigma^2$ is given in Eq. 3:

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar{X}^2 \qquad (3)$$
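As a quick sketch, the empirical variance of Eq. 3 (mean of squares minus the squared mean) can be computed directly; the function name is illustrative:

```python
def noise_variance(xs):
    """Empirical variance per Eq. 3: (1/n) * sum(X_i^2) - mean(X)^2."""
    n = len(xs)
    mean = sum(xs) / n
    return sum(x * x for x in xs) / n - mean * mean
```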
$$R_p^{(n)} \phi^{(n)} = C_{X,p}^{(n)} \;\Leftrightarrow\; \phi^{(n)} = (R_p^{(n)})^{-1} C_{X,p}^{(n)} \qquad (4)$$

$$C_X^n(k) = \frac{1}{n}\sum_{j=1}^{n-k} X_{j+k} X_j \qquad (5)$$

$$R_p^{(n)} = \begin{pmatrix} 1 & C_X^n(1) & C_X^n(2) & \cdots & C_X^n(p-1) \\ C_X^n(1) & 1 & C_X^n(1) & \cdots & C_X^n(p-2) \\ C_X^n(2) & C_X^n(1) & 1 & \cdots & C_X^n(p-3) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ C_X^n(p-1) & C_X^n(p-2) & C_X^n(p-3) & \cdots & 1 \end{pmatrix}$$

We solve the
ARp equation in Algorithm 2. As lines 11 to 21 show, $p$ is set according to the calculated $\phi_p$; the goal is to choose the best value of $p$ for the given distribution $(X_i)$ that suffices for a precise estimation. A corollary used in AR provides a tool for finding the right value of $p$ when we want to model a series with an AR process. We can, in fact, calculate one by one the empirical partial correlations $\phi_p$. If we compare the quantile
1: Input: (X_i)
2: Output: X_{n+1}
3: Data:
4:   Sensors with a full past-event container
5:   Sensors with an OnOff transmission mode
6: Begin
7: for each sensor do
8:   Calculate sigma^2 (Eq. 3)
9:   for each received request do
10:    p = 1
11:    while p <= n do
12:      Calculate C_{X,p}^{(n)} (Eq. 5)
13:      Calculate (R_p^{(n)})^{-1}
14:      Calculate phi_p (Eq. 4)
15:      if |phi_p| >= Z_{.975} / sqrt(n) then
16:        if p >= 1 then
17:          p--
18:        end if
19:        break
20:      else
21:        p++
22:      end if
23:    end while
24:    Calculate X_{n+1} (Eq. 2)
25:  end for
26: end for
Algorithm 2: Calculation of the AR_p process parameters
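The core quantities of Algorithm 2 can be sketched for the simplest case $p = 1$, where Eq. 4 collapses to a scalar: the empirical autocovariance of Eq. 5 and the estimate $\phi_1 = C_X(1)/C_X(0)$. This is an illustrative sketch, not the authors' implementation, and it uses the standard forecasting convention $X_{n+1} \approx \phi_1 X_n$ (sign conventions differ from the canonical polynomial form of Eq. 1):

```python
def autocov(xs, k):
    """Empirical autocovariance of lag k (Eq. 5): (1/n) * sum X_{j+k} X_j."""
    n = len(xs)
    return sum(xs[j + k] * xs[j] for j in range(n - k)) / n

def ar1_forecast(xs):
    """AR(1) special case of Eq. 4: phi_1 = C(1)/C(0), forecast phi_1 * X_n."""
    phi1 = autocov(xs, 1) / autocov(xs, 0)
    return phi1 * xs[-1]
```

For larger $p$, the full Toeplitz system $R_p \phi = C_{X,p}$ would be solved instead, increasing $p$ until $|\phi_p|$ falls below the significance threshold, as in lines 11 to 21 of Algorithm 2.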
of order 0.975 of the Gaussian distribution ($Z_{.975}$), divided by $\sqrt{n}$, with each $|\phi_p|$, we can see from
which value of $p$ the absolute value of $\phi_p$ remains smaller (line 15). We can then admit that $p - 1$ is the best value (line 17), since $p$ pushes the process into the rejection region.
The $X_{n+1}$ ($T_{fresh}$) obtained with the AR process is already sufficient, especially with perfectly stationary series. However, for better precision, the Moving Average process is used to readjust $T_{fresh}$. We call the MA process of order $q$ (also denoted $MA_q$) the process defined by Eq. 6, where $\theta_1, \dots, \theta_q$ are the parameters of the model. In the same way as with the AR process, we calculate the MA process according to Eq. 6, using $\epsilon_i$ and $\theta_i$. It is worth noticing that $p$ and $q$ can have different values. After the calculation of the MA process, we can finally deduce the adjusted value of $X_{n+1}$. The calculated $T_{fresh}$ is then appended to the requested data and sent back to the consumer. As a consequence, all cached contents will contain the freshness delay information.
To recapitulate, the data collection as well as the forecasting calculations are only performed at the gateways directly connected to the sensors. Each gateway handles the sensors connected to it and maintains a queue per sensor. While data collection is done for each event, the prediction process is only invoked upon the arrival of a request. In fact, data collection is triggered when a new event occurs, and each sensor event is stored in its specific queue. The prediction is only performed when a request arrives at the gateway, using the queue corresponding to the requested sensor. The estimated lifetime, called $T_{fresh}$, is then appended to the data packet. Authors in [27] showed that the complexity of the forecasting model using ARMA is $O(n)$, where $n$ is the number of collected data points, usually set to $n \le 30$. As such, the cumulative prediction complexity at each gateway is $O(n)$ times the number of received Interests.
1: Input: A new data item to cache
2: Output: A data item to evict from the cache
3: Data:
4:   Data to cache
5:   Cache = (c_1, c_2, ..., c_n) s.t. n <= cache_size
6:   c_i the position of the data item to be evicted, s.t. 1 <= i <= n
7: Begin
8: i = 1
9: Found = False
10: while NOT Found AND i <= n do
11:   if c_i.name = Data.name then
12:     Found = True
13:   else
14:     i = i + 1
15:   end if
16: end while
17: if Found then
18:   c_i.version = Data.version
19:   c_i.cache_time = current_time
20:   c_i.T_fresh = Data.T_fresh
21: else
22:   if n != cache_size then
23:     c_{n+1} = Data
24:   else
25:     c_evict = argmin over c_i in C_{1..n} of (c_i.T_fresh + c_i.cache_time)
26:     Evict(c_evict)
27:   end if
28: end if
Algorithm 3: Least Fresh First
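A compact Python sketch of Algorithm 3 (field and function names are illustrative, not the authors' code): each cached item records the time it was cached and its predicted $T_{fresh}$, and on overflow the entry whose validity expires earliest, i.e. with the least $cache\_time + T_{fresh}$, is evicted.

```python
def lff_insert(cache, item, capacity, now):
    """Insert `item` into `cache` (a list of dicts), evicting least-fresh-first."""
    for c in cache:
        if c["name"] == item["name"]:       # already cached: refresh the entry
            c.update(item, cache_time=now)  # new version, time and T_fresh
            return
    if len(cache) >= capacity:              # cache full: evict the entry whose
        stale = min(cache, key=lambda c: c["cache_time"] + c["t_fresh"])
        cache.remove(stale)                 # validity expires earliest
    cache.append(dict(item, cache_time=now))
```

Unlike LRU or LFU, the eviction decision here depends only on the predicted expiry instant, never on request frequency.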
the consumer-cache caching strategy. Consumer1 is interested in content /Home/room2/pre. Since there is no entry in the cache nodes that can satisfy this request, the data is retrieved from the producer. When the response reaches node n3, according to the adopted caching strategy, a copy of the content must be stored in the cache. However, the cache is full, so one item must be evicted to allow the caching of the new one. The candidate to be evicted is the one with the least value of $T_{fresh} + cache\_time$, which measures the time from which the content is considered invalid. In our scenario, the content /Home/room1/hum is deleted (red line) and the new item is pushed into the stack (green line). The CS structure maintained by node n1 is not full, so the new item is directly pushed into the stack. We remark that the cache at n3 filled up faster than the cache at n1, because n3 stored the content requested by both Consumer1 and Consumer2, while n1 only caches the content requested by Consumer1.
[Figure: Example scenario under the consumer-cache strategy. Sensor1 (producing \Home\room1\hum) and Sensor2 (producing \Home\room2\pre) are attached through gateways; Consumer1 and Consumer2 send requests through nodes n1–n6. The Content Stores (CS) of the two caching nodes are shown below.]

CS:
Id                    Data   cache_time   T_fresh
\Home\room2\pre       1      10:55am      2h
\Home\room1\Tmp       23°    10:30am      30m
\Home\room2\sound     620    09:36am      1h30
\Home\room1\hum       50     08:52am      26min

CS (full):
Id                    Data   cache_time   T_fresh
\Home\room2\pre       1      10:55am      2h
\Home\room1\Tmp       23°    10:30am      30m
\Home\room2\sound     620    09:36am      1h30
\Home\kitchen\flame   0      09:11am      15min
\Home\room1\hum       50     08:52am      26min
\Home\kitchen\gas     0      08:05am      3h
4. Performance evaluation
In the simulation scenario, we need to fix two distribution laws: the distribution governing the generation of Interest requests at each consumer, and the distribution governing the choice of the content to be requested among all available contents. For the former, we assume that each consumer requests contents following a Poisson process with parameter $\lambda = 1$, that is, 1 request per second on average per consumer. The latter distribution concerns the selection of the content among the available contents (the content catalog). In IoT, contents have close probabilities of being requested; therefore, Interest packets are assumed to be uniformly distributed, as in [28, 29].
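The workload above can be sketched as follows (a minimal illustration, not the ccnSim setup; the function name and seed are assumptions): each consumer issues Interests as a Poisson process, i.e. with exponential inter-arrival times, and picks the requested content uniformly from the catalog.

```python
import random

def generate_requests(catalog_size, horizon, lam=1.0, seed=42):
    """Timestamps and content ids for one consumer over [0, horizon] seconds."""
    rng = random.Random(seed)
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(lam)        # Poisson process: exp. inter-arrivals
        if t > horizon:
            return requests
        requests.append((t, rng.randrange(catalog_size)))  # uniform content pick
```

With $\lambda = 1$ and a 50-second horizon, this yields roughly 50 requests spread uniformly over a catalog of 4000 contents.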
Authors in [30] showed that in existing studies the ratio of the cache size $C$ to the catalog size $F$ satisfies $C/F \in [10^{-5}, 10^{-1}]$. To remain faithful to this constraint, in our simulation we set $C/F = 10^{-3}$: we consider a cache size of $C = 4$ chunks and a catalog size of $F = 4000$ files. In our simulation, we do not consider file fragmentation and we assume that each file is represented as a single chunk.
Many topologies can be used to evaluate ICN aspects. We choose the
Transit-Stub (TS) topology [31], which can model an IoT topology. Our topology is composed of 260 nodes distributed over 2 transit domains with on average 10 transit nodes, each connected to 2 stub domains with on average 6 stub nodes. The 4000 sensors are connected to 40 gateways. We consider 25 consumers already connected at the beginning of the simulation. The producers and their consumers are distributed in such a way that they do not belong to the same transit domain. However, a gateway can be connected to both the consumer and the producer.
As we have already mentioned, our simulations were carried out with real IoT data extracted from ADREAM. We chose periodic sensors with different periods $T$ varying from 1s to 1h, using smart meters such as temperature, humidity and luminescence sensors. Concerning the OnOff sensors, we used devices having different variances; for example, a presence detection sensor placed in a corridor does not have the same update frequency as one placed in a bedroom. We report that the majority of the considered sensors have a sensing variance between 7s and 13s.
We consider the Hop Reduction Ratio, the Server hit Reduction Ratio
and the Response Latency metrics. Then, we propose the Validity metric to
examine the freshness of requested data.
The Hop Reduction Ratio $\alpha$ measures the reduction of the number of hops traversed to satisfy a request compared to the number of hops required to retrieve the content from the server.
$$\alpha = 1 - \frac{1}{N}\sum_{i=1}^{N} \frac{1}{R}\sum_{r=1}^{R} \frac{h_{ir}}{H_{ir}} \qquad (8)$$

Similarly, the Server hit Reduction Ratio $\beta$ measures the fraction of requests satisfied by cache nodes instead of the server:

$$\beta = 1 - \frac{\sum_{i=1}^{N} serverhit_i}{\sum_{i=1}^{N} totalReq_i} \qquad (9)$$
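Eqs. 8 and 9 translate directly into code; the sketch below is illustrative (names are assumptions), with `h[i][r]` the hops needed to satisfy request `r` of consumer `i` from a cache and `H[i][r]` the hops to the producer:

```python
def hop_reduction(h, H):
    """Eq. 8: average over consumers of the mean per-request hop ratio."""
    N = len(h)
    return 1 - sum(
        sum(h_ir / H_ir for h_ir, H_ir in zip(hi, Hi)) / len(hi)
        for hi, Hi in zip(h, H)
    ) / N

def server_hit_reduction(server_hits, total_reqs):
    """Eq. 9: fraction of requests NOT served by the producer."""
    return 1 - sum(server_hits) / sum(total_reqs)
```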
content, including both valid and invalid ones. In Eq. 11, $valid_i$ and $invalid_i$ respectively denote the number of valid and invalid contents received by consumer $i$ and satisfied by a cache node.

$$Validity(\%) = \frac{\sum_{i=1}^{N} valid_i \times 100}{\sum_{i=1}^{N} (valid_i + invalid_i)} \qquad (11)$$
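The validity metric of Eq. 11 can be sketched as a one-liner over per-consumer counts (an illustrative helper, names are assumptions):

```python
def validity_pct(valid, invalid):
    """Eq. 11: share of cache-served contents that were still fresh, in %."""
    return 100.0 * sum(valid) / (sum(valid) + sum(invalid))
```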
[Figures 2 and 3: Server hit reduction ratio and hop reduction ratio of the LFF, RR, LRU, LFU and FIFO replacement policies under the LCE, LCD, ProbCache, Btw, Edge and Consumer-cache strategies.]
On the other hand, edge-caching and consumer-cache have almost the same number of evictions and yield good results. The minor difference between these two strategies stems from the fact that the consumer-cache strategy brings contents closer to consumers. Under the consumer-cache strategy we report from 0.84 to 0.92 of server hit reduction. The hop reduction ratio is about 0.76 to 0.89, implying that requests cross as few as 11% of the hops on the path towards the producer. Finally, with our strategy, the response latency varies from 93ms to 121ms. LCD, after a certain number of requests, tends to LCE and all path nodes become caches; for this reason, LCD results are not as bad as LCE. Concerning ProbCache and Btw, cache nodes are selected in the middle of the request path and, in the case of ProbCache, probably closer to consumers. Simulation results of these two strategies are intermediate compared to the other caching strategies. Results in [10] showed that edge nodes are the best placement for cache nodes, and our findings confirm this conclusion: edge-caching reports good results. Consumer-cache has the best simulation results because requests are, in most cases, satisfied by the first-hop node.
[Figures 4 and 5: Response latency (ms) and number of evictions of the replacement policies under the LCE, LCD, ProbCache, Btw, Edge and Consumer-cache strategies.]
Now, we evaluate the impact of the different cache replacement policies. Figures 2, 3 and 4 show that the LFF and RR policies outperform the LRU, LFU and FIFO policies. Recall that in an IoT environment, requests are uniformly distributed and all sensors have close probabilities of being solicited; in other words, contents are randomly requested. This fact explains why the RR policy outperforms LRU and LFU. The FIFO policy aims to keep each content as long as possible in the cache node regardless of the frequency with which each content is requested; moreover, the evicted item is not uniformly selected. This policy may be suitable with a closed queue-based request distribution. In our scenario, FIFO presents the worst results.
Concerning our proposed cache replacement policy LFF, the selection of the content to be evicted is not related to the incoming requests but to the data freshness. For this reason, our policy does not contradict a uniform request distribution, which explains why the RR and LFF results are very close to each other. The minor difference between these two policies is due to the fact that the RR policy does not follow any logic: it can delete a content that has just been stored or, on the contrary, keep a content in the cache for a long time. LFF is more coherent with respect to the content lifetimes.
Figure 2 reports from 0.65 to 0.93 of server hit reduction under LFF and from 0.56 to 0.92 with the RR policy. Under the LRU policy, from 43% to 89% of requests are satisfied by cache nodes. This figure portrays between 0.38 and 0.87 of server hit reduction using LFU. Finally, with the FIFO policy,
[Figure 6: Validity (%) of the LFF, RR, LRU, LFU and FIFO replacement policies under the LCE, LCD, ProbCache, Btw, Edge and Consumer-cache strategies.]
results range from 0.32 to 0.84. Figure 3 shows the same trend: the LFF policy outperforms the other replacement policies with 0.75 to 0.89 of hop reduction ratio. RR performs closely, with about 0.67 to 0.86. This ratio is about 0.58 to 0.83 under LRU and between 0.53 and 0.80 under LFU. With the FIFO policy, requests traverse from 24% to 56% of the path towards the producer. The response latency, depicted in Figure 4, is the lowest with the LFF policy, from 83ms to 115ms. It is about 93ms to 135ms under the RR policy. With LRU and LFU, it varies from 108ms to 170ms and from 113ms to 185ms respectively. The FIFO policy reports the longest response latency, with 121ms to 221ms.
The proposed LFF cache replacement policy maximizes the content validity percentage to meet the IoT freshness requirement. It strives to predict the exact update delays in order to eliminate copies that are supposed to be invalid. Figure 6 depicts the percentage of fresh content with the different cache replacement policies and caching strategies.
LRU, LFU, FIFO and RR policies do not consider the data freshness in
the eviction process. That means an item may remain stored in the cache for a long time even if it is no longer valid. Figure 6 compares the different cache replacement policies in terms of data freshness. Since our policy is the only one among the compared policies that considers the data freshness requirement, we can intuitively expect it to achieve better results in terms of data validity.
As shown in Figure 6, LRU, LFU and FIFO have almost the same percentage of content validity, from 52% with consumer-cache to 45% with LCE. In fact, the LRU and LFU policies usually keep the most solicited contents in the cache for a long time until they are evicted. The same holds for FIFO, which keeps all contents in the cache for as long as possible. The RR policy has slightly better results than LRU, LFU and FIFO, from 61% with consumer-cache to 52% with LCE; this policy is random and does not follow any law to manage the lifetime of a content in a cache. Concerning our proposed LFF cache replacement policy, it calculates the lifetime during which the content is supposed to be valid. Then, at eviction time, if all the contents are valid, it selects the one with the least remaining lifetime, and if several contents are already invalid, it selects the one that has been invalid the longest. The LFF policy, combined with any caching strategy, significantly increases the data validity percentage: Figure 6 shows that this percentage can reach 96% with consumer-cache and 81% under LCE.
5. Conclusion
6. Acknowledgements
The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group no. RGP-1436-031.
References
[1] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, R. L. Braynard, Networking named content, in: CoNEXT '09, ACM, 2009, pp. 1–12.
[2] Y. Zhang et al., ICN based architecture for IoT: Requirements and challenges (2013).
[13] S. Arshad, A. M. Awais, M. H. Rehmani, Information-centric networking based caching and naming schemes for internet of things: A survey and future research directions, IEEE Communications Surveys and Tutorials.
[14] I. U. Din, S. Hassan, M. K. Khan, M. Guizani, O. Ghazali, A. Habbal, Caching in information-centric networking: Strategies, challenges, and future research directions, IEEE Communications Surveys and Tutorials.
[15] W. K. Chai, D. He, I. Psaras, G. Pavlou, Cache ”less for more” in
information-centric networks, in: Networking, IFIP’12, Springer-Verlag,
2012, pp. 27–40.
[16] I. Psaras, W. K. Chai, G. Pavlou, Probabilistic in-network caching for
information-centric networks, in: ICN’12, ACM, 2012, pp. 55–60.
[17] V. Sourlas, P. Flegkas, L. Tassiulas, A novel cache aware routing scheme
for information-centric networks, Computer Networks 59 (2014) 44 – 61.
[18] B. Panigrahi, S. Shailendra, H. K. Rath, A. Simha, Universal caching
model and markov-based cache analysis for information centric net-
works, in: ANTS’14, 2014, pp. 1–6.
[19] F. M. Al-Turjman, A. E. Al-Fagih, H. S. Hassanein, A value-based cache
replacement approach for information-centric networks, in: LCN’13,
2013, pp. 874–881.
[20] R. Liu, W. Wu, H. Zhu, D. Yang, M2M-oriented qos categorization in
cellular network, in: WiCOM’11, 2011, pp. 1–5.
[21] P. E. Caines, Relationship between Box-Jenkins-Åström control and Kalman linear regulator, Proc. Institution of Electrical Engineers 119 (5) (1972) 615–620.
[22] LAAS-CNRS, Adream (2013). URL http://www.laas.fr/1-32329-Le-batiment-intelligent-Adream-instrumente-et-econome-en-energie.php
[23] S. Makridakis, M. Hibon, ARMA models and the Box–Jenkins methodology, Journal of Forecasting 16 (3) (1997) 147–163.
[24] G. Box, G. M. Jenkins, Time Series Analysis: Forecasting and Control,
1st Edition, Holden-Day Inc., San Francisco, 1970.
[25] P. J. Brockwell, R. A. Davis, Time Series: Theory and Methods,
Springer New York, 1991.