Physical Communication
Article history: Received 20 May 2020; Received in revised form 23 September 2020; Accepted 11 November 2020; Available online 20 November 2020.

Keywords: Content-Centric network; Content popularity; Cache gain; Content replacement

Abstract

In a Content-Centric Network (CCN), router nodes cache contents to satisfy the requests of subsequent consumers and reduce network overhead. Given the limited cache capacity, the update of cached contents becomes a key element affecting caching performance for the future 6G network. In this paper, a replacement strategy based on content popularity and cache gain (PGR) is proposed for 6G-CCN. In the proposed scheme, the popularity of contents in the current cycle is determined according to the time interval and frequency of consumers' requests. In addition, the distance between the router node and the consumer affects the gain of caching. Further considering the dynamic popularity of content and the gain of caching, a realistic content value function is provided. When the cache space reaches its upper bound, the existing content with a lower value at a router node is replaced by newly arrived content with a higher value. Simulation results show that, compared with the conventional least frequently used (LFU) and least recently used (LRU) strategies, the proposed replacement strategy can effectively improve the space utilization and cache hit rate of router nodes, as well as reduce the average number of hops of consumers' content acquisition in 6G-CCN.

© 2020 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.phycom.2020.101238
Y. Ji, X. Zhang, W. Liu et al. Physical Communication 44 (2021) 101238
This paper designs a time tag for each content in RNs based on the LRU to record the time when the content was last accessed. Assuming that t_now is the current time and t_before is the most recent content request time, the interval is:

    t_interval = t_now − t_before.    (6)

Based on the above analysis, combining the content popularity and the cache gain, the value of content k can be obtained from:

    Val_k(T) = (1 / e^(t_interval)) · p_k(T) + G_k(T).    (7)

From (7), it can be seen that the smaller t_interval is, the greater the probability of the content being requested in the current period, so the popularity of the content becomes the major factor affecting its value. On the contrary, the bigger t_interval is, the lower the content popularity.

Table 1
The processing of Interest Packet.

Algorithm 1: Set forward path
    Initialize t_before, t_now, f_k, p_k(T), H_S
    for routing nodes on the delivery path from consumer to Source server do
        update t_before, t_now
        H_S ← H_S + 1
        f_k ← f_k + 1
        calculate p_k(T)
        if content k is in the CS then
            send the content back to the consumer and discard the Interest Packet
        else
            forward the Interest Packet to the next hop towards the Source server
        end if
    end for
If the popularity of the cached content within the RN is relatively small at this time, the cache gain becomes an important basis for selecting the replacement content.

The proposed replacement strategy uses the interval and frequency of the requested content as important factors to determine the popularity of the content. The popularity of content is updated in real time to ensure that the content in the RN stays fresh. When the difference between content popularities is small, the cache gain determines whether the cached content is replaced. This strategy not only caches the content that will become more popular in the future at RNs, improving the cache hit rate, but also places the content at nodes closer to the consumer, reducing the traffic overhead in the network.

3.4. The algorithm flow of the proposed strategy

Table 2
The processing of Data Packet.

Algorithm 2: Select the replaced content
    Initialize H_R
    for routing nodes on the delivery path from Source server to the consumer do
        if there is enough space then
            cache the content k in the CS
        else
            get t_before, t_now, f_k, p_k(T), H_S, H_R
            calculate Val_k(T)
            if Val_k(T) > Val_min(T) then
                evict the content with Val_min(T) and insert content k in the CS
            else
                forward the Data Packet to the next hop towards the consumer
            end if
        end if
    end for
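The eviction decision of Algorithm 2 can be sketched as a small cache class. This is a hypothetical Python illustration under the paper's notation (the real strategy runs inside ndnSIM); `value` stands for the precomputed Val_k(T) of Eq. (7):

```python
class PGRCache:
    """Sketch of the PGR replacement decision (Algorithm 2):
    cache while space remains; otherwise replace the cached content
    with Val_min(T) only when the arriving content is worth more."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # content name -> cache value Val_k(T)

    def on_data(self, name, value):
        """Handle an arriving Data packet; return True if cached."""
        if name in self.store or len(self.store) < self.capacity:
            self.store[name] = value                  # enough space: cache directly
            return True
        victim = min(self.store, key=self.store.get)  # content with Val_min(T)
        if value > self.store[victim]:
            del self.store[victim]                    # evict the lowest-value content
            self.store[name] = value
            return True
        return False                                  # just forward downstream

cache = PGRCache(capacity=2)
cache.on_data("a", 0.9)
cache.on_data("b", 0.4)
cache.on_data("c", 0.6)   # cache full: "b" holds Val_min, 0.6 > 0.4, so "b" is evicted
assert set(cache.store) == {"a", "c"}
```

Note that, as in Algorithm 2, the Data packet continues towards the consumer either way; only the caching decision differs.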
The proposed algorithms are shown in Tables 1 and 2, for the Interest packet and the Data packet respectively. For the Interest packet, when it reaches a routing node, update t_now, t_before, H_S and f_k, and calculate p_k(T). Check the CS: if the requested content is present, return a Data packet; otherwise, forward the Interest packet to the next-hop node. On the other hand, for the Data packet, when it arrives at a cache node, the content is cached if there is enough space. Otherwise, the cache value of the arriving content is calculated. If this value is greater than that of the minimum-value cached content, that content is replaced; otherwise, the Data packet is directly forwarded to the next-hop node.

Moreover, in these two algorithms, each content in the cache space has its cache value. When the cache space reaches the upper limit and new content arrives, the values of the contents are compared, and the content with the smallest cache value is replaced to meet the storage requirement of the new content. The pseudo code of the proposed PGR cache replacement strategy is given in Algorithms 1 and 2.

4. Performance analysis and simulation results

4.1. Simulation platform and parameter settings

The simulation environment ndnSIM [18] is a simulation module based on NS-3. This module can simulate the basic functions of NDN, and its code can be modified to replace the cache and routing strategies. The results are imported into MATLAB for curve drawing and performance analysis.

A randomly generated network topology is used as the simulation topology. The total amount of content in the network is 10,000, and each content has the same size. Content popularity follows the Zipf–Mandelbrot [19] distribution, with the default parameter α = 0.7 varied from 0.3 to 1.7. The content request arrival rate obeys the Poisson distribution with λ = 100 req/s. For convenience of research, the cache space of each RN has the same size; the default is 10% of the total content, with a variation range of 2% ∼ 20%. The main simulation parameters are shown in Table 3. Taking LCE [20] as the cache placement strategy, the proposed PGR scheme is compared with LRU and LFU.

Table 3
Simulation parameters.

Parameter                  Default value   Variation range
Routing nodes              50              –
Links                      150             –
Bandwidth/Mbps             10              –
Contents                   10 000          –
Consumers                  18              –
Source server              1               –
Attenuation factor (β)     0.4             –
Cache size                 10%             2% ∼ 20%
Zipf–Mandelbrot (α)        0.7             0.3 ∼ 1.7
Simulation time/s          100             –
Cycle (T)/s                5               –

4.2. Simulation performance index

This paper mainly uses two indicators to evaluate the impact of different parameters on system performance:

(1) Cache hit ratio (CHR): Refers to the probability that a consumer request is satisfied by cache nodes instead of the source server. It is a typical parameter that reflects the performance of a cache strategy.
The more often a consumer's request is hit at an RN, the higher the cache hit rate, which reduces the load on the source server side and improves system performance. Therefore, we have

    CHR = n / N,    (8)

where n represents the number of requests satisfied by cache nodes and N is the total number of contents requested by consumers.

(2) Average hop count: Refers to the number of hops a consumer takes to get content from an RN or the source server. It reflects the distance between the RN and the consumer. The closer the RN is to consumers, the smaller the number of hops for consumers to obtain content. This improves the user experience and reduces network traffic overhead.

4.3. Result analysis

(1) The impact of cache size

Figs. 3 and 4 show how the cache hit rate and average hop count change with the cache size of the RN when the Zipf–Mandelbrot distribution parameter α = 0.7. It can be seen from Fig. 3 that the cache hit rate of the three replacement strategies increases with the cache size.
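The two indicators above can be computed from a request log as in the following sketch (hypothetical Python, not part of the simulation platform; each record notes whether the request was served from a cache and how many hops it travelled):

```python
def cache_hit_ratio(requests):
    """CHR = n / N per Eq. (8): requests satisfied by cache nodes
    over the total number of contents requested by consumers."""
    n = sum(1 for r in requests if r["hit"])
    return n / len(requests)

def average_hop_count(requests):
    """Mean number of hops a consumer needed to obtain content,
    whether from an RN cache or from the source server."""
    return sum(r["hops"] for r in requests) / len(requests)

log = [
    {"hit": True,  "hops": 1},   # served by a nearby RN
    {"hit": True,  "hops": 2},
    {"hit": False, "hops": 5},   # had to travel to the source server
    {"hit": False, "hops": 5},
]
assert cache_hit_ratio(log) == 0.5
assert average_hop_count(log) == 3.25
```

A higher CHR and a lower average hop count together indicate that popular content is being held close to consumers, which is exactly what the result analysis below measures.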
Fig. 5. Change of cache hit rate with parameter α (cache size is 10% of the total content).
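The role of α in the analysis can be reproduced with a small sketch of the Zipf–Mandelbrot request distribution (hypothetical Python; the flattening parameter q is set to a nominal value, since the paper does not report it):

```python
def zipf_mandelbrot(num_contents, alpha, q=1.0):
    """Request probability of the content ranked k (1-based):
    p(k) proportional to 1 / (k + q)**alpha, normalized over all contents."""
    weights = [1.0 / (k + q) ** alpha for k in range(1, num_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# A larger alpha concentrates requests on the top-ranked contents,
# which is why the cache hit rate rises with alpha in Fig. 5.
flat = zipf_mandelbrot(10_000, alpha=0.3)
steep = zipf_mandelbrot(10_000, alpha=1.7)
assert steep[0] > flat[0]                  # top content is far more popular
assert sum(steep[:100]) > sum(flat[:100])  # probability mass shifts to the head
```

This matches the behaviour discussed below: with small α the popularity is spread widely across contents, while large α concentrates consumer preference on a few highly popular items.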
The cache hit rate of the proposed PGR strategy is always higher than that of LRU and LFU. As the cache size of an RN increases, the number of contents that can be cached increases, so fewer consumer requests have to be met by the source server, and the hit rate of cached contents increases. When the cache space reaches the upper limit, LRU replaces the content that has not been requested for a long time with the newly arrived content. LRU ignores the popularity of the newly arrived content, so content with low popularity may replace content with high popularity. LFU takes the number of times a consumer requests a content as its popularity, and high-popularity content replaces low-popularity content, but it ignores popularity changing over time. Early popular content will "expire", but because the popularity of new content is always lower, it remains at the RN, wasting cache space. The proposed PGR strategy controls the content popularity with the interval between content requests. The smaller the interval between two requests, the more frequently consumers request the content and the higher the content popularity, so highly popular content is always kept at the RN. This guarantees that consumers' requests can be satisfied quickly, and the hit rate of cached content increases accordingly.

Fig. 4 shows that the average hop count decreases with the increase in cache size. The larger the cache space allocated to an RN, the more content consumers obtain from the nodes, and the number of content acquisition hops decreases. The average hop count of the proposed PGR strategy is significantly smaller than that of LFU and LRU, because the strategy takes cache gains into account. When the popularity of the cached contents in the RN differs little, the value of the cached content is mainly determined by the cache gain. The closer to the consumer, the higher the cache gain, which avoids replacing content near the consumer and reduces the number of content acquisition hops.

(2) The impact of the Zipf–Mandelbrot distribution parameter α

Figs. 5 and 6 show the changes of cache hit rate and average hop count with parameter α when the cache size is set to 10% of the total content. As can be seen from Fig. 5, when the parameter α is small, the popularity of the content is distributed widely, the popularity of each content differs little, and consumers' preferences for content are not obvious, so the content is scattered and cached at RNs across the network. LRU and LFU are based on the content request interval or content request frequency alone. They are not sensitive to changes over time and cannot accurately quantify the content popularity, so the RNs cache a large amount of "outdated" content and the cache hit rate is low. The distribution of content popularity becomes concentrated as the parameter α increases. Consumers' preferences concentrate on a small number of highly popular contents. Therefore, only a small number of less popular contents need to be provided by the source server, and the cache hit rate increases. The cache hit rates of the three replacement strategies all increase with the parameter α, but the performance of the proposed PGR strategy is always better than the other two. The bigger the parameter α, the more obvious the advantage. This is because PGR takes the characteristics of popularity changing with time into account and uses the content request interval to keep the cached content fresh. It can also be seen from Fig. 6 that as the parameter α increases, the content popularity distribution becomes concentrated, the cache space of routing nodes is used effectively, and the average hop count decreases as the cache hit rate increases.

5. Conclusions

Since cached content needs to be deleted and replaced when the cache capacity is full in CCN, this paper proposed a replacement strategy based on content popularity and cache gain to overcome this issue. The proposed PGR strategy reflects the change of content popularity over time through the content request interval, while the cache gain controls the distance between contents and consumers. Simulation results showed that the performance of the proposed PGR strategy in terms of cache hit rate and average hop count is better than the LRU and LFU schemes.
Fig. 6. Change of average hop count with parameter α (cache size is 10% of the total content).
Moreover, although the replacement strategy for CCN has attracted much attention from researchers, existing research on content popularity has treated each content as a whole, ignoring the differences between content chunks; for example, the climax of a song is more popular than its other parts. In our future work, we will take the popularity difference between content chunks into account to improve the dynamic characteristics of content popularity.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant 61971245.
Yancheng Ji received the doctorate in communication and information systems from Xidian University in 2011. He is currently an Associate Professor at the School of Electronics and Information, Nantong University. His research interests include wireless communication networks, cooperative communication, and non-orthogonal multiple-access technology.

Xiao Zhang graduated from Nantong University in 2017 and is working towards her master's degree at the School of Electronics and Information, Nantong University. Her research interests include GPU parallel processing and Content-Centric networking.

Wenfei Liu received the master's degree in foreign linguistics and applied linguistics from Hubei University of Technology in 2008. He is currently a lecturer at the School of Foreign Languages, Hubei Polytechnic University. His research interests focus on translation theory and practice.

Guoan Zhang received the doctorate in communication and information systems from Southeast University in 2002. He is currently a Professor at the School of Electronics and Information, Nantong University. His research interests include wireless communication networks and software radio communication algorithms.