
Physical Communication 44 (2021) 101238


Full length article

Replacement based content popularity and cache gain for 6G Content-Centric network

Yancheng Ji a,∗, Xiao Zhang a, Wenfei Liu b, Guoan Zhang a

a School of Information Science and Technology, Nantong University, Nantong 226000, China
b School of Foreign Languages, Hubei Polytechnic University, Huangshi 435000, China

∗ Corresponding author. E-mail addresses: jiyancheng@ntu.edu.cn (Y. Ji), 17110015@yjs.ntu.edu.cn (X. Zhang), liuwenfei@hbpu.edu.cn (W. Liu), gzhang@ntu.edu.cn (G. Zhang).

https://doi.org/10.1016/j.phycom.2020.101238

Article info

Article history: Received 20 May 2020; Received in revised form 23 September 2020; Accepted 11 November 2020; Available online 20 November 2020.

Keywords: Content-Centric network; Content popularity; Cache gain; Content replacement

Abstract

In a Content-Centric Network (CCN), router nodes use their cache capacity to store contents so that requests from subsequent consumers can be satisfied locally, which reduces network overhead. Because the cache capacity is limited, how the cached contents are updated becomes a key factor affecting caching performance for the future 6G network. In this paper, a replacement strategy based on content popularity and cache gain (PGR) is proposed for 6G-CCN. In the proposed scheme, the popularity of a content in the current cycle is determined by the time interval and frequency of consumers' requests, and the distance between the router node and the consumer determines the gain of caching. By jointly considering the dynamic popularity of content and the gain of caching, a realistic content value function is obtained. When the cache space reaches its upper bound, the router node replaces existing content with a lower value by newly arrived content with a higher value. Simulation results show that, compared with the conventional least frequently used (LFU) and least recently used (LRU) strategies, the proposed replacement strategy effectively improves the space utilization and cache hit rate of router nodes, and reduces the average number of hops needed for consumers to acquire content in 6G-CCN.
© 2020 Elsevier B.V. All rights reserved.

1. Introduction

The expansion of network scale and the rise of new network technologies have not only brought about a surge in traffic, but have also been accompanied by problems such as network scalability, mobility, and security [1]. At the same time, consumers care more about the quality and speed of content acquisition than about its source and transmission path. The traditional solution is to build an overlay network at the application layer, such as a content distribution network (CDN) [2], a peer-to-peer network (P2P) [3] or a vehicular network [4]. This alleviates the problem of content distribution and sharing to a certain extent, but it is not well adapted to the variety of network applications. The Content-Centric Network (CCN) [5] takes contents as the center of the network and uses the name of a content, rather than an IP address, as the identifier for routing and forwarding; it has become a typical representative of the new network architectures. In CCN, each router node (RN) has a caching capability. By caching content at an RN close to the consumer, subsequent consumers' requests can be satisfied locally, the content transmission rate is improved, and network overhead is reduced [6]. Due to the limitation of cache space, it is impossible to cache contents without bound. Therefore, when the cache space usage reaches its upper limit, or when the cached content is no longer popular, the relatively unimportant contents need to be deleted or replaced to leave space for more important or more popular content. How to delete or update the cached content is the problem addressed by the cache replacement strategy, and it is a focus and hot spot of research on improving CCN caching performance.

Least Recently Used (LRU) [7] and Least Frequently Used (LFU) [8] are the traditional replacement strategies in CCN. LRU takes the interval between the current consumer request for a cached content and the previous one as the criterion for choosing the replacement victim. If a content has been requested recently, LRU assumes that it is likely to be requested again in the near future, so the newly arrived content replaces the content that has not been requested for the longest time. LRU is easier to implement than other replacement strategies, but it does not take the actual popularity of content into account: a highly popular cached content may be replaced by a newly arrived content even if the latter's popularity is low. LFU regards the frequency of consumer requests for a content as its popularity. When the cache space of the RN reaches its upper limit and new content arrives, the cached content with the lowest request frequency is replaced. LFU, however, has the defect that cached content which was requested frequently in the past still occupies cache space even when it is no longer popular, resulting in low cache space utilization.
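As a concrete illustration of these two baselines (a minimal sketch with hypothetical class names, not code from [7] or [8]), the following Python snippet shows the eviction rule of each; note that the LFU variant keeps old request counts after eviction, which is exactly the "cache pollution" behaviour criticized below.

from collections import OrderedDict, Counter

class LRUCache:
    """Evict the content that has not been requested for the longest time."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def request(self, name, data=None):
        if name in self.store:
            self.store.move_to_end(name)          # mark as most recently used
            return self.store[name]
        if data is not None:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)    # evict least recently used
            self.store[name] = data
        return data

class LFUCache:
    """Evict the content with the smallest request count."""
    def __init__(self, capacity):
        self.capacity, self.store, self.freq = capacity, {}, Counter()

    def request(self, name, data=None):
        self.freq[name] += 1
        if name in self.store:
            return self.store[name]
        if data is not None:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=lambda n: self.freq[n])
                del self.store[victim]            # old counts are kept, which causes
            self.store[name] = data               # the cache pollution described below
        return data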


Similar to the first-in-first-out principle of a queue, FIFO [9] replaces the content that was cached first. The algorithm is simple, but the cache hit rate is low. SIZE [10] preferentially replaces the content with the largest number of bytes, but does not take content popularity into account, which may cause unpopular small contents to remain in the cache space for a long time and reduce the cache hit rate. LRU-SP [11] considers both the size and the popularity of the content in order to raise the cache hit rate, and it effectively reduces cache pollution. Arianfar et al. [12] proposed a randomized algorithm that selects the replacement content from the request access history of the last few seconds of each SRAM entry. Although this type of cache replacement strategy is easy to implement, it lacks purpose and rationality. Wang et al. proposed an improved Hetero strategy [13] based on the Greedy Dual-Size strategy [14]. Greedy Dual-Size comprehensively considers the content size, cache cost and content age, sets a weight for each content in the cache space, and replaces the content with the smallest weight each time; however, it does not consider how many times the cached content has been accessed in the past. The Hetero strategy takes the number of hops a consumer needs to obtain a content as its cost and replaces the content with the least cost at each replacement. Although it takes the overhead of caching into account and can reduce the cost of content acquisition to a certain extent, it does not consider the popularity of the content and therefore cannot reflect the current state of the content.

In summary, LRU replaces the content that has not been requested for the longest time, on the assumption that recently accessed content has a higher probability of being accessed in the near future. However, LRU only considers the time at which a data object was last requested; when the request distribution changes, its adaptation performance decreases accordingly. By counting the request frequency of cached content over a past period, LFU assumes that the higher the frequency, the higher the use value, so when the cache space is insufficient the content with the lowest request frequency is always replaced. This algorithm suffers from cache pollution: content that was requested frequently in the past still occupies cache space even if it is no longer requested, which reduces the utilization of the cache space. Although LRU and LFU take the popularity of content into account and increase the hit rate of cached content to a certain extent, they ignore the dynamic change of popularity over time and cannot reflect the current popularity in real time; moreover, they do not consider the benefit of caching the content. Based on these observations, this paper proposes a novel replacement strategy based on content popularity and cache gain (PGR). The proposed scheme combines the time interval and the frequency of requests for a content, adopts a dynamic popularity estimation method to reduce the cache space occupied as a result of dynamic changes in content popularity, and increases the cache revenue to reduce network overhead, enabling the CCN to obtain higher caching performance. The implementation and contributions of this manuscript are summarized as follows:

• Considering the dynamic change of content popularity over time, PGR uses the interval and frequency of consumers' requests to define content popularity, updates the popularity in real time, caches content that matches consumers' preferences in time, and keeps the cached content fresh.
• The performance gain of caching a content is mainly determined by the cost of transmission and caching. PGR places contents at the RNs closer to the consumers to obtain higher cache gains and reduce network overhead.
• We compare the proposed PGR scheme with LRU and LFU in terms of the cache hit rate and the average number of hops of consumers' content acquisition. The simulation results show that the proposed PGR scheme outperforms LRU and LFU significantly and improves the overall performance of the CCN.

The remainder of this paper is organized as follows. In Section 2, we provide a brief discussion of the CCN architecture. Section 3 presents the basic idea and design of the proposed PGR scheme. Simulations and analytical results are presented in Section 4. Finally, we conclude this work in Section 5.

2. The architecture of CCN

The essential difference between CCN and TCP/IP is that CCN uses the name of a content as the identifier for information transmission, and consumers' requests drive the network communication. In CCN, there are two types of transmission packets: the Interest Packet and the Data Packet. The consumer sends an Interest Packet carrying the requested content name, and the packet is forwarded through RNs. When the Interest Packet arrives at a content provider, a Data Packet containing the requested content name, the content itself, and the signature of the content provider is sent back to the consumer along the reverse of the path taken by the Interest Packet, so as to satisfy the consumer's need. Throughout the communication, neither the Interest Packet nor the Data Packet carries any address information of a host or interface.

Content delivery in the network is mainly supported by three key data structures: the content store (CS), the pending interest table (PIT) and the forwarding information base (FIB) [15]. The CS caches the Data Packets that pass through the node so as to satisfy subsequent Interest Packets. The PIT records the Interest Packets, together with the corresponding incoming interface information, that have been forwarded but not yet satisfied. In this way, CCN aggregates identical requests and avoids sending the same Interest Packets repeatedly and wasting network resources. The FIB stores the next-hop interface information towards the content provider (source server or RN) and is used to forward Interest Packets.

As shown in Fig. 1, when an Interest Packet arrives at an RN, the node first checks whether the CS has cached the requested content. If it has, a Data Packet is sent back to the consumer and the Interest Packet is discarded. Otherwise, the node checks whether the same entry already exists in the PIT; if so, the arrival interface of the request is added to the PIT entry and the Interest Packet is discarded, achieving aggregation of the requests. Otherwise, a maximum-matching query is performed in the FIB, the Interest Packet is forwarded to the next hop according to the FIB information, and a new PIT entry is created.

Fig. 1. The forwarding of Interest Packet.

Fig. 2 shows the forwarding of a Data Packet. When an RN receives a Data Packet, if the requested content is already in its CS, the Data Packet is sent towards the consumer without caching. Otherwise, the node checks whether an entry for the requested content exists in the PIT.

If such an entry exists, the Data Packet is forwarded to the consumer(s) according to the arrival interface information (one or more interfaces) recorded for the Interest Packet, and the entry is deleted from the PIT; the cache placement policy then determines whether the content is cached [16]. If no entry matching the Data Packet is found in the PIT, the consumer may have abandoned the request or the lifetime of the Interest Packet may have expired; in either case the Data Packet is discarded directly.

Fig. 2. The forwarding of Data Packet.
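To make the CS/PIT/FIB lookup flow described above concrete, the following is a minimal, simplified sketch in Python (with hypothetical class and field names; it is an illustration of the procedure, not the ndnSIM implementation used later in Section 4).

# Minimal sketch of RN packet processing (simplified, no real networking).
# CS: name -> data, PIT: name -> set of incoming faces, FIB: name -> next-hop face.

class RouterNode:
    def __init__(self, fib):
        self.cs = {}          # content store
        self.pit = {}         # pending interest table
        self.fib = fib        # forwarding information base

    def on_interest(self, name, in_face):
        if name in self.cs:                       # CS hit: answer locally
            return ("data", in_face, self.cs[name])
        if name in self.pit:                      # aggregate duplicate request
            self.pit[name].add(in_face)
            return ("drop", None, None)
        self.pit[name] = {in_face}                # new PIT entry
        next_hop = self.fib.get(name)             # (maximum matching omitted)
        return ("forward", next_hop, None)

    def on_data(self, name, data, cache_policy):
        faces = self.pit.pop(name, None)
        if faces is None:                         # no matching PIT entry
            return ("drop", None, None)
        if cache_policy(name, data):              # placement/replacement decision
            self.cs[name] = data
        return ("data", faces, data)

# Tiny usage example
rn = RouterNode(fib={"/video/a": "face2"})
print(rn.on_interest("/video/a", "face1"))   # -> ('forward', 'face2', None)
print(rn.on_data("/video/a", b"payload", cache_policy=lambda n, d: True))
print(rn.on_interest("/video/a", "face3"))   # -> ('data', 'face3', b'payload')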
The advantages of 6G-CCN can be summarized as follows. (1) Data can be obtained from any cache rather than from a fixed channel, so there is no fixed data-channel security issue in a CCN. (2) Compared with a TCP/IP network, CCN offers higher flexibility, security and robustness without performance loss. (3) Owing to its natural traffic-regulation capability, CCN can choose the forwarding strategy according to the link condition when forwarding data, so as to balance the traffic over the whole network. One can therefore conclude that CCN simplifies the network, improves its efficiency and security, and is envisioned as a potential networking candidate for future 6G communications. Moreover, researchers began to study how to exploit the characteristics of SDN to deploy CCN as early as 2012. A number of design solutions have been proposed, and the routing and caching technologies of CCN under SDN in particular have been well developed. It is therefore believed that the proposed scheme can be further applied in SDN.

3. The basic idea and design of PGR

The interval between consumers' requests for a content reflects the change in consumers' demand for the cached content: if the interval is short, consumers request the content frequently and its popularity is high. The benefit of caching a content at an RN is determined by the cost of transmission and caching; the closer the RN is to the consumers, the lower the transmission cost and the greater the benefit of caching the content. The value of a cached content is therefore evaluated by combining content popularity and cache gain. When the space of the RN is saturated and new content arrives, the value of the new content is calculated and compared with those of the cached contents. If it is greater than the minimum value among the cached contents, the content with the minimum value is replaced; otherwise the new content is simply forwarded towards the consumer. It is worth noting that, for the replacement, the time interval between the last two requests can be obtained from the time label of the content, and the current popularity of the content can be calculated by combining it with the number of times the content has been requested in the current period. The higher the popularity, the better the content matches consumer preferences and the more likely it is to be cached. The proposed strategy takes the real-time change of content popularity into account and caches contents that match consumers' preferences at RNs closer to the consumers. While increasing the cache hit rate, it reduces the network traffic overhead and improves the caching performance of 6G-CCN.

3.1. Content popularity

The popularity of a content clearly reflects the consumers' demand for it. In a static network environment, the popularity of a content is little affected by time; it is mainly determined by the frequency (number of times) with which consumers request the content: the higher the frequency, the more popular the content. In an actual network, however, the needs of consumers are time-varying. For example, some contents are affected by current affairs or unexpected events and attract much attention from consumers during a certain period, but as time goes by consumers' attention shifts, the content no longer receives attention, and its popularity decreases. Since the popularity of a content changes dynamically with current affairs and consumers' needs, a dynamic popularity model is needed to measure the popularity of a content in each cycle.

Using the estimation algorithm proposed in [17], the popularity of a content in the current cycle is calculated from its popularity in the previous cycle and the number of times consumers requested it in the current cycle:

pk(T) = β × pk(T − 1) + (1 − β) × fk, (1)

where pk(T) represents the popularity of content k in the current cycle, pk(T − 1) is the popularity of content k in the previous cycle, fk is the number of requests for the content in the current cycle, and β is the attenuation factor, 0 < β < 1, which represents the weight of the previous cycle's popularity in the current cycle.
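As an illustration of Eq. (1), the following short Python sketch (a hypothetical helper, not part of the paper's simulator) updates the popularity of a content at the end of each cycle.

def update_popularity(p_prev, f_k, beta=0.4):
    """Eq. (1): p_k(T) = beta * p_k(T-1) + (1 - beta) * f_k.
    beta (0 < beta < 1) weights the previous cycle's popularity;
    0.4 is the attenuation factor used in the simulations (Table 3)."""
    return beta * p_prev + (1.0 - beta) * f_k

# Example: a content with popularity 5.0 in the previous cycle
# that received 12 requests in the current cycle.
p_now = update_popularity(5.0, 12)   # 0.4*5.0 + 0.6*12 = 9.2
print(p_now)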
3.2. Cache gain

Rather than forwarding consumers' requests to the origin server, as in traditional networks, in CCN consumers can obtain contents directly from the RNs. A lower caching cost therefore yields a larger benefit, which effectively reduces the network traffic overhead.

The benefit of caching a content is determined by the cost of delivering it and the cost of caching it. This paper mainly considers the number of hops of content transmission, so a field must be added to the Interest Packet and the Data Packet to record the number of hops the content traverses in the network. The cost of content transmission is affected by the distance between the RN and the consumer: the shorter the distance, the fewer the hops needed to acquire the content, the lower the transmission cost, and the higher the cache benefit. This point should therefore also be considered when selecting the content to be replaced.

If the requested content k has not been cached within a certain period, the transmission cost of forwarding the requested content to the consumer is

Tk_0(T) = fk × pnk × HS, (2)

where pnk is the cost of one hop of content transmission and HS is the number of hops from the consumer to the source server. The cost for the RN to cache content k is

Ck(T) = pnk × HR + psk, (3)

where psk is the cost of caching the content at the RN and HR is the number of hops from the caching node to the source server. When the cache is hit, the transmission cost of content k is

Tk_1(T) = Ck(T) + fk × pnk × (HS − HR) = pnk × [fk × (HS − HR) + HR] + psk. (4)

Then the benefit of caching content k at the RN during this period is

Gk(T) = Tk_0(T) − Tk_1(T) = (fk − 1) × pnk × HR − psk. (5)

From (5), it can be seen that the benefit of caching a content is affected by the content popularity and by the distance from the node to the consumer: the higher the content popularity and the closer the node is to the consumer, the higher the cache gain.
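The cache gain of Eqs. (2)–(5) can be computed directly from the hop counts and the per-hop and caching costs. A small sketch with hypothetical variable names is given below; the assertion checks that the closed form (fk − 1) × pnk × HR − psk in Eq. (5) follows from expanding Tk_0 − Tk_1.

def cache_gain(f_k, p_n, p_s, H_S, H_R):
    """Benefit of caching content k at an RN that is H_R hops from the source
    server, for f_k requests in the cycle (Eqs. (2)-(5)).
    p_n: cost of one hop of content transmission, p_s: cost of caching at the RN."""
    T_k0 = f_k * p_n * H_S                       # Eq. (2): every request served by the source server
    C_k = p_n * H_R + p_s                        # Eq. (3): cost of placing the copy at the RN
    T_k1 = C_k + f_k * p_n * (H_S - H_R)         # Eq. (4): cost when the cache is hit
    G_k = T_k0 - T_k1                            # Eq. (5)
    assert abs(G_k - ((f_k - 1) * p_n * H_R - p_s)) < 1e-9   # closed form of Eq. (5)
    return G_k

# Example: 10 requests, unit hop cost, caching cost 2, source server 8 hops away,
# caching node 5 hops from the server (i.e. 3 hops from the consumer).
print(cache_gain(f_k=10, p_n=1.0, p_s=2.0, H_S=8, H_R=5))   # (10-1)*1*5 - 2 = 43.0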

3.3. The calculation of cached content value

Based on LRU, this paper assigns a time tag to each content in an RN to record the time at which the content was last accessed. Let tnow be the current time and tbefore the time of the most recent request for the content; the interval is then

tinterval = tnow − tbefore. (6)

Based on the above analysis, combining the content popularity and the cache gain, the value of content k is obtained as

Valk(T) = (1 / e^tinterval) × pk(T) + Gk(T). (7)

From (7), it can be seen that the smaller tinterval is, the greater the probability that the content will be requested in the current period, so the popularity becomes the major factor affecting the value of the content. Conversely, the larger tinterval is, the lower the content popularity; if the popularities of the cached contents in the RN differ little at this time, the cache gain becomes the decisive basis for selecting the replacement content.

The proposed replacement strategy uses the interval and frequency of requests for a content as the key factors determining its popularity, and updates the popularity in real time to keep the contents in the RN fresh. When the difference between content popularities is small, the cache gain determines whether a cached content is replaced. This strategy not only keeps the contents that are likely to remain popular at the RNs, improving the cache hit rate, but also places contents at nodes closer to the consumer, reducing the traffic overhead in the network.
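Combining Eqs. (6) and (7), the value of a content can be evaluated as in the following sketch (again with hypothetical names; times are in seconds).

import math

def content_value(p_k, G_k, t_now, t_before):
    """Eq. (7): Val_k(T) = p_k(T) / e^{t_interval} + G_k(T),
    with t_interval = t_now - t_before (Eq. (6))."""
    t_interval = t_now - t_before
    return p_k * math.exp(-t_interval) + G_k

# A recently requested content (interval 0.5 s) versus a stale one (interval 10 s)
# with the same popularity and cache gain:
print(content_value(p_k=9.2, G_k=43.0, t_now=100.0, t_before=99.5))   # ~ 48.6
print(content_value(p_k=9.2, G_k=43.0, t_now=100.0, t_before=90.0))   # ~ 43.0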
3.4. The algorithm flow of the proposed strategy

The proposed algorithms are shown in Tables 1 and 2, for the Interest Packet and the Data Packet respectively. For the Interest Packet, when it reaches a routing node, tnow, tbefore, HS and fk are updated and pk(T) is calculated. The CS is then checked: if it holds the requested content, the Data Packet is returned; otherwise the Interest Packet is forwarded to the next-hop node. For the Data Packet, when it arrives at a caching node, the content is cached if there is enough space. Otherwise, the cache value of the arriving content is calculated; if this value is greater than the minimum value among the cached contents, the content with the minimum value is replaced, otherwise the Data Packet is forwarded directly to the next-hop node.

Moreover, in these two algorithms every cached content in the cache space has its own cache value. When the cache space reaches its upper limit and new content arrives, the values of the contents are compared and the content with the smallest cache value is replaced to make room for the new content. The pseudocode of the proposed PGR cache replacement algorithm is as follows.

Table 1
The processing of Interest Packet.

Algorithm 1: Set forward path
Initialize tbefore, tnow, fk, pk(T), HS
for routing nodes on the delivery path from consumer to source server do
    update tbefore, tnow
    HS ← HS + 1
    fk ← fk + 1
    calculate pk(T)
    if content k in the CS
        then send the content back to the consumer and discard the Interest Packet
    else
        forward the Interest Packet to the next hop towards the source server
    end if
end for

Table 2
The processing of Data Packet.

Algorithm 2: Select the replaced content
Initialize HR
for routing nodes on the delivery path from source server to the consumer do
    if there is enough space
        then cache the content k in the CS
    else
        get tbefore, tnow, fk, pk(T), HS, HR
        calculate Valk(T)
        if Valk(T) > Valmin(T)
            then evict the content with Valmin(T) and insert content k in the CS
        else
            forward the Data Packet to the next hop towards the consumer
        end if
    end if
end for
meet the storage requirements of the new content. The pseudo
code of the proposed strategy PGR cache replacement algorithm
is as follows: arrival rate obeys the Poisson distribution, λ = 100 req/s. For the
convenience of research, the cache space size of each RN is the
4. Performance analysis and simulation results same, the default is 10% of the total content, and the variation
range is 2% ∼ 20%. The main simulation parameters are shown
4.1. Simulation platform and parameter settings in Table 3. Taking LCE [20] as the cache placement strategy,
comparing the proposed PGR scheme with LRU and LFU.
The simulation environment ndnSIM [18] is a simulation mod-
ule based on NS-3. This module can realize the simulation of the 4.2. Simulation performance index
basic functions of NDN, and can modify the code to replace the
cache and routing strategy. The results are imported into MATLAB This paper mainly uses two indicators to evaluate the impact
for curve drawing and performance analysis. of different parameters on system performance:
The randomly generated network topology is used as the
simulation topology. The total amount of contents in the network (1) Cache hit ratio (CHR): Refer to the probability that the
is 10,000, and each content has the same size. Its popularity consumer request is satisfied by cache nodes instead of
follows the Zipf–Mandelbrot [19] distribution, that the default the source server. It is a typical parameter that reflects
parameter α = 0.7, changes from 0.3 to 1.7. The content request the performance of the cache strategy. The more times

Fig. 3. Change in cache hit rate with cache size (α = 0.7).

Fig. 4. Change in average hop count with cache size (α = 0.7).

(1) Cache hit ratio (CHR): the probability that a consumer's request is satisfied by a cache node instead of the source server. It is a typical indicator of the performance of a caching strategy. The more often consumers' requests are hit at the RNs, the higher the cache hit rate, which reduces the load on the source server and improves system performance. Therefore,

CHR = n / N, (8)

where n is the number of requests satisfied by cache nodes and N is the total number of contents requested by consumers.

(2) Average hop count: the number of hops a consumer needs to obtain a content from an RN or from the source server. It reflects the distance between the RN and the consumer. The closer the RN is to the consumers, the smaller the number of hops needed to obtain content, which improves the user experience and reduces the network traffic overhead.

4.3. Result analysis

(1) The impact of cache size

Figs. 3 and 4 show how the cache hit rate and the average hop count change with the cache size of the RN when the Zipf–Mandelbrot distribution parameter α = 0.7.

Fig. 5. Change of cache hit rate with parameter α (cache size is 10% of the total content).

It can be seen from Fig. 3 that the cache hit rate of the three replacement strategies increases as the cache size increases, and that the cache hit rate of the proposed PGR strategy is always higher than those of LRU and LFU. As the cache size of an RN increases, more contents can be cached, so fewer consumer requests have to be satisfied by the source server and the hit rate of the cached contents increases. When the cache space reaches its upper limit, LRU replaces the content that has not been requested for the longest time with the newly arrived content; it ignores the popularity of the newly arrived content, so a content with low popularity may replace one with high popularity. LFU takes the number of times a content has been requested as its popularity, and content with high popularity replaces content with low popularity, but it ignores the change of popularity over time: early popular content eventually "expires", yet because the recorded popularity of new content is always lower, the expired content remains at the RN and wastes cache space. The proposed PGR strategy controls the content popularity through the interval between requests. The smaller the interval between two requests, the more frequently consumers request the content and the higher its popularity, so highly popular content is always kept at the RN. This guarantees that consumers' requests can be satisfied quickly, and the hit rate of the cached content increases accordingly.

Fig. 4 shows that the average hop count decreases as the cache size increases. The larger the cache space allocated to the RNs, the more contents the consumers obtain from the nodes, and the fewer hops are needed to acquire a content. The average hop count of the proposed PGR strategy is significantly smaller than those of LFU and LRU, because the strategy takes the cache gain into account. When the popularities of the cached contents in an RN differ little, the value of a cached content is mainly determined by its cache gain: the closer to the consumer, the higher the gain, which avoids replacing the contents near the consumer and reduces the number of hops for content acquisition.

(2) The impact of the Zipf–Mandelbrot distribution parameter α

Figs. 5 and 6 show the changes of the cache hit rate and the average hop count with the parameter α when the cache size is set to 10% of the total content. As can be seen from Fig. 5, when α is small the popularity is spread widely over the contents, the popularities of individual contents differ little, and consumers' preferences are not obvious, so the contents are scattered and cached at RNs throughout the network. LRU and LFU are based only on the request interval or the request frequency; they are not sensitive to changes over time and cannot accurately quantify content popularity, so the RNs cache a large amount of "outdated" content and the cache hit rate is low. As α increases, the distribution of content popularity becomes concentrated and consumers' preferences focus on a small number of highly popular contents; only a small amount of less popular content has to be provided by the source server, and the cache hit rate increases. The cache hit rates of the three replacement strategies all increase with α, but the performance of the proposed PGR strategy is always better than that of the other two, and the larger α is, the more obvious the advantage. This is because PGR takes the time-varying nature of popularity into account and uses the request interval to keep the cached contents fresh. It can also be seen from Fig. 6 that, as α increases and the popularity distribution becomes concentrated, the cache space of the routing nodes is used effectively and the average hop count decreases as the cache hit rate increases.

5. Conclusions

Since cached content needs to be deleted and replaced when the cache capacity is full in CCN, this paper proposed a replacement strategy based on content popularity and cache gain to address this issue. The proposed PGR strategy captures the change of content popularity over time through the request interval, and uses the cache gain to control the distance between contents and consumers. Simulation results showed that the performance of the proposed PGR strategy, in terms of the cache hit rate and the average hop count, is better than that of the LRU and LFU schemes.

Fig. 6. Change of average hop count with parameter α (cache size is 10% of the total content).

Moreover, although the replacement strategy for CCN has attracted much attention from researchers, existing research on content popularity treats each content as a whole, ignoring the differences between content chunks; for example, the climax part of a song is more popular than the rest. In our future work, we will exploit the popularity differences between content chunks to further improve the dynamic characterization of content popularity.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant 61971245.

References

[1] J. Pan, R. Jain, S. Paul, et al., MILSA: A new evolutionary architecture for scalability, mobility, and multihoming in the future internet, IEEE J. Sel. Areas Commun. 28 (8) (2010) 1344–1362.
[2] J. Choi, J. Han, E. Cho, et al., A survey on content-oriented networking for efficient content delivery, IEEE Commun. Mag. 49 (3) (2011) 121–127.
[3] A. Morse, Peer-to-peer crowdfunding: Information and the potential for disruption in consumer lending, Soc. Sci. Electron. Publ. 7 (1) (2015) 463–482.
[4] W. Duan, et al., Emerging technologies for 5G-IoV networks: Applications, trends and opportunities, IEEE Netw. 34 (5) (2020) 283–289.
[5] V. Jacobson, D.K. Smetters, J.D. Thornton, et al., Networking named content, in: Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT 09), Rome, Italy, 2009, pp. 1–12.
[6] J. Zhang, W. Xie, F. Yang, et al., Mobile edge computing and application in traffic offloading, Telecommun. Sci. 32 (7) (2016) 132–139.
[7] M. Weng, Y. Shang, Y. Tian, The design and implementation of LRU-based web cache, in: Communications and Networking in China (CHINACOM), 2013 8th International ICST Conference on, IEEE Computer Society, 2013.
[8] L.B. Sokolinsky, LFU-K: An effective buffer management replacement algorithm, in: International Conference on Database Systems for Advanced Applications, Berlin Heidelberg, Germany, 2004, pp. 670–681.
[9] G. Carofiglio, M. Gallo, L. Muscariello, Bandwidth and storage sharing performance in information centric networking, in: Proceedings of the 2011 ACM SIGCOMM Conference, ACM, New York, 2011, pp. 1–6.
[10] T. Ma, W. Tian, B. Wang, et al., Weather data sharing system: an agent-based distributed data management, IET Softw. 5 (1) (2011) 21–31.
[11] C. Kai, Y. Kanbayashi, LRU-SP: A size-adjusted and popularity-aware LRU replacement algorithm for web caching, IEEE Computer Society, 2000, pp. 48–53.
[12] S. Arianfar, P. Nikander, J. Ott, et al., On content-centric router design and implications, in: Re-Architecting the Internet Workshop, ACM, 2010, pp. 1–6.
[13] J. Wang, B. Bensaou, Improving content-centric networks performance with progressive diversity-load driven caching, in: Proceedings of the 2012 1st IEEE International Conference on Communications in China, IEEE, Piscataway, 2012, pp. 85–90.
[14] P. Cao, S. Irani, Cost-aware WWW proxy caching algorithms, in: Proceedings of the USENIX Symposium on Internet Technologies and Systems, USENIX Association, Berkeley, 1997, pp. 193–206.
[15] B. Ahlgren, C. Dannewitz, C. Imbrenda, et al., A survey of information-centric networking, IEEE Commun. Mag. 50 (7) (2012) 26–36.
[16] M. Zhang, H. Luo, H. Zhang, A survey of caching mechanisms in information-centric networking, IEEE Commun. Surv. Tutor. 17 (3) (2015) 1473–1499.
[17] Y. Zhang, J. Zhao, G. Cao, Roadcast: A popularity aware content sharing scheme in VANETs, ACM SIGMOBILE Mobile Comput. Commun. Rev. 13 (4) (2010) 1–14.
[18] S. Mastorakis, A. Afanasyev, I. Moiseenko, et al., ndnSIM 2.0: A new version of the NDN simulator for NS-3, Technical Report NDN-0028, 2015.
[19] Q. Jiang, C. Tan, C. Phang, Understanding Chinese online users and their visits to websites: Application of Zipf's law, Int. J. Inf. Manage. 33 (5) (2013) 752–763.
[20] N. Laoutaris, S. Syntila, I. Stavrakakis, Meta algorithms for hierarchical web caches, in: Proceedings of IEEE International Conference on Performance, Computing, and Communications, IEEE Press, Washington D.C., USA, 2004, pp. 445–452.


Yancheng Ji received the doctor's degree in communication and information systems from Xidian University in 2011. He is currently an Associate Professor at the School of Electronics and Information, Nantong University. His research interests include wireless communication networks, cooperative communication, and non-orthogonal multiple-access technology.

Wenfei Liu received the master's degree in foreign linguistics and applied linguistics from Hubei University of Technology in 2008. He is currently a lecturer in the School of Foreign Languages, Hubei Polytechnic University. His research interests focus on translation theory and practice.

Xiao Zhang graduated from Nantong University in 2017 and is working towards her master's degree in the School of Electronics and Information, Nantong University. Her research interests include GPU parallel processing and Content-Centric networks.

Guoan Zhang received the doctor's degree in communication and information systems from Southeast University in 2002. He is currently a Professor at the School of Electronics and Information, Nantong University. His research interests include wireless communication networks and software radio communication algorithms.
