Academic Dissertation which, with due permission of the KTH Royal Institute of Technology,
is submitted for public defence for the Degree of Doctor of Philosophy on Friday the 4th
February 2022, at 10:00 a.m. in Sal C, Electrum, Kistagången 16, 164 40, Kista.
ISBN 978-91-8040-114-2
TRITA-EECS-AVL-2022:1
Abstract
the first time considering the energy consumption. Most network slicing studies consider
only radio access network resources. Intuitively, energy consumption goes down if more
bandwidth resources are allocated to users when only the RAN segment of the network is
considered. However, with the end-to-end energy consumption model presented in this
thesis, we demonstrate that increasing bandwidth allocation also increases the processing
energy consumption in the cloud and fronthaul segments of the network. To deal with
this issue, we formulate a non-convex optimization problem that allocates end-to-end
resources to minimize the energy consumption of the network while guaranteeing the
slices’ QoS. We transform the problem into a second-order cone programming problem
and solve it optimally. We show that end-to-end network slicing can decrease the total
energy consumption of the network compared to radio access network slicing.
Keywords: 6G, 5G, Energy efficiency, Machine learning, Reinforcement learning,
Network architecture, Sleep modes, Mobile networks.
Sammanfattning
Acknowledgements
Any long journey is destined to end, and so is the PhD journey. During this time I had
the opportunity to meet and work with many nice people, and I received a great deal of help
from my friends, family, and colleagues. My time at the Radio Systems Lab at KTH was full
of experiences, and I found very helpful colleagues and friends there. I would like to express
my special thanks to Andres, Mats, and Ben for letting me contribute to the educational
activities as a teacher and lab assistant. I would like to thank Ki Won and Marina for
their constructive comments and discussions in the RS Lab Friday meetings. My special
thanks go to my colleagues, Andres Laya, Amirhossein Ghanbari, Amirhossein Banuazizi,
Yanpeng, Peiliang, Istiak, Haris, Abbasi, Aftab, Forough, Hossein, Mustafa, Milad, Sara,
and Morteza, for all their support, help, and the nice time we had together, and I wish a
nice journey to our new colleagues Ozan, Ziant, Yasaman, Anders, and Mahmoud. During
my PhD, I had the opportunity to collaborate with pioneering researchers in my field. I
would like to thank Amin, MohammadGalal, and Ozlem for our nice and fruitful discussions,
collaborations, and meetings.
I am sincerely grateful to my supervisors, Dr. Cicek Cavdar and Professor Jens Zander.
They gave me the opportunity to develop my personal and professional skills. We had
wonderful and long meetings to discuss novel ideas and research questions. I would like to
thank Dr. Slimane Ben Slimane for reviewing my PhD thesis, Professor Muhammad A.
Imran for accepting to be the opponent, Professor Emil Björnson for chairing the PhD defense
session, and Dr. Pål Frenger, Professor Michela Meo, Professor Rolf Stadler, and Professor
Viktoria Fodor for accepting to be on the grading committee. My PhD was supported by two
European Celtic-Next projects, AI4Green and SooGREEN. I would like to thank all of our
industry and academic partners for our insightful meetings and discussions, with special
thanks to Professor Tijani Chahed from Telecom SudParis, France, and Dr. Azeddine Gati
from Orange Lab, France.
This acknowledgment would be incomplete without thanking my family: my parents,
brother, parents-in-law, and brothers-in-law. My sincere, heartfelt, and endless thanks
go to my wife, Zeinab, for her understanding and continuous support and motivation
during this journey. Thank you for standing with me to overcome all the challenges we faced.
This journey would not have been as great without you.
Meysam Masoudi,
Stockholm, January 2022.
Contents
Contents vii
List of Figures ix
List of Tables xi
1 Introduction 1
1.1 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Challenges of Mobile Networks . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Surge in Traffic Data and Subscribers . . . . . . . . . . . . . . . 3
1.2.2 High Energy Consumption . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 High Total Cost of Ownership . . . . . . . . . . . . . . . . . . . 5
1.2.4 Smart Network Management . . . . . . . . . . . . . . . . . . . . 6
1.2.5 Carbon Footprint and Life Cycle Assessment . . . . . . . . . . . 6
1.3 Literature Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1 Green Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.2 Network Architecture Design . . . . . . . . . . . . . . . . . . . 10
1.4 Research Questions and Methodologies . . . . . . . . . . . . . . . . . . 17
1.4.1 Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.2 Research Methodologies . . . . . . . . . . . . . . . . . . . . . . 20
1.5 Thesis Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6 Thesis Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Bibliography 73
List of Figures
List of Acronyms
Acronym Definition
ASM Advanced sleep mode
BS Base station
CapEx Capital expenditure
CC Central cloud
CoMP Coordinated multipoint
CRAN Cloud radio access network
DRAN Distributed radio access network
DT Digital twin
DU Digital unit
EC Edge cloud
eCPRI Enhanced common public radio interface
eMBB Enhanced mobile broadband
HCRAN Hybrid CRAN
ICT Information and communications technology
IoT Internet of things
MEC Mobile edge computing
ML Machine learning
mMTC Massive machine-type communication
OpEx Operational expenditure
PON Passive optical network
QoS Quality of service
RAN Radio access network
RL Reinforcement learning
RDM Risk of decision making
TCO Total cost of ownership
TWDM Time and wavelength division multiplexing
UE User equipment
URLLC Ultra-reliable low-latency communication
VBS Virtualized BS
VCRAN Virtualized CRAN
Chapter 1
Introduction
The information and communication technology (ICT) industry is rapidly advancing to-
wards 5G and beyond networks with the aim of fulfilling the ever-increasing demand for
higher data rates and offering a multitude of services with diverse quality-of-service (QoS)
requirements. These diverse requirements pose a big challenge for the operation and
management of 5G and beyond systems. On the one hand, 5G architectures should be
capable of satisfying the demands of different services, which require more bandwidth and
lower latency for certain types of services. On the other hand, the surge in data traffic and
in the number of equipment yields high investment costs and higher energy consumption [4].
Therefore, there is a need to revisit the current architectures and network management
methodologies, fully exploit the capabilities of advanced signal processing techniques, and
use the massive amount of available data for proper network planning and operation. Along
with the rapid evolution of network architectures, new challenges such as scalability and
network management complexity arise. Artificial intelligence (AI) and machine learning
(ML) are promising tools for handling the complexity of network management, since the
conventional approaches can no longer efficiently solve the newly raised challenges or deal
with the complexity of 5G and beyond networks.
After investigating energy- and delay-aware resource allocation in mobile networks
in [12–25], in this thesis we deal with the issue of reducing the energy consumption of
future mobile networks and enhancing their sustainability by utilizing AI- and ML-based
techniques. We highlight the potential opportunities, challenges, and open problems of
energy saving in different segments of the network. In particular, we look into energy
saving at the base stations, green network architectures, and methods to save energy in
the network while maintaining the offered QoS. In this thesis, we investigate the open
research questions on energy saving at the base stations (BSs). In this part, we put effort
into saving energy at the BSs while trying to eliminate the resulting performance
degradation. In this study, we aim at proposing a framework to determine and manage the
risk of using energy saving features at the base stations. This is crucial because energy
saving usually comes at the cost of performance degradation; hence, it is of great
importance to monitor the performance degradation and keep it as low as possible.
For the next step, we look into the end-to-end network architecture and investigate the
open problems and research gaps in green network architecture design. We argue that the
current network architecture is not designed for a multitude of heterogeneous services and
that we need to move on to new architectures. Therefore, we investigate different aspects of
novel network architectures, ranging from the cost of migrating to them, to supporting
numerous services, to tailoring network resources for specific services. In this part of the
thesis, we mainly focus on end-to-end network design to 1) evaluate the migration costs to
the new green architectures, and 2) propose joint network resource allocation for improving
the QoS of users and services while minimizing the energy consumption. The proposed
solutions are evaluated with numerical experiments, and the results demonstrate the
efficiency of the proposed solutions.
The remainder of this chapter is structured as follows. We provide the background
and motivation of this thesis in Section 1.1. We highlight the challenges and opportunities
of energy saving in mobile networks in Section 1.2. We survey the state of the art and
the research gaps in Section 1.3. We elaborate on the research questions and research method-
ologies in Section 1.4. Finally, the thesis contributions are summarized in Section 1.5.
performance and robustness of 5G and beyond systems. These solutions are fueled by
the massive amount of data generated in the networks. Implementation of AI becomes
feasible because of the availability of powerful data processing devices and techniques.
These mechanisms allow wireless networks to be both predictive and proactive. In recent
years, AI/ML techniques have been applied to many aspects of 5G design, including radio
resource allocation and network management, to improve the energy efficiency of the
network and provide higher QoS [29]. However, the impact of applying these solutions on
the overall, end-to-end performance needs more investigation.
The total electricity consumption of ICT networks increased by almost 60 percent
from 2007 to 2020, owing to the increasing number of devices, subscribers, and services,
and to network expansion. There are concerns in the industry that 5G will dramatically
increase total mobile network energy use if it is deployed in the same way as 3G and 4G to
meet the increasing traffic demand. In this case, telecom operators often need to add new
equipment while keeping existing network assets. This approach is not sustainable from an
energy cost and environmental perspective. Therefore, continuous improvement in energy
efficiency is essential in order to balance the growth in user numbers and data volumes,
meet current and future traffic demands, and simultaneously address network energy
consumption and related carbon emissions. On the one hand, rolling out the next generation
of mobile networks is inevitable to meet the new services and requirements; on the other
hand, the energy consumption curve keeps rising. Therefore, there is a crucial need to
break the energy curve by building, operating, and managing networks in a more
intelligent and strategic way.
For instance, migration from distributed to centralized radio access networks (C-RAN)
can be expensive in terms of capital expenditures due to the initial investment, while it
has lower operational expenditures due to the pooling of baseband processing into the cloud
and the reduced power consumption. Then, by assessing the crossover time, the time it
takes for the C-RAN TCO to drop below the distributed-RAN TCO, one can decide when to
migrate to the new architecture. Therefore, TCO analysis plays a pivotal role and is
essential in network migration, architecture design, and network planning.
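As a rough illustration of this crossover analysis, the following sketch finds the first year in which cumulative C-RAN TCO falls below cumulative distributed-RAN TCO. All cost figures are hypothetical placeholders, not values from this thesis:

```python
# Hypothetical TCO crossover sketch: every cost figure below is an
# illustrative assumption, not a value from the thesis.

def crossover_year(capex_cran, opex_cran, capex_dran, opex_dran, horizon=20):
    """Return the first year in which cumulative C-RAN TCO drops below
    cumulative D-RAN TCO, or None if it never does within the horizon."""
    for year in range(1, horizon + 1):
        tco_cran = capex_cran + year * opex_cran
        tco_dran = capex_dran + year * opex_dran
        if tco_cran < tco_dran:
            return year
    return None

# C-RAN: higher initial investment, lower yearly OpEx (pooled baseband,
# lower power); D-RAN: lower upfront cost, higher running cost.
year = crossover_year(capex_cran=100.0, opex_cran=8.0,
                      capex_dran=40.0, opex_dran=15.0)
print(year)  # -> 9
```

With these toy numbers, migration pays off after nine years; a real analysis would feed in measured CapEx and OpEx figures per site.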
LCA. Fig. 1.5 shows three parts of ICT-related LCA, each of which requires its own method-
ologies. Among them, ICT goods are the physical equipment used in the ICT industry. The
other two parts, i.e., ICT networks and ICT services, are not physical entities but logical
structures made up of ICT goods, including hardware and software. In addition, they
require building premises, civil works to create cable ways, air conditioning, power
generators, and power storage such as UPS. The environmental impact assessment of ICT
networks reflects the environmental impact of the ICT goods employed in the ICT networks,
and the environmental impact assessment of ICT services reflects the environmental impact
assessments of the ICT goods and ICT networks employed in the ICT services. Therefore,
it is difficult to define the assessment boundary for each part in detail. However, it is
important that the boundaries do not overlap, to avoid any double counting of the
environmental impact.
Figure 1.5: Relationship between methodologies of LCAs for ICT goods, networks and
services
Carbon footprint describes the environmental impact of a product or service over its entire
life cycle, based on an LCA method. The world population is producing more than 56 billion
tonnes of CO2 per year. The total annual operational carbon emissions of ICT networks
were estimated at 169 million tonnes of CO2 in 2015 [42]. This corresponds to 0.53% of the
global carbon emissions related to energy (about 32 Gtonnes), or 0.34% of all carbon
emissions (about 50 Gtonnes). According to recent studies [43], the energy footprint of
mobile network operations has reached up to 1.4% of global electricity consumption, a 0.2%
increase compared to 2015. The carbon footprint of mobile networks accounts for 14% of
that of the whole ICT sector. This high global footprint shows the impact of mobile networks
on environmental challenges and therefore reveals the necessity of driving research
trends towards green mobile networks.
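The quoted shares follow directly from the cited totals; a quick arithmetic check (using the figures as cited above) is:

```python
# Sanity check of the quoted shares, using the figures cited in the text:
# 169 Mtonnes of operational ICT-network emissions in 2015 against
# ~32 Gtonnes of energy-related and ~50 Gtonnes of total emissions.
ict_networks_mt = 169.0      # Mtonnes CO2, ICT networks (2015)
energy_related_gt = 32.0     # Gtonnes CO2, energy-related emissions
all_emissions_gt = 50.0      # Gtonnes CO2, all emissions

share_energy = ict_networks_mt / (energy_related_gt * 1000) * 100
share_all = ict_networks_mt / (all_emissions_gt * 1000) * 100
print(f"{share_energy:.2f}% of energy-related, {share_all:.2f}% of all")
# -> 0.53% of energy-related, 0.34% of all
```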
On the other hand, mobile network traffic and mobile user subscriptions have been growing
exponentially over the years, mainly due to the democratization of smartphones and tablets
and the increase in video content. At the same time, Internet of Things (IoT) related traffic
will gain in importance with the explosion of the number of connected objects. In order to
maintain or improve customers’ quality of experience (QoE), network operators densify
networks with additional equipment, which comes with increased energy consumption. This
results in two main challenges: an economic one, as operators’ margins are decreasing
(OPEX), and an environmental one, in a context aiming at reducing greenhouse gas
emissions. These concerns spark increasing interest in optimizing networks’ energy
consumption in the research community, both public and private, academic and industrial.
impact of BS density and traffic load on potential energy savings. The study in [54] aims at
load-adaptive SM management, considering deep, long-term SMs subject to QoS guarantees.
In [55], the ON-OFF state of 4G BSs is evaluated, quantifying the impact of BS sleep on
the QoS, e.g., dropping/delay. In [56], the authors evaluate the ON-OFF state of mm-wave
small cells, considering cellular traffic data from Tokyo. In [57], the authors propose a
stochastic model to perform dynamic tuning of the BS configuration using ASMs.
Recently, machine learning (ML) techniques have attracted great interest for the
management of BS sleep mode (SM) operations. However, implementing such SMs can save
BS energy at the cost of incurring serving delay to users. Thus, it is crucial to devise
novel yet efficient schemes that utilize ASMs without any adverse impact on the QoS
offered to the served users. Prior works report promising energy savings at BSs utilizing
ML in ASMs [58–63]. The authors in [58–60] investigate 4 SMs with a pre-defined order of
(de)activation of the SMs and decide on the duration of each SM. In [58], a heuristic
algorithm is proposed to implement the ASMs, and the authors investigate the impact of the
synchronization signaling periodicity on the energy saving performance of ASMs. In [59],
the authors propose a dynamic algorithm based on Q-learning for SM selection. In [60],
they propose a traffic-aware strategy that builds a codebook for mapping the traffic load to
the possible actions using a Q-learning algorithm. In these studies, the authors assume a
pre-defined order of decisions, which may not result in optimal energy saving. The authors
in [61] relax the pre-defined order of SMs and propose a traffic-aware Q-learning based
algorithm for SM selection. However, they train their network on a fixed traffic pattern.
Since control signaling can limit the energy saving of ASMs, the study in [62] proposes
a control/data plane separation in 5G BSs, which allows the implementation of deeper SMs,
e.g., SM4, or longer SM durations. This separation can also be leveraged in other network
settings, such as co-coverage scenarios where basic coverage cells carry all periodic
control signaling, leaving the capacity-enhancing cells with a higher ability to sleep and
save energy. In [63], we propose to differentiate between load levels and consider High
and Low load levels as part of our states. However, this study is based on tabular methods,
which 1) cannot capture long/short-term time dependencies and 2) are limited to two load
levels.
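As a rough illustration of the tabular Q-learning approach discussed above, the following sketch learns a sleep-depth policy from load-dependent rewards. The states, sleep-mode savings, arrival probabilities, and delay penalties are invented for illustration and are not the values used in the cited works:

```python
import random

# Minimal tabular Q-learning sketch for sleep-mode (SM) selection, in
# the spirit of the cited Q-learning studies. All numbers are
# illustrative assumptions, not values from the thesis.

STATES = ["low_load", "high_load"]
ACTIONS = ["SM1", "SM2", "SM3", "SM4"]            # increasingly deep sleep
SAVING = {"SM1": 1.0, "SM2": 2.0, "SM3": 4.0, "SM4": 8.0}
DELAY_RISK = {"low_load": 0.05, "high_load": 0.4}  # chance a user arrives
DELAY_PENALTY = {"SM1": 1.0, "SM2": 3.0, "SM3": 7.0, "SM4": 20.0}

ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action, rng):
    """Reward = energy saved minus wake-up delay penalty when a user
    arrives during the sleep period (toy load dynamics)."""
    penalty = DELAY_PENALTY[action] if rng.random() < DELAY_RISK[state] else 0.0
    reward = SAVING[action] - penalty
    next_state = rng.choice(STATES)
    return reward, next_state

def train(episodes=50000, seed=0):
    rng = random.Random(seed)
    state = "low_load"
    for _ in range(episodes):
        # epsilon-greedy action selection over sleep depths
        if rng.random() < EPS:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward, nxt = step(state, action, rng)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = nxt
    return {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}

policy = train()
print(policy)  # deeper sleep at low load, shallower at high load
```

Under these toy rewards, the learned policy sleeps deeply when arrivals are rare and more cautiously under high load, which is the qualitative behavior the cited works exploit.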
centers, only leaving behind RF and analog processing at the cell sites. On top of the afore-
mentioned benefits, this centralization can also be an enabler for coordinated multipoint
(CoMP) technology, allowing for more rapid inter-cell coordination on the millisecond
scale and hence better resource utilization and EE, especially for cell-edge users.
Despite the appealing features of C-RAN architectures, there are disadvantages to be
addressed. When all processing units are centralized, the capacity of the links between the
radio units and the central cloud may not be sufficient at some hours of the day. The
medium between the radio units and the central cloud must be scalable and low cost, while
having enough capacity to support the transport of raw I/Q signals to the central cloud.
Furthermore, an extra transport delay is introduced in the network, which may make it
challenging to support delay-sensitive applications. The C-RAN architecture has been
evolving over the past years to support multiple service requirements while keeping the
energy consumption as low as possible. One way of improving the C-RAN architecture is to
add another layer of processing, enabling the distribution of processing tasks between the
central cloud and the new layer, called the edge cloud. We refer to this flexible
semi-centralized architecture herein as hybrid C-RAN (H-CRAN), as proposed in SooGREEN.
H-CRAN is a multi-layer architecture in which physical layer network function splits are
optimized to jointly minimize power and midhaul bandwidth. H-CRAN leverages the previous
C-RAN structure with functional splitting in a three-layer architecture to share the
processing tasks between the CC and the EC. In Figure 1.6, we depict the evolution from
distributed RAN (DRAN) to H-CRAN. In DRAN, each radio unit is equipped with a digital
unit, and the units are connected to the core network via backhaul links. In C-RAN, the
idea is to pool all digital units, and the RUs are connected to this pool via fronthaul
links. In H-CRAN, another layer of processing is added between the RUs and the DU pool
(central cloud), where the processing functions can be distributed between the processing
layers. In the context of realizing a practical H-CRAN, several research efforts have been
made. H-CRAN consists of three layers, namely the cell layer, the EC layer, and the CC
layer. The cell layer includes the RUs, which are being densified, each serving several
user equipment (UEs). The RUs are connected to the ECs. In fact, an EC mainly acts as an
aggregation point where the data of a group of RUs is collected. The fronthaul links
between the RUs and the ECs are assumed to be mmWave links. Furthermore, the ECs transmit
the aggregated data to the CC via the midhaul. In this architecture, the CC and the ECs
are equipped with DUs that are able to perform the function processing of the requested
content. Therefore, these DUs can serve any connected RUs by sharing their computational
resources. For instance, in the upstream, traffic from cells can be partially processed at
the EC so that the bandwidth requirement on the midhaul links can be relaxed, and the
remaining processing is then conducted at the CC. However, the EC is usually less energy
efficient than the CC, because the number of DUs at the CC is larger than that in each EC.
Hence, sharing infrastructure equipment offers a multiplexing gain that results in higher
energy savings at the CC. The tradeoff becomes whether to save midhaul bandwidth with
improved delay performance by distributing functions to the ECs, or to gain from the power
saving that is an intrinsic feature of centralizing all functions at the CC. Since in
H-CRAN we have a processing layer closer to the users, delay-sensitive services can be
served at the EC, while delay-tolerant applications are served at the CC to benefit from
the energy saving features of the cloud. We further distribute the processing functions to
tackle the problem
are placed at the CC and functions below the split are placed at the EC.
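To make the EC/CC trade-off concrete, the following sketch picks a functional split by minimizing a weighted bandwidth/power cost under a midhaul capacity limit. The split options, bandwidth figures, power figures, and weights are all hypothetical, not values from SooGREEN or this thesis:

```python
# Illustrative sketch of the EC/CC trade-off: centralizing more
# functions at the CC saves power (multiplexing gain) but demands more
# midhaul bandwidth. All numbers are hypothetical assumptions.

# (split name, midhaul bandwidth in Gbps, total power in W) per cell
SPLITS = [
    ("all-at-EC",  1.0, 120.0),   # everything processed at the edge
    ("split-A",    2.5, 100.0),   # low PHY at EC, rest at CC
    ("split-B",    6.0,  85.0),   # only RF/analog left below the split
    ("all-at-CC", 10.0,  70.0),   # full centralization
]

def best_split(midhaul_capacity_gbps, weight_power=1.0, weight_bw=10.0):
    """Pick the feasible split minimizing a weighted bandwidth/power cost."""
    feasible = [s for s in SPLITS if s[1] <= midhaul_capacity_gbps]
    return min(feasible, key=lambda s: weight_bw * s[1] + weight_power * s[2])

# With scarce midhaul, distribution wins; with abundant midhaul and a
# power-dominated cost, full centralization wins.
print(best_split(3.0))                  # -> ('split-A', 2.5, 100.0)
print(best_split(12.0, weight_bw=1.0))  # -> ('all-at-CC', 10.0, 70.0)
```

A joint optimization such as the one studied in this thesis would additionally couple the split choice with delay constraints and per-service requirements rather than a single scalar cost.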
quirements in the fronthaul links are summarized in [7, 66]. These requirements make
passive optical networks the most suitable transport medium in the fronthaul [67]. There-
fore, from the design perspective, new challenges should be considered. These challenges
include (i) the optimal placement of the pools to centralize the processing, (ii) the
optimal design of the fronthaul to guarantee the requirements, and (iii) the cost of the
network.
Table 1.1: Fronthaul capacity requirement for different split points
To tackle such challenges, the authors in [68] proposed an energy efficient aggregation
network and a pool placement optimization problem to minimize the total aggregation
infrastructure power. The authors in [69] proposed an optimization problem to determine
the configuration of each DU pool that minimizes the deployment cost. The fronthaul
constraint, which must guarantee a certain latency, the infrastructure deployment, and the
pool placement are among the main challenges of C-RAN. The authors in [70] presented an
overview of the fronthaul requirements and proposed architectures and transmission
technologies for next generation optical access networks. The authors in [71] addressed
the fronthaul constraint problem by proposing several solutions, such as signal compression
and quantization, coordinated signal processing, and radio resource allocation
optimization. The study in [72] investigated the impact of wired and wireless backhaul on
the delay performance of cellular networks; the infrastructure cost of backhaul deployment
was also considered.
In order to assess the monetary aspect of the network, a thorough cost model of the
network is required. The authors in [73] modeled the power consumption of the backhaul
in different scenarios and investigated the impact of the backhaul network on the TCO of
cellular networks. They demonstrated that if a power efficient backhaul solution such as
a fiber optic architecture is adopted, the share of backhauling power consumption will be
small compared to the radio access part. The study in [74] attempted to model the total
cost of backhauling with two different technologies, i.e., microwave and fiber. Based on
this study, it was shown that fiber is the most promising backhaul technology, with high
capacity and low delay.
To provide a techno-economic framework for the C-RAN architecture, all the network costs
must be considered. Therefore, the studies [32, 75–79] take TCO into account as their
evaluation metric. In [75], the authors presented a methodology to evaluate the TCO of
different technologies. Their study mainly focused on the migration cost of optical access
technologies, considering infrastructure and technology upgrades. The authors in [76]
proposed a TCO model that considers scalability, CapEx and OpEx, as well as the uncertainty
in different stages, such as demand and penetration index. They showed that the maximum
expected benefit is highly sensitive to the resource designation. The authors in [77]
analyzed the TCO of network migration towards passive optical networks, concerning both
infrastructure and technology upgrades. In their study, they considered
different migration starting times, customer penetration, node consolidation, and network
provider business roles in fiber access networks. The study in [78] presented a
comprehensive techno-economic framework able to assess both the TCO and the business
viability of different deployments. The authors stated that TCO alone is not sufficient to
understand the profitability of different architectures, and that more parameters, such as
net present value (NPV) and cash flow, should be considered. The study in [79] modeled the
costs of fiber and microwave architectures and calculated the TCO for different
geographical regions. However, optimal fronthaul design and functional splitting are
missing from their study.
The study in [32] formulated the TCO minimization problem as a constraint programming
problem. In this study, the authors found the optimal functional split point for each radio
unit and digital unit pair. However, the problem of finding the optimal locations for the
pools was not resolved. The study in [80] provided a cost analysis of the deployment of the
wireless and optical x-haul segment for a fixed wireless access network. They considered
RAN functional splits with diverse capacity and coverage requirements.
Caching is another approach that may not only relieve fronthaul congestion but also
reduce the content delivery latency. In caching, popular content is cached closer to the
users, e.g., in the edge cloud (EC), allowing user content demands to be accommodated more
easily and quickly. Since content access delay is an important factor in caching problems,
various algorithms and techniques have been proposed to achieve lower latencies. The
challenges, paradigms, and potential solutions for caching are discussed in [81]. In [82],
cooperative hierarchical caching is proposed to minimize the content access delay and
boost the quality-of-experience (QoE) for end users. In [83], the authors proposed caching
algorithms to optimize the content caching locations and hence reduce the delivery delay.
In [84], the authors presented a caching structure and proposed a cooperative
multicast-aware caching strategy to reduce the average latency of delivering content.
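The latency gain from edge caching can be illustrated with a toy model: cache the most popular contents at the EC and compare the expected delivery latency against fetching everything from the CC. The Zipf popularity law, the exponent, and the latency figures are illustrative assumptions, not values from the cited works:

```python
# Toy model of the edge caching gain: hits on the cached top-k contents
# are served at EC latency, misses are fetched from the CC. The Zipf
# exponent and latency figures are illustrative assumptions.

def zipf_popularity(n, s=0.8):
    """Normalized Zipf popularity distribution over n contents."""
    weights = [1.0 / (rank ** s) for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def expected_latency(n_contents, cache_size, lat_ec=2.0, lat_cc=10.0):
    """Expected delivery latency (ms) when the cache_size most popular
    contents are held at the EC."""
    pop = zipf_popularity(n_contents)
    hit_rate = sum(pop[:cache_size])
    return hit_rate * lat_ec + (1.0 - hit_rate) * lat_cc

no_cache = expected_latency(1000, 0)      # everything from the CC
with_cache = expected_latency(1000, 100)  # top 10% cached at the EC
print(f"{no_cache:.1f} ms -> {with_cache:.1f} ms")
```

Because Zipf popularity is heavily skewed, caching a small fraction of the catalog already captures a large fraction of requests, which is why the cited works place caches at the EC.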
H-CRAN leverages the previous CC/EC structure with functional splitting in a three-
layer architecture to share the processing tasks between the CC and the EC. In [85], an
end-to-end delay model is proposed for different functional split options. H-CRAN can
simultaneously employ caching and functional splitting to tackle the fronthaul bottleneck
problem. Although the existing content caching algorithms can reduce the service delay,
it is not easy to decide where to deploy the content caches, since there is a trade-off in
balancing centralized function processing against distributed caching, especially in
H-CRAN. It is worth noting that caching content at the EC obviates functional splitting,
since the content is already at the EC and it is not meaningful to centralize its
processing at the CC. Due to this dependency, it is important to jointly decide whether to
centralize or distribute content caching together with the network processing functions.
the authors study the challenges of mapping together the available resources in the various
layers of the network slicing architecture in order to compose an end-to-end slice. In [94],
the authors propose an end-to-end slicing approach spanning the RAN, transport network,
and core network. However, an analysis of the impact of one segment on another is missing.
In [95], the authors propose an end-to-end network slicing framework for 5G resource
allocation. The proposed framework covers the RAN and the core network. In the RAN, they
dynamically allocate wireless resources to slices and base stations (BSs), and the aim is
to maximize the total rate in the system. The authors in [96] dynamically allocate RAN
resources, i.e., bandwidth, and cloud resources, i.e., baseband processing resources, to
each slice. The objective of this framework is to minimize the violation of the rates of
each slice according to their SLAs. The authors in [97] propose an algorithm to avoid
over-provisioning resources to the slices while still satisfying the latency constraints.
They show that the bottleneck for enhanced mobile broadband services is the computing
resources, while the bottleneck for URLLC services is the communication resources.
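An objective of the kind described for [96] can be sketched with a simple greedy rule: distribute a fixed bandwidth budget over slices so that the largest SLA rate shortfall is always reduced first. The slice names, demands, spectral efficiencies, and the greedy rule itself are illustrative assumptions, not the cited formulation:

```python
# Hedged sketch of SLA-violation-minimizing bandwidth allocation across
# slices. Demands, efficiencies, and the greedy rule are illustrative
# assumptions, not the formulation of the cited work.

def allocate(budget_mhz, slices, step=1.0):
    """Greedily give each step of bandwidth to the slice whose SLA
    shortfall (demanded rate minus achieved rate) is currently largest."""
    alloc = {name: 0.0 for name, _, _ in slices}
    demand = {name: d for name, d, _ in slices}
    eff = {name: e for name, _, e in slices}        # Mbps per MHz
    remaining = budget_mhz
    while remaining >= step:
        shortfall = {n: demand[n] - alloc[n] * eff[n] for n in alloc}
        worst = max(shortfall, key=shortfall.get)
        if shortfall[worst] <= 0:                   # all SLAs satisfied
            break
        alloc[worst] += step
        remaining -= step
    return alloc

# (slice, demanded rate in Mbps, spectral efficiency in Mbps/MHz)
slices = [("eMBB", 400.0, 5.0), ("URLLC", 50.0, 2.0), ("mMTC", 20.0, 1.0)]
print(allocate(100.0, slices))
```

With these toy numbers the budget is insufficient for all SLAs, so the greedy rule equalizes the residual shortfalls across the three slices; a real formulation would optimize this jointly, with latency and processing constraints, rather than greedily.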
In the literature, despite the studies on end-to-end resource allocation for each slice,
end-to-end network slicing models are not yet well investigated. RAN slicing is one of the
key parts of network slicing. In RAN slicing, the communication resources, e.g.,
bandwidth, can be dynamically allocated to slices depending on their loads in the network.
In the literature, RAN slicing has been investigated from different perspectives. The
authors in [98] discuss how the RAN can be sliced to satisfy heterogeneous service
requirements. In this study, different sharing concepts and RAN slicing challenges are
described. In [99], the authors propose a network slicing-based architecture, focusing
particularly on the RAN, and elaborate on how artificial intelligence (AI) can potentially
assist the network in terms of resource provisioning. The study in [100] formulates a
dynamic optimization model considering power consumption and service quality. In their
model, they take into account the limited radio resources, power resources, and latency
constraints. The study in [101] formulates a framework for RAN slicing in which the delay
performance is improved by content caching schemes. The authors in [102] aim to maximize
bandwidth utilization while guaranteeing the quality of service (QoS) of the users in
their slice. The study in [103] proposes a two-timescale radio resource allocation for
network slicing, where one timescale captures the long-term dynamic characteristics of
service requests and the other adapts the allocated resources over short time periods. The
study in [104] proposes a two-timescale RAN slicing scheme where one timescale is for
resource reservation and the other for resource allocation within each slice in a C-RAN
architecture.
maintaining the users’ QoS. Most of the architecture designs and energy saving algorithms
in the literature consider static traffic and hence neglect the dynamicity of the network.
Solutions that can enable the practical implementation of energy saving features are
missing, due to the unpredictable risk of putting network resources into sleep, which can
degrade network performance. Today’s mobile networks are designed for peak traffic; most
of the time they are idle but not sleeping. Radio and computational resources are
under-utilized in conventional radio access network architectures, where processing
resources are co-located with the radio sites. In this thesis, we will investigate the
high level research question (HRQ):
"How to design green network architectures and dynamically allocate network resources
to minimize energy consumption considering the network state, traffic load, and demanded
QoS?"
We describe the steps and methodology to approach the solutions that can address this
high level research question and contribute to its answers.
To address this question, we leverage machine learning based algorithms and real network
traffic traces obtained from a mobile network operator. We explain the studied open research
questions, the problem formulations, and our solution approaches to fill the research gap
and respond to the need to improve the energy efficiency of the network: reducing energy
consumption, improving the resource allocation schemes, and improving the network
architecture design.
To approach the high-level question, we break it down into two parts. In the first part,
we look into intent-based1 risk-aware energy saving methods for the network, focusing on
its most energy consuming part, i.e., the base stations. One way of saving energy at a BS
is to put it into sleep when no users are attached to it. Selecting among different sleep
modes enables load-adaptive sleeping by deciding when and how deep to sleep. Thus, we
narrow down the research question into the following detailed research questions:
• RQ1: How to dynamically put BSs into sleep modes when they are idle, i.e., when
and how deep to sleep, given the previous user arrivals and network load?
• RQ2: Depending on how deep the BS sleeps, how can we design a network management
framework that considers the mobile operator's intent and continuously analyzes and
monitors the risk while using ML to make decisions?
The current network needs to be upgraded to meet the requirements of 5G and beyond
networks. According to the literature, cloud radio access network (C-RAN) based
architectures are promising for supporting scalable network infrastructure growth, meeting
the need for higher data rates while keeping the energy consumption as low as possible.
However, there are multiple network architecture options to realize the C-RAN concept,
depending on the traffic intensity and the available optical fiber infrastructure. Each
network architecture proposal has its own pros and cons. Therefore, there is a need to
investigate the proper choice of network architecture to upgrade to, in terms of migration
costs and energy consumption. Hence, as the second part of this thesis, we study the network
1 Intent is defined as the goal and business objective of the mobile operator.
architectures from an end-to-end perspective and investigate different aspects of such
architectures, including the migration costs, energy consumption, delay, and end-to-end
resource allocation schemes. First, we answer the following questions:
• RQ3: What is the deployment cost of different C-RAN architectures? How sensitive is
an architecture to price changes?
• RQ4: How can artificial intelligence help us design a C-RAN fronthaul with minimum
cost and energy consumption?
• RQ5: Is there a way to save energy while meeting the delay requirements of users by
placing content at the edge cloud instead of the centralized cloud?
• RQ6: How to allocate network resources in the fronthaul, edge cloud, and centralized
cloud to minimize energy consumption while meeting the users' QoS?
5G and beyond networks support a growing number of services with vast and heterogeneous
QoS requirements. Considering this trend, one architecture does not fit all services.
Network slicing can help operators tailor their network to specific services and dedicate
network resources to them so that all are served with satisfactory QoS. An end-to-end
network slice spans multiple network segments, including the access, core, and transport
networks. A network slice comprises dedicated and/or shared resources, e.g., in terms of
processing power, storage, and bandwidth, and is isolated from the other network slices.
This advance reservation of predefined resources for each slice may under-utilize the
network resources and possibly increase energy consumption. Hence, meticulous network
management and resource allocation schemes are required to allocate resources to each
slice while keeping the network's energy consumption as low as possible. In this thesis,
we investigate the following research questions in the context of network slicing.
• RQ7: How to allocate computation and communication resources to each slice for
satisfying the QoS and minimizing energy consumption?
• RQ8: How can machine learning capabilities be used in slice admission to improve the
performance of dynamic slicing in terms of energy saving?
• RQ9: How can the designed algorithms be trained on real network data to make
intelligent decisions about slices?
The network slicing concept allows the deployment of virtual network slices on top of
the same physical infrastructure. In this study, we propose a model to estimate the
impact of slice deployment and operation on energy consumption and delay. This study
is facilitated by the use of collected network data and artificial intelligence
techniques. In addition, ML-based algorithms are employed to develop online
energy-efficient optimization algorithms for allocating communication and computation
resources, which are usually too complex to be tackled by current optimization methods.
Mobile network operators anticipate a surge in their network energy consumption due to
exponentially increasing network traffic and the introduction of new services [4].
Therefore, to make their networks sustainable, energy efficiency becomes a key issue in
future deployments. Studies show that base stations (BSs) consume a major share of the
overall network energy [51]. Thus, reducing the BSs' energy consumption can boost the
energy efficiency of the network. In particular, base station sleeping is one of the
promising ways of reducing energy consumption in mobile networks; however, it may incur
longer service delays for the users. Due to the variation of the traffic profile during a
day, the network can be lightly loaded at times. During the day, there are many, but
short, durations in which no user is connected to the BS while the BS is active and
consuming energy. In such idle durations, the BS can put some of its components to sleep
and hence save energy. In this chapter, we examine the potential for energy saving as
well as the challenges of activating sleep modes in 5G networks.
1. SM 1: This SM is the fastest and shallowest, with a duration comparable to the symbol
time Ts. Whenever a BS has no traffic over the entire band of sub-carriers during a
symbol time per antenna, the PA can be switched off to save power. This mode is
available in current technologies as micro-scale discontinuous transmission (µDTX).
Table 2.1: BS power consumption and minimum duration per mode [10].

    Mode        Active (100% load)   Active (0% load)   SM1    SM2      SM3
    Pow. (W)    702.6                114.5              76.5   8.6      6.0
    Dur. (sec)  Ts                   Ts                 Ts     1 msec   10 msec

(The active states and SM1 together constitute the fast modes (FMs), with durations on
the order of the symbol time Ts.)
2. SM 2: A slightly deeper sleep level can be reached by switching off one more set of
components, with a slower (de)activation transition time than that of SM 1, if this
longer delay can be afforded by the system. In this mode, the transition is on the
transmission time interval (TTI) scale, i.e., 1 msec (the duration of a sub-frame,
constituting 2 resource blocks (RBs) or 14 resource elements (REs), assuming frequency
division duplexing (FDD) Frame Structure 1 [105]).
3. SM 3: This mode has a deeper sleeping level but with a minimum duration of a 10
msec frame (10 sub-frames).
4. SM 4: This is the deepest SM with a minimum duration of 100 frames (1 sec).
In this study, we consider SM1-SM3. Table 2.1 presents the power consumption and minimum
duration of each SM for a macro base station with maximum power 49 dBm over 3 sectors and
a bandwidth of 20 MHz [10].
The FE includes amplifiers and filters (analog baseband and RF), up/down-conversion mix-
ers, frequency synthesizer and digital to analog/analog to digital converter (DAC/ADC).
BB includes baseband filtering, up/down-sampling, FFT/IFFT, digital pre-distortion, digi-
tal compensation of system non-idealities, modulation and demodulation, channel encoding
and decoding, channel estimation, synchronization, multiple-input multiple-output (MIMO)
processing, and equalization. The DC contains the platform control processor, the backbone
serial link interface, and the MAC and network layer processors. The PS includes the AC/DC
converter, DC/DC converters, and active cooling.
The BS power consumption model adopted in this thesis follows the one implemented by imec
in [9] and reported in [10, 63]. The BS can operate in three modes, namely fast mode (FM),
SM2, and SM3. The FM merges three cases: 1) active with 0 to 100% load, 2) idle with 0%
load, and 3) SM1. In this mode, the BS decides and hops instantaneously into any
operational mode depending on the arriving traffic, without any need for optimization.
The remaining two sleep modes incur a transition delay from the time of decision until
completion of the action, which causes additional queuing delay for traffic arriving
during the transition period.
    P_c = P_idle + P_t,   if the BS is in active mode,
    P_c = P_SMi,          if the BS is in SMi, i ∈ {1, 2, 3}.    (2.2)
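As an illustration, the power model of Equation (2.2) together with the Table 2.1 values can be sketched in a few lines. The function name, the mode labels, and the linear interpolation of the load-dependent term P_t are our assumptions, not taken from the thesis:

```python
# Sketch of the BS power model of Eq. (2.2) with the Table 2.1 values.
# The linear load interpolation for P_t is a modeling assumption.

P_SM = {1: 76.5, 2: 8.6, 3: 6.0}  # sleep-mode power P_SMi (W), Table 2.1
P_IDLE = 114.5                    # active mode at 0% load (W)
P_MAX = 702.6                     # active mode at 100% load (W)

def bs_power(mode, load=0.0, sm_level=1):
    """Consumed power P_c (W); mode is 'active' or 'sleep'."""
    if mode == "active":
        # P_idle plus a load-dependent transmission term P_t (assumed linear)
        return P_IDLE + load * (P_MAX - P_IDLE)
    return P_SM[sm_level]

full = bs_power("active", load=1.0)   # full-load active power
deep = bs_power("sleep", sm_level=3)  # deepest mode considered here, SM3
```
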
The BS consumes transient power when switching between operational modes. Denote the
total power consumption of switching during a period T by P_sw^T, the power consumption
per switching by P_sw, and the number of switchings during T by F_m; then we have

    P_sw^T = F_m P_sw    (2.3)
where κ is 60 in our study, since we have data for 60 days, and T denotes the set of time
indices during the day. The cardinality of the set T is 24 · 60/t_s, where t_s is the
sampling time, which is 5 minutes in our study. Then, for each set, i.e., each 5-minute
bin, we calculate the mean, E(O_i), and variance, Var(O_i), of the original data. Finally,
based on the derived statistics, we can generate the demanded data rate for each time step
and each user, using the steps explained in the next subsection.
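The per-bin statistics step described above can be sketched as follows. The data here is synthetic and the array names are ours; only the binning and the per-bin mean/variance computation reflect the procedure in the text:

```python
import numpy as np

# Sketch of the per-bin statistics step: pool the samples of all
# kappa = 60 days for each 5-minute time index of the day, then
# compute E(O_i) and Var(O_i) per bin. The data below is synthetic.

ts = 5                      # sampling time (minutes)
kappa = 60                  # number of days in the data set
n_bins = 24 * 60 // ts      # |T| = 288 time indices per day

rng = np.random.default_rng(seed=0)
# rates[d, i]: demanded rate observed on day d at time index i (synthetic)
rates = rng.gamma(shape=2.0, scale=3.0, size=(kappa, n_bins))

mean_O = rates.mean(axis=0)  # E(O_i) for each of the 288 bins
var_O = rates.var(axis=0)    # Var(O_i) for each bin
```
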
It is worth mentioning that this coarse-grained time granularity provides no information
about the arrival times or the time between two arrivals, which is a key parameter for
determining the idle time of the BS. Therefore, an accurate and yet tractable model is
required to generate the random arrivals of the users.
    P_ON = τ / (τ + ζ),    P_OFF = ζ / (τ + ζ).    (2.5)
[Diagram: two-state ON/OFF Markov chain with transition rates τ and ζ.]
    Ψ = Σ_{j=1}^{U} ψ_j    (2.6)

    E(Ψ) = E(U) E(ψ)    (2.7)

    Var(Ψ) = E(U) Var(ψ) + Var(U) E(ψ)^2.    (2.8)
In Equation (2.6), the ψ's are independent random variables, and for a sufficiently large
value of U, e.g., more than 30, Ψ is normally distributed, regardless of the distribution
of the ψ's, with mean E(Ψ) and variance Var(Ψ).
To generate the user arrivals and their data rates with the original data set statistics,
the model parameters, i.e., τ, ζ, λ, E(ψ), and Var(ψ), should be set such that
Equations (2.7) and (2.8) hold.
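The arrival generation described above can be sketched as a two-state interrupted Poisson process (IPP). In this sketch we take the ON holding time as Exp(ζ)-distributed and the OFF holding time as Exp(τ)-distributed, which yields P_ON = τ/(τ + ζ) as in Equation (2.5); this mapping of τ and ζ to the transitions, and all parameter values, are our illustrative assumptions:

```python
import random

# Sketch of an ON/OFF (IPP) user-arrival generator: Poisson arrivals
# with rate lam during ON periods, no arrivals during OFF periods.
# ON periods ~ Exp(zeta), OFF periods ~ Exp(tau), so the stationary
# ON probability is tau / (tau + zeta), matching Eq. (2.5).

def simulate_arrivals(tau, zeta, lam, horizon, rng):
    """Arrival times in [0, horizon) of an interrupted Poisson process."""
    t, on, arrivals = 0.0, True, []
    while t < horizon:
        hold = rng.expovariate(zeta if on else tau)  # state holding time
        if on:
            a = t + rng.expovariate(lam)
            while a < min(t + hold, horizon):
                arrivals.append(a)
                a += rng.expovariate(lam)
        t += hold
        on = not on
    return arrivals

rng = random.Random(1)
arr = simulate_arrivals(tau=0.1, zeta=0.5, lam=2.0, horizon=10_000.0, rng=rng)
rate = len(arr) / 10_000.0  # should approach lam * tau / (tau + zeta) = 1/3
```

The long-run arrival rate of the generated process concentrates around λ · P_ON, which is the property used when matching the model to the measured statistics.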
Figure 2.3: Digital twin based risk aware sleep mode management [6].
we propose a solution for BS ASM management to estimate the risk. Finally, we propose a
framework for risk-aware BS sleep mode management that combines intelligent sleep mode
management with hidden Markov model based risk estimation.
In this thesis, risk refers to a situation in which the SM management algorithm takes
sleep decisions that delay the connection setup of future user arrivals. The more users
experience delay, the higher the risk associated with the situation. Therefore, if the
risk can be measured in advance, the BS can avoid delaying a large number of users. For
this purpose, we utilize a hidden Markov model, as a virtual representation of a process,
in our case BS sleeping. The model is continuously updated from real-time data and uses
machine learning and reasoning to support decision-making. It thus helps us understand
the present and predict the future of the BS sleeping performance metrics, e.g., sleeping
duration, probability of delaying users, and the risk for each state of the network. In
the following, we explain the Markov model for the BS sleep mode management algorithm.
In this study, as depicted in Fig. 2.3, the digital twin has three main parts: 1)
parameter update, 2) network model, and 3) prediction. The first two construct the
virtual network, a representation of the physical network, which is modeled as a hidden
Markov process and continuously updated from real-time data. In the prediction phase, we
use the updated virtual network to estimate and predict the future performance metrics of
BS sleeping, e.g., sleeping duration, probability of delaying users, and the risk for
each state of the network.
Figure 2.4: Markov model for advanced sleep mode management [6].
in which one of the states has zero emission rate. Therefore, the state diagram depicted in Fig. 2.4 is a hidden
Markov model.
The BS can move from any sleep mode state S(i, j), i ∈ {1, 2, 3}, to the other SMs only
when j = 2, i.e., when there is no traffic arrival. Here, S(i, j) is the state in which
the BS is in sleep mode i and the arrival process is in state j, and A(m, j) is the state
in which the BS is active and serving m users while the arrival process is in state j,
where j = 1 means the ON state and j = 2 means the OFF state.
In [6], we have written the balance equations for the Markov model in Fig. 2.4. The
balance equations are used to calculate the steady-state probabilities, i.e., ν_{i,j}
and u_{m,j}.
Here λ is the user arrival rate, and ν_{i,1}, i ∈ {1, 2, 3}, are the probabilities of
being in SMi while arrivals are in the ON state. From the operator's perspective, the
smallest possible value of U_s is preferable, since then the minimum number of users
experience delay before being served. Although this metric reflects the operators'
preference, it cannot by itself measure the performance of BS sleeping algorithms. For
instance, when the network is busy, i.e., λ is high, the probability of sleeping is low;
however, one inappropriate sleeping decision may result in a large number of delayed
users. On the other hand, in off-peak hours, when λ is low and the sleeping probability
is high, very few users may experience delay. Therefore, the parameter U_s alone cannot
reflect the higher risk of incurring high delay at peak hours. To tackle this issue, a
novel metric called risk of decision making (RDM) is introduced to measure the risk of
taking a non-optimal action, by taking into account U_s and the probability of sleeping
(in both ON and OFF states). The risk of a wrong decision is formulated as,
can be dynamically set by the operator depending on, e.g., the hour of the day and the
assured QoS. Using the same data sets as those used for training the algorithms, the RDM
can be pre-calculated as a reference value. The actual RDM in the network can then always
be compared with this reference value. If the experienced RDM is above the expected one,
the current traffic pattern differs from the expected traffic. Therefore, the BS should
disable the sleeping features, i.e., the SMs, until either the algorithm is retrained on
a new data set or the traffic behavior becomes normal again.
Interaction between the DT and the Physical Network:
Having defined the DT, the underlying Markov model, and the performance metrics, we now
explain how the parameters of the hidden Markov model within the DT can be obtained and
updated. The parameters of the hidden Markov model are functions of the user arrival
patterns, rates, ON state durations, and OFF state durations. Therefore, these parameters
can change during the day. To obtain the parameters of the model, i.e., λ, τ, ζ, from the
training data set, we can use the well-known forward-backward method, i.e., the
Baum-Welch algorithm [109]. The Baum-Welch algorithm finds the maximum-likelihood
estimate of the parameters of a hidden Markov model given a set of observed input
sequences. It computes the statistics of the input traffic sequence
O = {o_1, o_2, ..., o_T} and then updates the maximum-likelihood estimate of the model
parameters, i.e., the ON/OFF state durations and the arrival rate. The procedure of the
Baum-Welch algorithm for extracting the IPP parameters is given in [107].
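For intuition, consider the simplified case where the ON/OFF state sequence is fully observed: the maximum-likelihood estimates of the transition rates then reduce to transition counts divided by holding times. The sketch below illustrates this simplified case only; the thesis uses Baum-Welch precisely because the states are hidden. We take τ as the OFF-to-ON rate and ζ as the ON-to-OFF rate, consistent with P_ON = τ/(τ + ζ); names and data are illustrative:

```python
# Simplified illustration of ON/OFF rate estimation when the state
# sequence is observable: the MLE of each transition rate is the number
# of observed transitions divided by the total time spent in the source
# state. (Baum-Welch is needed when the states are hidden.)

def estimate_onoff_rates(states, dt):
    """Return (tau_hat, zeta_hat): OFF->ON and ON->OFF rate estimates."""
    time_on = sum(dt for s in states if s == 1)
    time_off = sum(dt for s in states if s == 0)
    on_off = sum(1 for a, b in zip(states, states[1:]) if (a, b) == (1, 0))
    off_on = sum(1 for a, b in zip(states, states[1:]) if (a, b) == (0, 1))
    return off_on / time_off, on_off / time_on

states = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0]  # toy observation sequence
tau_hat, zeta_hat = estimate_onoff_rates(states, dt=1.0)
```
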
where (x)^+ is an operator that takes the value of x if it is positive and zero
otherwise, p_i, i ∈ A, is the power consumption of the SMs, and r_p ∈ [0, 1].
When the system receives a request while it is in one of the deep SMs, i.e., d_i ≠ 0, or
in the case when d_i = l_c = 0, the normalized delaying penalty can be calculated as

    r_{i,d} = - d_i / l_c^max,    (2.13)

which takes 0 when d_i = 0 and -1 when d_i = l_c^max. When the system is in FM with
l_c ≠ 0, although no power saving is attained, there should be a reward for the avoided
delay, i.e., (d_i = 0) & (l_c ≠ 0); the reward is

    r_d = l_c / l_c^max.    (2.14)
The reward incurred at the end of the i-th time index due to the action taken at the end
of time index i - 1 can now be plausibly defined as
where α is a weight parameter that prioritizes power saving or serving delay. When an
action is taken, the system is frozen for the minimum duration of that action. Therefore,
the incurred reward is calculated at the end of this duration.
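As a sketch, the delay terms of Equations (2.13)-(2.14) can be combined with a power-saving reward. The specific combination below, r = α·r_delay + (1 - α)·r_power, is our assumption, chosen so that a larger α gives less weight to energy saving; the exact reward used in the thesis is not reproduced here:

```python
# Sketch of a per-step reward combining the delay terms of
# Eqs. (2.13)-(2.14) with a power-saving reward r_power in [0, 1].
# The linear alpha-weighted combination is an assumption.

def delay_term(d_i, l_c, l_c_max):
    if d_i > 0 or l_c == 0:
        return -d_i / l_c_max  # Eq. (2.13): penalty in [-1, 0]
    return l_c / l_c_max       # Eq. (2.14): reward for avoided delay

def reward(r_power, d_i, l_c, l_c_max, alpha):
    # alpha weights delay vs. power saving (assumed linear combination)
    return alpha * delay_term(d_i, l_c, l_c_max) + (1 - alpha) * r_power

r = reward(r_power=0.8, d_i=0.0, l_c=2.0, l_c_max=10.0, alpha=0.3)
```
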
(a) Average original and generated load. (b) User arrivals in 5 seconds.
Figure 2.5: Network load generation as input to the network for τ = 0.1 and ζ = 0.5 [6].
to the α. The higher α is, the less weight is assigned to energy saving. We compare the
performance of SMA with the OBS algorithm defined in [63], which serves as an upper bound
on the energy saving gain: OBS is optimal but non-causal, since it has knowledge of
future information. For α = 0, the total reward includes only the energy saving reward,
and SMA can achieve optimal energy saving, but at the cost of incurring delay to the
users. When latency is prioritized more, the BS becomes more conservative in choosing
deeper SMs and prefers shallower SMs; therefore, less energy is saved in favor of less
incurred delay. SMA outperforms the fixed-SM scheme, where only SM1 is in use and the BS
cannot benefit from deeper SMs and misses energy saving opportunities. Moreover, SMA
performs better than the SARSA algorithm because SMA can find better long- and short-term
dependencies in the traffic data and leverage this information to find a better energy
saving policy.
Fig. 2.8 illustrates the absolute energy consumption of SMA compared to three cases:
1) when no sleeping is used, 2) when only SM1 is used, and 3) the optimal energy saving
approach. During rush hours, SMA mostly chooses the shallowest SM, hence its performance
is very close to the SM1-only case. Moreover, during these hours there is little room to
save energy, and hence it is sensible to be more conservative about delaying users and
let the BS either stay awake or benefit from SM1.
Fig. 2.7 shows the normalized energy saving performance of SMA and OBS over 24 hours. The
values are normalized with respect to the maximum energy saving of the OBS algorithm. The
hours between 2:00 and 6:00 AM have the highest energy saving potential, while the least
opportunity for energy saving occurs in the evening, at about 18:00. It is worth
mentioning that the data set under consideration is for an area dominated by industrial
buildings; the traffic pattern, and hence the energy saving pattern, depends on the type
of area.
In Fig. 2.9, we plot the performance of the risk-aware BS sleeping algorithm. Using the
procedure in Fig. 2.3, the BS calculates the risk value, and when the risk is high it
temporarily
[Figure: normalized energy saving vs. α for SMA, SARSA, OBS, and the SM1-only scheme.]
[Figure: normalized energy saving of OBS and SMA over the hours of the day.]
deactivates the SMs. While the SMs are deactivated, the BS keeps calculating the
potential risk, and once the risk has remained small for a while, i.e., for a duration
of T, it activates the SMs again.
[Figure 2.8: energy consumption (kJ) over the hours of the day for No SM, Only SM1,
Optimal, and SMA.]
[Figure 2.9: risk of decision making (RDM) over time, with the RDM threshold and the
SMA activation status.]
2.7 Summary
In this study, we propose a framework to generate user arrivals based on a real data set.
We develop a deep learning based algorithm that decides, at each time, which sleep mode
level should be chosen. The objective of the proposed algorithm is to maximize the reward
function, i.e., a linear combination of the normalized energy saving and delay rewards.
Simulation results show that considerable energy saving can be achieved using the
proposed algorithm. The results show that during busy hours, or when delay is prioritized
in the reward function, the algorithm tends to choose shallower SMs, i.e., SM1, which
reduces the energy saving. On the other hand, during low-traffic hours, or when energy
saving is prioritized, deeper SMs, i.e., SM3, are chosen more frequently, resulting in
more energy saving and more incurred delay to the users. Therefore, compared to optimal
BS sleeping, the proposed solution achieves near-optimal results at low-traffic hours,
while for busy hours it achieves less, yet considerable, energy saving. Compared to
tabular methods, e.g., SARSA, the proposed solution has better energy saving performance
for a similar reward function (similar priority for energy saving and incurred delay).
This means that, compared to the tabular method, the proposed solution better learns the
dynamics of the network, e.g., here the user arrivals.
Chapter 3: AI Assisted Network Architecture Design and Management
Trends like softwarization and centralization have attracted wide attention from both the
research community and standardisation bodies. In particular, centralization in the radio
access network (RAN) has been discussed in the context of Cloud RAN (C-RAN), where the
baseband processing unit (BBU), also known as the digital unit (DU), is decoupled from
the radio unit (RU) and pooled in a central cloud. In this chapter, we investigate C-RAN
based network architectures that can enable 5G to provide multitudes of services to the
users. We first provide the end-to-end delay and power consumption model of the network.
Then, we focus on two aspects of the network: first, we investigate the cost of migrating
to the new architectures; next, aiming to minimize the power consumption of the network,
we investigate two methods of improving the QoS provided to the users. In the first
method, we try to reduce the delay of delivering content to the users, and in the second
method we tailor the end-to-end network resources for each service with heterogeneous QoS
requirements. In the following, we start by explaining the investigated architecture and
its delay and power consumption.
ter cloud via a high-speed, low-latency transport link. Therefore, the transport link is
a vital component of the network architecture. We call this transport link the X-haul,
which can be fronthaul or midhaul. This link uses CPRI or eCPRI to transport data to the
cloud and can be implemented with different technologies such as passive optical networks
(PONs), Ethernet networks, or time-sensitive networks. Each of these technologies has its
own pros and cons, which make it a better solution in a specific situation.
• Required bandwidth can scale according to the user plane traffic
• Ethernet can carry eCPRI traffic and other traffic simultaneously, in the same switched
network
• A single Ethernet network can simultaneously carry eCPRI traffic from several sys-
tem vendors.
where D_{u,r}^{M,tx} is the transmission delay in the midhaul/fronthaul, D_{u,r}^{R,tx}
is the transmission delay in the RAN, D_{u,r}^{pr} is the processing delay, D^{p} is the
propagation delay in the midhaul/fronthaul, which depends on the distance and is a fixed
value, N_sw is the number of switches between the CC and the radio site, D_k^{q} is the
queuing delay, which is a function of the load on switch k, D^{f} is the fabric delay,
which is constant and is defined as the delay caused by the Ethernet switch circuits,
with a typical value of 5-10 µs [113], and D_k^{sw} is the switching delay due to the
load that user u contributes on switch k.
Here, we focus on the eCPRI protocol and the Ethernet-based architecture; more
information on the other architectures can be found in [5, 7]. In the eCPRI protocol,
depending on the splitting point, a group of packets is transmitted in each burst period.
To calculate the transmission delay for each user, we need to calculate how many periods
are needed to transfer the requested content of the user, as follows:
Parameter               Definition
U, R                    Sets of users and RUs, respectively
R_r^{IID}               The midhaul data rate for RU r
D_{u,r}^{tot}           Total delay of user u in U_r
D_{u,r}^{M,tx}          Transmission delay of user u in U_r in the midhaul/fronthaul
D_{u,r}^{R,tx}          Transmission delay of user u in U_r in the RAN
D_{u,r}^{pr}            Total processing delay of user u in U_r
D_{r,k}^{sw}            Switching delay in RU r on switch k
d_{x,u,r}^{pr}          Processing delay of user u in U_r at x ∈ {CC, BS}
C_{u,r}^{x}             Total required Gops for user u in U_r at x ∈ {CC, BS}
C_{x,r}^{pr}            Total allocated Gops of RU r at x ∈ {CC, BS}
E^{CC}, E^{MF}, E_r^{BS}   Energy consumption of the CC, midhaul/fronthaul, and BS/RU r
P_{cool}^{CC}           Cooling power consumption at the CC
P_{DU,min}^{CC}         Idle power consumption of a DU at the CC
P_{proc}^{CC}           Processing power consumption at the CC
P_{r,k}^{sw}            Power consumption of switch k in RU r
P_{static}^{BS}         Static power consumption of a BS
P_r^{DU,min}            Minimum processing power consumption of a DU in BS r
P_r^{DU,max}            Maximum processing power consumption of a DU in BS r
P_r^{tx}                Downlink transmission power for a single user in U_r
ν_r = C_{CC,r}^{pr}     Allocated processing resources in RU r
    D_{u,r}^{M,tx} = ⌈ (L_{u,r} R_{u,r}^{IID}) / (L_packet δ_sc N_{u,r}^{sc} n_mod) ⌉ · (L_packet / R_L),    (3.2)

where L_{u,r} is the file size of user u connected to RU r, L_packet is the packet size,
δ_sc N_{u,r}^{sc} n_mod is the respective downlink data rate, with n_mod being the
modulation index, e.g., n_mod = 4 for 16-QAM, and δ_sc the subcarrier spacing (in Hz),
which depends on the 5G numerology given in Table 3.2. R_{u,r}^{IID} is the split-point
data rate, and R_L is the line rate, which depends on the interface medium, e.g., fiber
links.
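A numeric sketch of Equation (3.2): the ceiling term counts the burst periods needed for the user's content, and each period adds one packet serialization time on the line. All parameter values below are illustrative placeholders, not taken from the thesis:

```python
import math

# Sketch of the midhaul/fronthaul transmission delay of Eq. (3.2).
# All numeric values are illustrative.

def midhaul_tx_delay(L_ur, R_iid, L_packet, delta_sc, N_sc, n_mod, R_L):
    periods = math.ceil((L_ur * R_iid) / (L_packet * delta_sc * N_sc * n_mod))
    return periods * L_packet / R_L  # seconds

d = midhaul_tx_delay(
    L_ur=8e6,        # 1 MB content, in bits
    R_iid=1e9,       # split-point data rate: 1 Gb/s
    L_packet=12_000, # 1500-byte packet, in bits
    delta_sc=30e3,   # 30 kHz subcarrier spacing
    N_sc=3300,       # number of subcarriers
    n_mod=4,         # modulation index for 16-QAM
    R_L=10e9,        # 10 Gb/s line rate
)
# d is roughly 2 ms for these numbers
```
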
The transmission delay in the RAN, D_{u,r}^{R,tx}, is calculated as

    D_{u,r}^{R,tx} = N_{u,r}^{slot} T_slot,    (3.3)

where T_slot is the 5G time slot duration given in Table 3.2 and N_{u,r}^{slot} is the
required number
Table 3.2: 5G frame structure [11].
where Z_x includes the processing functions at x ∈ {CC, BS}. Utilizing the information
about the required amount of processing resources for each communication function, PF_i
is calculated as follows:

    PF_i = C_{i,ref} ∏_{x ∈ X} (x_act / x_ref)^{s_{i,x}},    (3.7)

where x_act and x_ref are the system input parameters/resources under the actual and the
reference scenario, respectively, and X is the set of all possible tuning parameters. The
exponent s_{i,x} captures the impact of changing the input parameter on the required Gops
of the communication function sub-component.
The information to calculate the required Gops is summarized in [8, 114]. All the
necessary digital sub-components that contribute to the overall processing delay and their
corresponding Gops are summarized in [114].
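The reference-scaling model of Equation (3.7) can be sketched as a product of (actual/reference) ratios raised to per-function exponents. The sub-component, parameter names, and exponent values below are hypothetical, chosen only to illustrate the mechanics:

```python
# Sketch of the Gops scaling model of Eq. (3.7): a reference complexity
# scaled by (x_act / x_ref)^{s_{i,x}} over all tuning parameters x.
# The example parameters and exponents are hypothetical.

def function_gops(C_ref, actual, reference, exponents):
    gops = C_ref
    for x, s in exponents.items():
        gops *= (actual[x] / reference[x]) ** s
    return gops

ref = {"bandwidth_MHz": 20, "antennas": 1}  # reference scenario
act = {"bandwidth_MHz": 40, "antennas": 2}  # actual scenario
g = function_gops(C_ref=80.0, actual=act, reference=ref,
                  exponents={"bandwidth_MHz": 1, "antennas": 1})
# g = 80 * (40/20)^1 * (2/1)^1 = 320.0
```
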
The switching delay of switch k is formulated as

    D_k^{sw} = L_k / R_sw,    k = 1, ..., N_sw,    (3.8)

where L_k is the load on switch k and R_sw is the switch rate.
where P_CC, P_MF, and P_r^{BS} are the power consumptions of the CC, the
midhaul/fronthaul, and each RU r, respectively.
    P_DU^{CC} = N_DU^{CC} P_{DU,min}^{CC} + P_{proc}^{CC},    (3.11)

where P_{DU,min}^{CC} is the fixed power consumption of a single DU when there is no
processing and P_{proc}^{CC} is the processing power consumption of the CC.
    P_MF = Σ_{k=1}^{N_sw} P_k^{sw},    (3.12)

with
based on a genetic algorithm (GA) to solve the problem. Although the GA does not
guarantee optimality, it is scalable; we also investigate the optimality gap of this
algorithm. Finally, the impact of the proposed frameworks on the performance of JT, as
one of the CoMP techniques, is investigated. The main contributions of this study are as
follows:
• Developing a GA-based scalable heuristic algorithm to solve the ILP problem in [33],
and investigating the optimality gap between the scalable solution and the optimal
solution.
Figure 3.4: An encoder example for 3 RUs, 3 pools, and 5 DUs per pool. RUs 1 and 2 are
connected to DUs 1 and 2 in Pool 1, respectively; Pool 2 serves RU 3, and Pool 3 is
inactive.
RUs, and pools. It is worth mentioning that some constraints, such as the distance
constraint, are considered when the individuals are selected; therefore, the calculated
number of active splitters is based on feasible solutions, and hence we do not need to
consider all of them in the individual formation phase.
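The encoding of Figure 3.4 can be sketched as a chromosome that maps each RU to a (pool, DU) pair. Fitness evaluation, crossover, and constraint handling are omitted, and all names and sizes here are our own illustration rather than the thesis implementation:

```python
import random

# Sketch of a GA individual for RU-to-pool/DU assignment in the spirit
# of Figure 3.4: gene r is the (pool, DU) pair serving RU r. Pools with
# no attached RU are inactive and would incur no build-out cost.

N_RU, N_POOL, DU_PER_POOL = 3, 3, 5

def random_individual(rng):
    return [(rng.randrange(N_POOL), rng.randrange(DU_PER_POOL))
            for _ in range(N_RU)]

def active_pools(individual):
    """Set of pools with at least one RU attached."""
    return {pool for pool, _du in individual}

rng = random.Random(42)
ind = random_individual(rng)
```

In a full GA, only individuals satisfying the feasibility constraints (e.g., the distance constraint mentioned above) would be admitted to the population before fitness evaluation.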
[Figure 3.5: TCO breakdown (pool build-out, equipment, fiber, OpEx) for brownfield and
greenfield C-RAN with and without FS, microwave D-RAN, and optical D-RAN.]
Figure 3.6: TCO of C-RAN and D-RAN over the years, with TCO minimization, for
greenfield pool deployment, with and without FS [5].
Fig. 3.6 shows the impact of functional splitting and the availability of fiber
infrastructure on the operators' TCO. When more fiber is available, the crossover points
(the years at which the cost of D-RAN exceeds that of C-RAN) come earlier, meaning that
the costs of C-RAN drop below those of D-RAN in a shorter term. Moreover, the TCO of
C-RAN with functional splitting is lower than that of conventional C-RAN, because C-RAN
with functional splitting has much lower OpEx while the CapEx is similar.
Fig. 3.5 depicts the breakdown of the TCO for greenfield/brownfield C-RAN with/without functional splitting and for optical/microwave D-RAN. The initial investment of greenfield fiber deployment is much higher than that of brownfield deployment, but it gives more flexibility to design an optimal fronthaul with lower OpEx. By enabling functional splitting, the TCO is decreased since the OpEx and the fiber leasing cost are reduced. Compared to D-RAN, both greenfield and brownfield fiber deployments require more initial investment. However, this cost is paid to decrease the OpEx. The reduction in OpEx means a reduction in annual cost, and hence after a certain crossover time (about 4 years in the case of brownfield fiber deployment), C-RAN becomes an economically viable solution.
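The crossover reasoning can be sketched numerically. Under the simplification of a constant annual OpEx, the crossover year is the first year in which the cumulative TCO of the high-CapEx/low-OpEx option falls below that of the alternative. All numbers below are illustrative placeholders, not cost data from the thesis.

```python
def crossover_year(capex_a, opex_a, capex_b, opex_b, horizon=20):
    """Return the first year in which the cumulative TCO of option A
    (high CapEx, low OpEx) falls below that of option B, or None if it
    never does within the horizon."""
    for year in range(1, horizon + 1):
        tco_a = capex_a + opex_a * year  # cumulative TCO of option A
        tco_b = capex_b + opex_b * year  # cumulative TCO of option B
        if tco_a < tco_b:
            return year
    return None

# Illustrative numbers: a C-RAN-like option with 3x the initial investment
# of D-RAN but 40% of its annual OpEx crosses over after 4 years.
print(crossover_year(capex_a=3.0, opex_a=0.4, capex_b=1.0, opex_b=1.0))  # 4
```

With more expensive fiber (higher `capex_a`), the crossover point moves later, which mirrors the fiber-rich versus fiber-short comparison above.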
3.2.5 Conclusion
In this part, we investigate the deployment cost of different C-RAN architectures. We propose an ILP optimization problem to minimize the TCO of the C-RAN and an AI-based algorithm to solve it. We show that fiber-rich operators should migrate to conventional C-RAN, while for fiber-short operators migrating to C-RAN with functional splitting is a more reasonable choice. We also show that C-RAN with functional splitting has a lower TCO and a shorter crossover time compared to conventional C-RAN. Furthermore, in terms of upgradability, C-RAN is more cost efficient compared to D-RAN.
consumed. Moreover, the capacity of the edge cloud is limited; therefore, beyond a certain point, no more users can be served and a saturation point is observed in the power consumption. When the delay requirement is less stringent, there is more room for centralizing functions and hence more energy is saved.
In Fig. 3.8, we depict the average delay experienced by the users. Fetching the content from the CC incurs more delay than the other cases due to the longer distance between the CC and the users. FSCP-80-20 has up to 20% lower delay compared to FSCP due to content sharing. Comparing this trend with that of Fig. 3.7, one can see the trade-off between the experienced delay and the total power consumption.
3.3.4 Conclusion
In this subsection, we show that the proposed content placement algorithm can reduce the content delivery delay to the users. At the same time, we can save energy in the network by turning off unused equipment such as digital units and line cards. The simulation results show that the popular contents for delay-stringent services can be cached at the EC, leaving the rest in the center cloud, for better energy and QoS performance.
CHAPTER 3. AI ASSISTED NETWORK ARCHITECTURE DESIGN AND
56 MANAGEMENT
Figure 3.8: Average delay of the users vs percentage of active users [7].
3.4 Energy Efficient End-to-End Network Slicing
5G and beyond networks will support a wide range of services with diverse and heterogeneous service requirements. The service demands are evolving, and the requirements are very diverse and heterogeneous. Therefore, one architecture does not fit all applications and services. The current network architecture utilizes a relatively monolithic network and transport framework to accommodate these emerging services. It is anticipated that the current architecture is not flexible and scalable enough to efficiently support a wide range of services, each with its own specific set of quality-of-service requirements [119]. Network slicing is a promising technology for realizing this vision. With network slicing, the network is partitioned into multiple dedicated virtual networks tailored and customized for specific services. Network slicing makes it possible to create fit-for-purpose virtual networks with varying degrees of freedom for each service [120]. It enables service-oriented resource allocation by tailoring the network resources, e.g., bandwidth and processing, to specific services [121]. Hence, each service can have its own customized network. In network slicing, resources are reserved in advance for each service; therefore, the resource utilization may not be energy efficient. In fact, with network slicing the energy consumption of the network will increase, and this is the cost the network must pay to support a wide range of heterogeneous services.
In the literature on network slicing, the main focus is to provide and guarantee the required QoS to the users. However, the energy consumption of such approaches is not well investigated. Moreover, most studies focus on RAN slicing, where only RAN resources are dedicated and reserved for a slice. In terms of energy consumption, one may increase the bandwidth in the RAN to reduce the energy consumption of the transmission phase. However, this does not necessarily reduce the total energy consumption of the network, because more bandwidth entails a higher data rate, more data transport in the mid/fronthaul, and more processing at the cloud side. Therefore, the energy consumption should be evaluated from an end-to-end perspective. In the network slicing literature, the problem of end-to-end network modeling is not well investigated. In this study, we propose an end-to-end delay and energy model for end-to-end network slicing in an Ethernet-based C-RAN architecture and formulate an optimization problem to minimize the network's energy consumption by allocating orthogonal resources to each slice. We guarantee that the slice QoS is satisfied by imposing constraints on the problem. The main contributions of this study are summarized as follows:
ID defined in the eCPRI standardization [122], for each slice. This split point relaxes the fronthaul capacity requirements while still allowing us to benefit from PHY-layer techniques such as joint transmission/reception coordinated multipoint (CoMP) [66].
Slice Energy Model: Each slice is composed of a set of RUs, switches, assigned subcarriers, and DUs. The energy consumption per slice is given by

$$E_s^{\mathrm{slice}} = \frac{\bigl(P^{\mathrm{CC}}_{\mathrm{cool}} + N^{\mathrm{CC}}_{\mathrm{DU}}\,P^{\mathrm{CC}}_{\mathrm{DU,min}}\bigr)\,T}{\zeta_{\mathrm{CC}}\,|S|} + \frac{P^{\mathrm{CC}}_{\mathrm{proc}}}{\zeta_{\mathrm{CC}}} \sum_{r\in R}\sum_{u\in U_{s,r}} d^{\mathrm{pr}}_{\mathrm{CC},u,s,r} + \sum_{k=1}^{N_{\mathrm{sw}}} P^{\mathrm{sw}}_{s,k}\,T + \frac{1}{\zeta_{\mathrm{BS}}} \sum_{r\in R} \Bigl(\frac{P^{\mathrm{BS}}_{\mathrm{static}}\,T}{|S|} + P^{\mathrm{DU}}_{s,r}\,T\Bigr) + \frac{1}{\zeta_{\mathrm{BS}}} \sum_{r\in R}\sum_{u\in U_{s,r}} N^{\mathrm{sc}}_{u,s,r}\,P^{\mathrm{tx}}_{s,r}\,D^{R,\mathrm{tx}}_{u,s,r}, \qquad (3.17)$$
where the static power is distributed equally among the slices. It is worth mentioning that other ways of sharing the constant power consumption among slices or service categories are possible. Equal sharing assigns an equal share of the power to each slice; however, it introduces an unaffordable cost burden to slices with small loads, which might nevertheless be essential (such as voice) or newly introduced. Another approach is proportional sharing, in which the constant power consumption is shared in proportion to the requested data rate of each slice/service. It requires quantifying the amount of traffic produced by each service category and then sharing the power consumption accordingly. In this case, we penalize slices or services with high loads, which are major driving forces for network operation and revenue. Yet another approach is to use the Shapley value, which yields a smaller share for large service categories than proportional sharing and a smaller share for small service categories than equal sharing [4].
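The first two sharing schemes can be contrasted with a small sketch. The function name and the load figures are illustrative assumptions; the Shapley variant is omitted here since it requires the full coalition model discussed in [4].

```python
def static_power_shares(P_static, loads, scheme="equal"):
    """Split a constant power term (e.g., cooling or static BS power)
    among slices. `loads` holds the requested data rate or traffic of
    each slice; values below are illustrative only."""
    n = len(loads)
    if scheme == "equal":
        return [P_static / n] * n
    if scheme == "proportional":
        total = sum(loads)
        return [P_static * load / total for load in loads]
    raise ValueError(f"unknown scheme: {scheme}")

# Illustrative loads (arbitrary units): a heavy eMBB slice and a tiny voice slice.
loads = [80.0, 15.0, 5.0]
print(static_power_shares(100.0, loads, "equal"))         # each slice gets one third
print(static_power_shares(100.0, loads, "proportional"))  # [80.0, 15.0, 5.0]
```

Under equal sharing the 5-unit slice carries the same static power as the 80-unit one, which illustrates the cost burden on small but essential slices described above.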
The first problem is joint computation and communication resource allocation for network slicing. The slice manager solves the optimization problem (which is explained in Paper 6). At each time instance, we solve the following problem and create the service slices.
Given: the number of users at each time with their demanded QoS, the requested services, and the architecture and its parameters such as line rates, processing capacities, bandwidth, switch rate, etc.
Objective: minimize the energy consumption of the end-to-end network.
Constraints: the experienced delay of each user, the used bandwidth, the used processing capacity, and the used rate in the fronthaul.
Optimization variables: the bandwidth for each service in each cell, and the processing resources for each service in the center cloud.
Once the problem is solved, we can calculate the number of subcarriers for the allocated bandwidth. Assuming that the required Gops at the CC is a linear function of the bandwidth, we can express the required processing as a function of the bandwidth. We then express the number of active DUs as a function of the processing capacity of a single DU at the CC and the required processing resources.
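This mapping can be sketched as follows. The linear coefficients and the DU capacity are illustrative placeholders, not the values used in the thesis.

```python
import math

def required_dus(bandwidth_mhz, gops_per_mhz, gops_offset, du_capacity_gops):
    """Number of active DUs at the CC, assuming (as in the text) that the
    required Gops grow linearly with the allocated bandwidth. Each DU
    contributes a fixed processing capacity, so the count is a ceiling."""
    required_gops = gops_per_mhz * bandwidth_mhz + gops_offset
    return math.ceil(required_gops / du_capacity_gops)

# Illustrative: 40 MHz allocated, 2.5 Gops/MHz slope, 10 Gops baseline,
# 50 Gops per DU -> ceil(110 / 50) = 3 active DUs.
print(required_dus(40, gops_per_mhz=2.5, gops_offset=10, du_capacity_gops=50))  # 3
```

The ceiling is what couples bandwidth allocation to cloud energy: a small increase in bandwidth can switch on a whole additional DU.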
[Flowchart with blocks: Slice Requirements feed the Slice Manager, which reserves Processing Resources; QoS Monitoring feeds a "Reconfigure Slice?" decision (yes/no).]
Figure 3.12: Slicing life cycle management with QoS guarantee [8].
If satisfying the users' QoS is not feasible, then slice reconfiguration may be required, as explained in the following subsection.
When resources are reserved for each slice, the slice manager needs to know whether the provided resources are enough to satisfy the users' requirements. Therefore, the network should continuously monitor the QoS to make sure that the users' QoS is satisfied. In Fig. 3.12, we show how the slice manager interacts with the network. The slice manager gets the information on all slice requirements and their corresponding load predictions, accordingly reserves the communication and computation resources for each slice, and casts this information to each RU. At each RU and for each slice, the assigned resources are allocated to the users. The network examines the users' experienced QoS. When the network status changes, e.g., there are more users than expected, the resources provided to the slice may not be enough to meet the QoS requirements, and the slice resources should be reconfigured. If a BS cannot serve its users with their required QoS, it submits a slice reconfiguration request to the slice manager, raising a flag that slice reconfiguration is required. The slice manager reconfigures the allocated resources per slice based on the new predictions and updates the RUs with the newly assigned resources so that they can serve the users with their required QoS within each slice.
On the other hand, if the resources are not utilized during the slice window, the RU can likewise submit a slice reconfiguration request to release the assigned resources, in favor of providing more resources to other services, improving resource utilization, and decreasing the energy wasted on keeping network resources idle. With this approach, we can dynamically manage the life cycle of each slice.
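The resulting life-cycle decision logic can be summarized in a small sketch. The function name and the thresholds are illustrative assumptions, not part of the framework's specification.

```python
def reconfigure_needed(measured_delay_ms, delay_budget_ms, utilization,
                       util_floor=0.2):
    """Decision step of the slice life-cycle loop of Fig. 3.12: request
    reconfiguration when QoS is violated (scale up) or when the reserved
    resources sit idle over the slice window (scale down)."""
    if measured_delay_ms > delay_budget_ms:
        return "scale_up"    # QoS violated: BS raises the reconfiguration flag
    if utilization < util_floor:
        return "scale_down"  # resources idle: release them to other services
    return "keep"

print(reconfigure_needed(12.0, 10.0, 0.6))   # scale_up
print(reconfigure_needed(6.0, 10.0, 0.05))   # scale_down
print(reconfigure_needed(6.0, 10.0, 0.6))    # keep
```

Both branches feed the same slice-manager reallocation step; only the direction of the resource change differs.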
Figure 3.13: Hourly profile of expected energy consumption and number of users for
eMBB delay sensitive and eMBB delay tolerant slices [8].
requirements determine the total required processing at the CC. For instance, in this study, since the number of massive MTC devices is higher than in the other slices, the massive MTC slice requires more processing resources in total. Note, however, that each individual massive MTC device requires much less processing, as depicted in Fig. 3.14a. On the other hand, critical MTC devices, despite their lower load, require more processing resources than the eMBB slices, because the critical MTC slice has the most stringent delay requirement and hence more processing resources must be assigned to it so that the QoS is satisfied. It is worth mentioning that the file size of each user may differ in UBRA, and thus the required processing resources may differ from SBRA. Therefore, compared to SBRA, fewer processing resources may be assigned to the users of one slice in UBRA and more processing resources to other slices.
For UBRA, the joint communication and computation resource allocation problem is more complex than the proposed end-to-end slicing with SBRA. The reason is that the number of optimization variables, as well as the number of users' QoS constraints, scales linearly with the number of users, while for SBRA the number of constraints is in the order of the number of slices. Therefore, it takes more time to solve the UBRA problem, as depicted in Fig. 3.15.
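The scaling argument can be made concrete with a toy count. This is illustrative only; the real problems also carry bandwidth, fronthaul-rate, and processing-capacity constraints on top of the QoS constraints counted here.

```python
def qos_constraint_count(n_slices, users_per_slice, scheme):
    """Rough count of QoS delay constraints: per the text, SBRA scales
    with the number of slices while UBRA scales with the number of users."""
    if scheme == "SBRA":
        return n_slices
    if scheme == "UBRA":
        return n_slices * users_per_slice
    raise ValueError(f"unknown scheme: {scheme}")

print(qos_constraint_count(4, 100, "SBRA"))  # 4
print(qos_constraint_count(4, 100, "UBRA"))  # 400
```

With four slices of a hundred users each, UBRA already carries two orders of magnitude more QoS constraints, which is why its solution time grows with the user count in Fig. 3.15.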
3.4.5 Conclusion
In this study, we investigated the problem of joint communication and computation resource allocation to create an end-to-end network slice in a C-RAN based architecture.
(a) Allocated processing resources per user. (b) Allocated processing resources per slice. (c) Proportion of traffic demands per slice.
Figure 3.14: Allocated processing resources to each slice and user [8].
Figure 3.15: Time complexity performance comparison of SBRA and UBRA [8].
3.5 Summary
In this section, we investigate three different aspects of mobile network architecture design. We propose an end-to-end delay and power (energy) model for the C-RAN based architecture. Then we evaluate the migration cost from D-RAN to C-RAN. We show that it is more cost-efficient for fiber-rich network operators to migrate to the C-RAN architecture, because they can use their existing fiber infrastructure as the front/mid haul. In this case, the crossover time is lower than that of fiber-short operators. Fiber-short operators should benefit from functional splitting so that they require less capacity in the front/mid haul. After examining the network migration cost, we investigate two different methods of improving the users' QoS. In the first method, we improve the delay performance of delivering files to the users by caching them at the edge cloud, closer to the user. For this study, we minimize the power consumption of the network by finding the best location for each file and the best functional split point for the users while satisfying their required delay performance. In the last study, the aim is to serve multiple services with heterogeneous QoS requirements. For this purpose, we create multiple service-based customized networks on top of the same network infrastructure. Each customized network is known as a slice, and the method is called network slicing. In this method, we assign end-to-end resources to each slice so that they can serve their users with the required QoS. The results show that end-to-end network slicing can save more energy compared to RAN-only slicing.
Chapter 4
Conclusions and Contributions
4.1 Conclusions
Mobile networks are a major energy consumer, accounting for 1-2% of the global energy consumption. As we move ahead into 5G and beyond, the energy consumption will increase, considering the new demands on service providers to increase network capacity, extend geographical coverage, and deploy advanced technology use cases. In order to assure the sustainability of mobile networks, it is of great importance to improve the network energy efficiency. In this thesis, we investigated potential techniques to improve the energy efficiency of 5G and beyond networks. We divide the contributions of the thesis into two parts: 1) AI assisted green mobile networks, where we reduce the energy consumption of the BSs in the network, and 2) AI assisted network architecture design and management, where we investigate end-to-end network design and resource allocation to improve the energy efficiency of the networks.
The first part of the thesis focuses on reducing the BSs' energy consumption. BSs are among the most energy consuming components of the network. Hence, we can reduce the energy consumption by putting a BS to sleep when it is idle. The question then arises of when and how deep a BS should sleep so that the energy consumption is reduced without an adverse impact on the users' QoS. To tackle these issues, we defined RQ1-2 as:
• RQ1: How to dynamically put BSs into sleep modes when they are idle, i.e., when and how deep to sleep, given the previous user arrivals and network load?
• RQ2: Depending on how deep a BS sleeps, how can we design a network management framework that continuously analyzes and monitors the risk while using ML to make decisions?
To handle the issues in RQ1-2, we devised a novel framework to define the risk linked to the intention of the mobile operator and translated this intention into network KPIs. We created a novel network management framework with a digital twin that continuously analyzes the probability of risk and orchestrates the usage of ML. We propose an ML-based technique to save energy at the BS, utilizing advanced sleep modes, while maintaining the users' QoS. We combined analytical model-based approaches with ML. Model-based approaches are used for risk analysis, together with the operator traffic data, to create a digital twin which can mimic the behavior of a real BS with advanced sleep modes. The digital twin is used to assess the risk and to continuously monitor the performance by controlling the usage of ML to make sleep mode decisions. Our study shows that there is a potential to reduce base stations' energy consumption by more than 50% when the network operates at 10% average load. At around 20-30% average load, BSs can be asleep for 70-80% of the time, leading to an energy consumption gain of more than 20%. We show that the proposed method achieves near-optimal results in low-traffic hours and sub-optimal energy saving in busy hours.
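The "when and how deep to sleep" trade-off can be illustrated with a toy decision rule: deeper sleep modes draw less power but need longer wake-up times, which add delay. The power levels and wake-up times below are placeholders in the spirit of multi-level BS sleep mode models such as [10], not values or logic from the thesis.

```python
# depth -> (relative power draw, wake-up time in ms); illustrative placeholders.
SLEEP_MODES = {0: (1.00, 0.0), 1: (0.50, 0.071), 2: (0.15, 1.0), 3: (0.05, 10.0)}

def best_mode(idle_ms, max_added_delay_ms):
    """Deepest (lowest-power) sleep mode whose wake-up time both fits in the
    expected idle period and respects the users' delay budget."""
    feasible = [(power, depth) for depth, (power, wake) in SLEEP_MODES.items()
                if wake <= idle_ms and wake <= max_added_delay_ms]
    return min(feasible)[1]  # pick the lowest power among feasible depths

print(best_mode(idle_ms=5.0, max_added_delay_ms=2.0))    # 2: depth 3 wakes too slowly
print(best_mode(idle_ms=50.0, max_added_delay_ms=20.0))  # 3: deepest mode is feasible
```

The ML-based techniques in the thesis effectively learn this decision from traffic history instead of assuming the idle period is known, but the underlying tension between sleep depth and QoS is the same.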
Today, large telco players have reached a consensus on an open RAN architecture based on the hybrid C-RAN studied in this thesis. We studied and modeled the end-to-end energy consumption and delay of a cloud-native network architecture based on virtualized cloud RAN, forming the foundations of open RAN. Both the network architecture design and the utilization of network energy saving features play key roles in improving the energy efficiency of the network while supporting a multitude of heterogeneous services.
Current network architectures cannot meet the requirements of 5G and beyond systems. Therefore, there is a need to devise new energy efficient architectures and migrate to them. However, migration to new architectures is costly and imposes new constraints and challenges to be tackled. To address these issues, we define the following research questions:
• RQ3: What is the deployment cost of different C-RAN architectures? How sensitive is an architecture to price changes?
• RQ4: How can artificial intelligence help us design a C-RAN fronthaul with minimum cost and energy consumption?
In order to answer RQ3-4, we model the migration cost, in terms of both OPEX and CAPEX, from a conventional distributed RAN architecture to a C-RAN architecture, with an economic viability analysis of a virtualized cloud-native architecture considering the future traffic forecast. It is known that C-RAN is more energy efficient than D-RAN due to the cooling efficiency and processing multiplexing gains in the centralized cloud; however, it is not clear under what conditions it also becomes cost efficient considering the infrastructure cost of the fronthaul and fiber links. We solved the optimal fronthaul design problem in terms of costs and energy consumption in C-RAN networks. For this study, we formulate the problem as an ILP and propose an AI algorithm for the optimal fronthaul design. We show that fiber-rich operators should migrate to conventional C-RAN, while for fiber-short operators migrating to C-RAN with functional splitting is a more reasonable choice.
• RQ5: Is there a way to still save energy while meeting the delay requirements of users by placing content at the edge cloud instead of the centralized cloud?
4.2. DISCUSSIONS AND FUTURE WORK 69
• RQ6: How to allocate the network resources in the fronthaul, edge cloud, and centralized cloud to minimize the energy consumption while meeting the users' QoS?
To address RQ5-6, in a multi-layer hybrid C-RAN architecture, we solved the problem of joint content placement and functional split optimization by minimizing the network energy consumption while meeting the QoS requirements of mobile users. The challenge here is the trade-off between energy consumption and delay optimization. Energy efficiency is maximized when all network functions are centralized; to optimize the delay, however, content needs to be placed at the edge cloud, requiring network function processing at the edge. In this study, it is demonstrated that utilizing content caching at the edge cloud together with functional splitting is beneficial in terms of content access delay, but at the cost of power consumption. Joint optimization of functional splitting and content caching allows us to find a compromise between delay and power consumption.
• RQ7: How to allocate computation and communication resources to each slice to satisfy the QoS and minimize the energy consumption?
• RQ8: How to design a network management framework for slice admission that continuously maintains the performance of dynamic slicing in terms of energy saving?
Finally, to address RQ7-8, we investigate the problem of end-to-end network slicing by joint allocation of communication and computation resources in C-RAN networks. We investigate the trade-off between communication and computation resources. In this study, end-to-end network slicing is optimized for the first time considering the energy consumption in addition to the QoS guarantees. Most network slicing studies consider only radio access network resources, and the goal of slicing is to guarantee the performance for specific services, which in turn increases the energy consumption of the network. Intuitively, energy consumption goes down when more bandwidth is allocated to users if only the RAN segment of the network is considered. Thanks to our end-to-end energy consumption model, it is demonstrated that this is not always true, since increasing the bandwidth allocation also increases the processing energy consumption in the cloud and fronthaul segments of the network.
To sum up, in the first part of this thesis, the obstacle of the practical usage and risk of AI for managing mobile networks is treated in the context of advanced sleep mode management. A new risk-aware network management framework combining model-based approaches with ML is proposed, considering the intent of the mobile operator as an input. The concept of the digital twin is formalized under this framework. In the second part of the thesis, cloud-native network architectures based on C-RAN are studied by developing end-to-end cost, energy consumption, and delay models and jointly optimizing performance together with energy consumption, laying the foundation for the future open RAN architecture.
obtain mobile network data. Utilizing the available data can significantly help mobile operators and vendors to design, optimize, and maintain their networks. Moreover, thanks to advances in artificial intelligence and in processing technologies that can handle processing-intensive tasks, it is now possible to benefit from AI techniques to assist network management and improve the performance metrics. In this thesis, we cover many aspects of improving the energy efficiency of mobile networks. There are nevertheless some limitations, and this thesis can be improved and extended in several directions.
Intelligent Energy Saving at Base Stations
Various energy saving features are already implemented in base stations. However, due to the trade-off between the provided QoS and the energy saving, mobile operators are very conservative about activating energy saving features. In the studies in [6, 49, 63], we consider activating BS sleep modes to save energy while minimizing the adverse impact on the users' QoS. However, in [6, 63] a single BS is considered, and the problem of multiple BSs with distributed learning is still open. Moreover, in these studies the impact of interference is neglected. In fact, with interference from the neighboring cells, the time required to serve users increases, and hence less time is left for the BS to sleep. The impact of interference on the decision can be investigated as one extension of these works. In [49], we adapt the BS's configuration to increase the potential sleeping time of the BS considering the neighboring interference; however, the decisions on the sleep modes are based on average values and could be adapted dynamically to enhance the energy saving performance. In all the above studies, the underlying assumption is that the algorithms are trained over a generic BS. However, depending on the hour of the day and the geographical location of the BS, the solutions of the proposed algorithms may differ. Hence, it is important to incorporate spatio-temporal data into the analysis and provide solutions depending on the time and position of the BS. Since BSs serve users who use different services, and services have heterogeneous QoS demands, some delay-stringent services cannot tolerate the delay imposed by BS sleeping. Hence, heterogeneous service requirements, as well as the allocation of resource blocks in the time-frequency grid, have an impact on the decisions and can be considered as a research direction to improve the performance of our studies.
Risk Aware Network Management
It is crucial to make sure that the algorithms implemented in the network do not cause any unexpected performance degradation. This issue becomes even more critical when AI-based algorithms are utilized, because there is always a risk associated with them. In this thesis, we addressed the importance of risk and performance degradation analysis in the network in [6, 8]. However, this can be further analyzed for any AI-based algorithm to be utilized in the network. For instance, in the content placement problem [7], the content delivery time to the user is a key performance metric, and the impact of any AI algorithm on this metric should be quantified and analyzed. A similar analysis is required for any AI assisted energy saving algorithm, to avoid the risk of performance degradation in the network. Therefore, one research direction is to extend the analysis of energy saving methods by adding another control layer to manage and monitor the risk.
[4] M. Masoudi et al., "Green mobile networks for 5G and beyond," IEEE Access,
vol. 7, pp. 107270–107299, 2019.
[6] M. Masoudi, E. Soroush, J. Zander, and C. Cavdar, “Digital twin assisted risk-aware
sleep mode management using deep Q-networks,” submitted to IEEE Transactions
on Vehicular Technology, 2022.
[9] imec, GreenTouch Project, “Power model for wireless base stations.” [Online].
Available: https://www.imec-int.com/en/powermodel
[10] B. Debaillie, C. Desset, and F. Louagie, “A flexible and future-proof power model
for cellular base stations,” in Proc. IEEE VTC-Spring’15, May 2015, pp. 1–7.
74 BIBLIOGRAPHY
[14] M. Masoudi, B. Khamidehi, and C. Cavdar, “Green cloud computing for multi
cell networks,” in Wireless Communications and Networking Conference (WCNC).
IEEE, 2017.
[16] M. Masoudi and C. Cavdar, “Device vs edge computing for mobile services: Delay-
aware decision making to minimize power consumption,” IEEE Transactions on
Mobile Computing, 2020.
[17] M. Masoudi, A. Azari, and C. Cavdar, “Low power wide area IoT networks: Re-
liability analysis in coexisting scenarios,” IEEE Wireless Communications Letters,
2021.
[21] M. Masoudi, A. Azari, E. A. Yavuz, and C. Cavdar, “Grant-free radio access IoT
networks: Scalability analysis in coexistence scenarios,” in 2018 IEEE International
Conference on Communications (ICC). IEEE, 2018, pp. 1–7.
[23] A. Azari, M. Masoudi, C. Stefanovic, and C. Cavdar, “IoT networks with grant-
free access: Reliability and energy efficiency analysis in coexistence deployments,”
submitted to IEEE IoT Journal, 2022.
[29] G. Vallero, D. Renga, M. Meo, and M. A. Marsan, “RAN energy efficiency and fail-
ure rate through ANN traffic predictions processing,” Computer Communications,
vol. 183, pp. 51–63, 2022.
[37] E. Dinc, M. Vondra, and C. Cavdar, "Total cost of ownership optimization for direct
air-to-ground communication networks," IEEE Transactions on Vehicular Technol-
ogy, vol. 70, no. 10, pp. 10157–10172, 2021.
[39] ETSI, “Environmental engineering (EE); methodology for environmental life cy-
cle assessment (LCA) of information and communication technology (ICT) goods,
networks and services,” ETSI ES 203 199 V1.3.1, 2015.
[41] European Commission, "International reference life cycle data system (ILCD)
handbook: general guide for life cycle assessment, detailed guidance," Institute for
Environment and Sustainability, 2010.
[50] F. Yaghoubi, M. Furdek, A. Rostami, P. Öhlén, and L. Wosinska, “Design and relia-
bility performance of wireless backhaul networks under weather-induced correlated
failures,” IEEE Transactions on Reliability, 2021.
[51] ETSI, “Energy consumption and CO2 footprint of wireless networks,” Report
RRS05-024, 2011.
[52] G. Vallero, M. Deruyck, M. Meo, and W. Joseph, “Base station switching and edge
caching optimisation in high energy-efficiency wireless access network,” Computer
Networks, vol. 192, p. 108100, 2021.
[54] P. Piunti, C. Cavdar, S. Morosi, K. E. Teka, E. Del Re, and J. Zander, “Energy
efficient adaptive cellular network configuration with QoS guarantee,” in Proc. IEEE
ICC’15. IEEE, 2015, pp. 68–73.
[55] S.-E. Elayoubi, L. Saker, and T. Chahed, “Optimal control for base station sleep
mode in energy efficient radio access networks,” in Proc. IEEE INFOCOM’11.
IEEE, 2011, pp. 106–110.
[57] M. Meo, D. Renga, and Z. Umar, “Advanced sleep modes to comply with delay con-
straints in energy efficient 5G networks,” in 2021 IEEE 93rd Vehicular Technology
Conference (VTC2021-Spring). IEEE, 2021, pp. 1–7.
[58] F. E. Salem, A. Gati, Z. Altman, and T. Chahed, “Advanced sleep modes and their
impact on flow-level performance of 5G networks,” in Proc. IEEE VTC-Fall’17,
Sep. 2017.
[59] F. E. Salem, Z. Altman, A. Gati, T. Chahed, and E. Altman, "Reinforcement learning
approach for advanced sleep modes management in 5G networks," in Proc. IEEE
VTC-Fall'18, Chicago, USA, Aug. 2018.
[60] F. E. Salem, T. Chahed, Z. Altman, and A. Gati, “Traffic-aware advanced sleep
modes management in 5G networks,” in Proc. IEEE WCNC’19, Marrakech, Mo-
rocco, Apr. 2019.
[61] A. El-Amine, M. Iturralde, H. A. H. Hassan, and L. Nuaymi, “A distributed Q-
Learning approach for adaptive sleep modes in 5G networks,” in Proc. IEEE
PIMRC’19. IEEE, 2019, pp. 1–6.
[62] H. Pervaiz, O. Onireti, A. Mohamed, M. A. Imran, R. Tafazolli, and Q. Ni, “Energy-
efficient and load-proportional eNodeB for 5G user-centric networks: A multilevel
sleep strategy mechanism,” IEEE Vehicular Technology Magazine, vol. 13, no. 4,
pp. 51–59, 2018.
[63] M. Masoudi, M. G. Khafagy, E. Soroush, D. Giacomelli, S. Morosi, and C. Cav-
dar, “Reinforcement learning for Traffic-Adaptive sleep mode management in 5G
networks,” in Proc. IEEE PIMRC’20, London, United Kingdom, Aug. 2020.
[64] I. Chih-Lin, J. Huang, R. Duan, C. Cui, J. X. Jiang, and L. Li, “Recent progress
on C-RAN centralization and cloudification,” IEEE Access, vol. 2, pp. 1030–1039,
2014.
[65] D. Pliatsios, P. Sarigiannidis, S. Goudos, and G. K. Karagiannidis, “Realizing
5G vision through cloud RAN: technologies, challenges, and trends,” EURASIP
Journal on Wireless Communications and Networking, vol. 2018, no. 1, p. 136,
May 2018. [Online]. Available: https://doi.org/10.1186/s13638-018-1142-1
[66] “Functional splits and use cases for small cell virtualization,” Small Cell Forum
release, Tech. Rep., 2016.
[67] S. Matoussi, I. Fajjari, S. Costanzo, N. Aitsaadi, and R. Langar, “A user centric
virtual network function orchestration for agile 5G Cloud-RAN,” in 2018 IEEE In-
ternational Conference on Communications (ICC). IEEE, 2018.
[68] N. Carapellese, M. Tornatore, and A. Pattavina, “Energy-efficient baseband unit
placement in a fixed/mobile converged WDM aggregation network,” IEEE J. Select.
Areas Commun., vol. 32, no. 8, pp. 1542–1551, Aug. 2014.
[69] A. Asensio, P. Saengudomlert, M. Ruiz, and L. Velasco, “Study of the centralization
level of optical network-supported cloud RAN,” in 2016 International Conference
on Optical Network Design and Modeling (ONDM), May 2016.
BIBLIOGRAPHY 79
[70] K. Tanaka and A. Agata, “Next-generation optical access networks for C-RAN,” in
Optical Fiber Communication Conference, Mar. 2015.
[71] M. Peng, C. Wang, V. Lau, and H. V. Poor, “Fronthaul-constrained cloud radio
access networks: insights and challenges,” IEEE Wireless Communications, vol. 22,
no. 2, pp. 152–160, Apr. 2015.
[72] D. C. Chen, T. Q. Quek, and M. Kountouris, “Backhauling in heterogeneous cellular
networks: Modeling and tradeoffs,” IEEE Transactions on Wireless Communications,
vol. 14, no. 6, pp. 3194–3206, 2015.
[73] A. A. Widaa Ahmed, K. Chatzimichail, J. Markendahl, and C. Cavdar, “Techno-
economics of green mobile networks considering backhauling,” in European Wireless
2014; 20th European Wireless Conference, May 2014.
[74] M. Mahloo, P. Monti, J. Chen, and L. Wosinska, “Cost modeling of backhaul for
mobile networks,” in 2014 IEEE International Conference on Communications
Workshops (ICC), 2014, pp. 397–402.
[75] C. M. Machuca, M. Kind, K. Wang, K. Casier, M. Mahloo, and J. Chen,
“Methodology for a cost evaluation of migration toward NGOA networks,” IEEE/OSA Journal
of Optical Communications and Networking, vol. 5, no. 12, pp. 1456–1466, 2013.
[76] A. Peralta, E. Inga, and R. Hincapié, “Optimal scalability of FiWi networks based
on multistage stochastic programming and policies,” Journal of Optical
Communications and Networking, vol. 9, no. 12, pp. 1172–1183, 2017.
[77] K. Wang, C. M. Machuca, L. Wosinska, P. Urban, A. Gavler, K. Brunnstrom, and
J. Chen, “Techno-economic analysis of active optical network migration toward
next-generation optical access,” IEEE/OSA Journal of Optical Communications and
Networking, vol. 9, no. 4, pp. 327–341, 2017.
[78] F. Yaghoubi, M. Mahloo, L. Wosinska, P. Monti, F. de Souza Farias, J. C. W. A.
Costa, and J. Chen, “A techno-economic framework for 5G transport networks,”
IEEE Wireless Communications, vol. 25, no. 5, pp. 56–63, 2018.
[79] H. Frank, R. Tessinari, Y. Zhang, Z. Gao, C. Meixner, S. Yan, and D. Simeonidou,
“Resource analysis and cost modeling for end-to-end 5G mobile networks,” in
Optical Network Design and Modeling (ONDM), Oct. 2019.
[80] C. Ranaweera, P. Monti, B. Skubic, E. Wong, M. Furdek, L. Wosinska, C. M.
Machuca, A. Nirmalathas, and C. Lim, “Optical transport network design for 5G
fixed wireless access,” Journal of Lightwave Technology, vol. 37, no. 16, pp. 3893–
3901, Aug. 2019.
[81] T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, “Collaborative mobile edge
computing in 5G networks: New paradigms, scenarios, and challenges,” IEEE
Communications Magazine, vol. 55, no. 4, pp. 54–61, 2017.
[83] J. Kwak, Y. Kim, L. B. Le, and S. Chong, “Hybrid content caching in 5G wireless
networks: Cloud versus edge caching,” IEEE Transactions on Wireless
Communications, vol. 17, no. 5, pp. 3030–3045, 2018.
[84] X. Huang, Z. Zhao, and H. Zhang, “Latency analysis of cooperative caching with
multicast for 5G wireless networks,” in 2016 IEEE/ACM 9th International
Conference on Utility and Cloud Computing (UCC), 2016.
[85] A. Alabbasi and C. Cavdar, “Delay-aware green hybrid CRAN,” in Modeling and
Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), 2017, pp. 1–7.
[87] I. Afolabi, T. Taleb, K. Samdanis, A. Ksentini, and H. Flinck, “Network slicing and
softwarization: A survey on principles, enabling technologies, and solutions,” IEEE
Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2429–2453, 2018.
[92] R. Pasquini and R. Stadler, “Learning end-to-end application QoS from OpenFlow
switch statistics,” in 2017 IEEE Conference on Network Softwarization (NetSoft),
2017, pp. 1–9.
[94] X. Li, R. Ni, J. Chen, Y. Lyu, Z. Rong, and R. Du, “End-to-end network slicing in
radio access network, transport network and core network domains,” IEEE Access,
vol. 8, pp. 29525–29537, 2020.
[95] T. Li, X. Zhu, and X. Liu, “An end-to-end network slicing algorithm based on deep
Q-learning for 5G network,” IEEE Access, vol. 8, pp. 122229–122240, 2020.
[96] H. Chergui and C. Verikoukis, “Offline SLA-constrained deep learning for 5G
networks reliable and dynamic end-to-end slicing,” IEEE Journal on Selected Areas in
Communications, vol. 38, no. 2, pp. 350–360, 2020.
[97] H.-T. Chien, Y.-D. Lin, C.-L. Lai, and C.-T. Wang, “End-to-end slicing with
optimized communication and computing resource allocation in multi-tenant 5G
systems,” IEEE Transactions on Vehicular Technology, vol. 69, no. 2, pp. 2079–2091,
2020.
[98] S. E. Elayoubi, S. B. Jemaa, Z. Altman, and A. Galindo-Serrano, “5G RAN slicing
for verticals: Enablers and challenges,” IEEE Communications Magazine, vol. 57,
no. 1, pp. 28–34, 2019.
[99] X. Shen, J. Gao, W. Wu, K. Lyu, M. Li, W. Zhuang, X. Li, and J. Rao, “AI-assisted
network-slicing based next-generation wireless networks,” IEEE Open Journal of
Vehicular Technology, vol. 1, pp. 45–66, 2020.
[100] L. Feng, Y. Zi, W. Li, F. Zhou, P. Yu, and M. Kadoch, “Dynamic resource allocation
with RAN slicing and scheduling for uRLLC and eMBB hybrid services,” IEEE
Access, vol. 8, pp. 34538–34551, 2020.
[101] H. Xiang, S. Yan, and M. Peng, “A realization of fog-RAN slicing via deep
reinforcement learning,” IEEE Transactions on Wireless Communications, vol. 19,
no. 4, pp. 2515–2527, 2020.
[102] Y. Sun, S. Qin, G. Feng, L. Zhang, and M. Imran, “Service provisioning framework
for RAN slicing: user admissibility, slice association and bandwidth allocation,”
IEEE Transactions on Mobile Computing, 2020.
[103] Y. Cui, X. Huang, P. He, D. Wu, and R. Wang, “A two-timescale resource
allocation scheme in vehicular network slicing,” in 2021 IEEE 93rd Vehicular
Technology Conference (VTC2021-Spring), 2021, pp. 1–5.
[104] H. Zhang and V. W. Wong, “A two-timescale approach for network slicing in C-RAN,”
IEEE Transactions on Vehicular Technology, vol. 69, no. 6, pp. 6656–6669, 2020.
[105] C. Cox, An Introduction to LTE: LTE, LTE-Advanced, SAE, VoLTE and 4G Mobile
Communications, 2nd ed. Wiley Publishing, 2014.
[106] X. Guo, Z. Niu, S. Zhou, and P. Kumar, “Delay-constrained energy-optimal base
station sleeping control,” IEEE Journal on Selected Areas in Communications, vol. 34,
no. 5, pp. 1073–1085, 2016.
[107] J. Liu, B. Krishnamachari, S. Zhou, and Z. Niu, “DeepNap: Data-driven base station
sleeping operations through deep reinforcement learning,” IEEE Internet of Things
Journal, vol. 5, no. 6, pp. 4273–4282, 2018.
[108] J. E. Beyer and B. F. Nielsen, “Predator foraging in patchy environments: the
interrupted Poisson process (IPP) model unit,” Dana (Charlottenlund), vol. 11, 1996.
[112] “CPRI vs eCPRI: What Are Their Differences and Meanings to 5G?”
https://community.fs.com/blog/cpri-vs-ecpri-differences-and-meanings-to-5g.html,
accessed: 2020-04-04.
[114] B. Debaillie, C. Desset, and F. Louagie, “A flexible and future-proof power model
for cellular base stations,” in 2015 IEEE 81st Vehicular Technology Conference
(VTC Spring), 2015, pp. 1–7.
[115] China Mobile Research Institute, “C-RAN the road towards green RAN,” White
Paper, October 2011.
[116] Ericsson AB, Huawei Technologies Co. Ltd, NEC Corporation, Alcatel Lucent, and
Nokia Networks, “Common public radio interface (CPRI); interface specification
v7.0,” Oct. 2015.
[117] D. Liu and C. Yang, “Energy efficiency of downlink networks with caching at base
stations,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 4, pp.
907–922, April 2016.
[118] X. Wang, A. Alabbasi, and C. Cavdar, “Interplay of energy and bandwidth
consumption in CRAN with optimal function split,” in 2017 IEEE International Conference
on Communications (ICC), May 2017, pp. 1–6.
[120] B. Henrik, L. Jan, C. Angelo, and A. Thomas, “Applied network slicing scenarios in
5G,” Ericsson Technology Review, 2021.
[121] R. Su, D. Zhang, R. Venkatesan, Z. Gong, C. Li, F. Ding, F. Jiang, and Z. Zhu,
“Resource allocation for network slicing in 5G telecommunication networks: A
survey of principles and models,” IEEE Network, vol. 33, no. 6, pp. 172–179, 2019.
[122] “Common public radio interface: eCPRI interface specification,” Standard, 2019.
[Online]. Available: http://www.cpri.info/downloads/eCPRI_v_2.0_2019_05_10c.pdf
[123] S. Abdelwahab, B. Hamdaoui, M. Guizani, and T. Znati, “Network function
virtualization in 5G,” IEEE Communications Magazine, vol. 54, no. 4, pp. 84–91, 2016.
[124] M. Yan, G. Feng, J. Zhou, Y. Sun, and Y.-C. Liang, “Intelligent resource scheduling
for 5G radio access network slicing,” IEEE Transactions on Vehicular Technology,
vol. 68, no. 8, pp. 7691–7703, 2019.
[125] A. Alabbasi, M. Ganjalizadeh, K. Vandikas, and M. Petrova, “On cascaded federated
learning for multi-tier predictive models,” in 2021 IEEE International Conference
on Communications Workshops (ICC Workshops), 2021, pp. 1–7.