
NAME – Vishnu Mishra

43513202818

ECE-2



Artificial Intelligence in 5G Technology

Abstract—A fully operative and efficient 5G network cannot be complete without the inclusion of artificial intelligence (AI) routines. Existing 4G networks with all-IP (Internet Protocol) broadband connectivity are based on a reactive conception, leading to poor efficiency of the spectrum. AI and its sub-categories, like machine learning and deep learning, have been evolving as a discipline to the point that this mechanism nowadays allows fifth-generation (5G) wireless networks to be predictive and proactive, which is essential in making the 5G vision conceivable. This paper is motivated by the vision of intelligent base stations that make decisions by themselves, and of mobile devices that create dynamically-adaptable clusters based on learned data rather than pre-established and fixed rules, leading to improvements in the efficiency, latency, and reliability of current and real-time network applications in general. An exploration of the potential of AI-based solution approaches in the context of 5G mobile and wireless communications technology is presented, evaluating the different challenges and open issues for future research.

Index Terms—5G Networks, Artificial Intelligence, IT Convergence, Machine Learning, Deep Learning, Next Generation Network.

I. INTRODUCTION

Artificial Intelligence is well suited to problems whose existing solutions require a lot of hand-tuning or long lists of rules, to complex problems for which there is no good solution at all using traditional approaches, to adaptation in fluctuating environments, to getting insights about complex problems that involve large amounts of data, and in general to noticing the patterns that a human can miss [1]. Hard-coded software can go from a long list of complex rules that are hard to maintain to a system that automatically learns from previous data, detects anomalies, predicts future scenarios, etc. These problems can be tackled by adopting the capability to learn offered by AI, along with the dense amount of transmitted data or wireless configuration datasets.

We have witnessed AI and mobile and wireless systems becoming an essential social infrastructure, mobilizing our daily life and facilitating the digital economy in multiple shapes [2]. However, 5G wireless communications and AI have somehow been perceived as dissimilar research fields, despite the potential they might have when they are fused together.

Certain applications available at this intersection of fields have been addressed within specific topics of AI and next-generation wireless communication systems. Li et al. [3], highlighted the potential of AI as an enabler for cellular networks to cope with the 5G standardization requirements. Bogale et al. [4], discussed Machine Learning (ML) techniques in the context of the fog (edge) computing architecture, aiming to distribute computing power, storage, control and networking functions closer to the users. Jiang et al. [5], focused on the challenges of AI in assisting radio communications with intelligent adaptive learning and decision-making.

The next generation of mobile and wireless communication technologies also requires the use of optimization to minimize or maximize certain objective functions. Many of the problems in mobile and wireless communications are not linear or polynomial; in consequence, they need to be approximated. Artificial neural networks (ANN) are an AI technique that has been suggested to model the objective functions of the non-linear problems that require optimization [6].

In this article, we introduce the potential of AI, from the basic learning algorithms through ML, deep learning, etc., in next-generation wireless networks, helping to fulfill the diversified requirements of the 5G standards: to operate in a fully automated fashion, to meet the increased capacity demand, and to serve users with a superior quality of experience (QoE). The article is divided according to the level of supervision the AI technique requires in the training stage. The major categories addressed in the following sections are supervised learning, unsupervised learning, and reinforcement learning. To understand the difference between these three learning subcategories, a quintessential concept of learning can be invoked: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" [7].

Supervised Learning comprises looking at several examples of a random vector x and an associated label value or vector y, then learning to predict y from x, by estimating p(y | x) or particular properties of that distribution. Unsupervised Learning involves observing different instances of a random vector x and aiming to learn the probability distribution p(x), or its properties. Reinforcement Learning interacts with the environment, with feedback loops between the learning system and its experiences, in terms of rewards and penalties [8].

II. SUPERVISED LEARNING IN 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY

In supervised learning, each training example has to be fed along with its respective label. The notion is to train a learning model on a sample of the problem instances with known optima, and then use the model to recognize optimal solutions to new instances.

A typical task in supervised learning is classification, where the system is trained with multiple examples of each class, along with their labels, and must learn how to classify new instances. Transfer Learning is a popular technique often used to classify vectors. Essentially, one trains a convolutional neural network (CNN) on a very large dataset, for example ImageNet [9], and then fine-tunes the CNN on a different vector dataset. The advantage is that training on the large dataset has already been done by others, who offer the learned weights for public research use. The dataset can change during the implementation, but the strength of AI is that it does not depend on fixed rules; adapting the model to changes over time is therefore a matter of retraining the model with the augmented or modified dataset.
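To make this workflow concrete, the following minimal sketch (assuming TensorFlow/Keras, and a hypothetical five-class target task) loads publicly released ImageNet weights, freezes the convolutional base, and trains only a new classification head:

# Transfer-learning sketch: reuse ImageNet weights, replace the
# classifier head, and fine-tune on a new (placeholder) task.
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained convolutional base; the large-dataset training is already done.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False  # freeze the learned features

# Hypothetical target task with 5 classes.
model = models.Sequential([
    base,
    layers.Dense(5, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds: placeholder tf.data.Dataset of (image, label) batches.
# model.fit(train_ds, epochs=5)
# If the dataset is later augmented or modified, fit() is simply rerun.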
Another typical task of supervised learning is regression or prediction, where the task is to predict a target numerical value, given a set of features/attributes, also called predictors. The key difference from classification is that, with ML algorithms like Logistic Regression, the model can also output the probability that a certain value belongs to a given class.
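As a minimal illustration of this probabilistic output (the feature values below are invented), scikit-learn's LogisticRegression exposes both the predicted class and the class probabilities:

# Logistic regression: the model reports P(class | features),
# not only a hard class label.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical samples: [path_loss_dB, throughput_Mbps] -> link class (0 bad, 1 good)
X = np.array([[120, 2], [95, 48], [130, 1], [90, 60], [110, 12], [100, 35]], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)
x_new = [[105.0, 20.0]]
print(clf.predict(x_new))        # predicted class label
print(clf.predict_proba(x_new))  # probability of each class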
LTE small cells are increasingly being deployed in 5G networks to cope with the high traffic demands. These small-scale cells are characterized by unpredictable and dynamic interference patterns, expanding the demand for self-optimized solutions that can lead to fewer drops, higher data rates, and lower costs for the operators. Self-organizing networks (SON) are expected to learn and dynamically adapt to different environments. For the selection of the optimal network configuration in SONs, several AI-based solutions have been discussed. In [10], machine learning and statistical regression techniques are evaluated (bagging tree, boosted tree, SVM, linear regressors, etc.), gathering radio performance metrics like path loss and throughput for particular frequency and bandwidth settings from the cells, and adjusting the parameters using learning-based approaches to predict the performance that a user will experience, given previous performance measurement instances/samples. The authors showed that the learning-based dynamic frequency and bandwidth allocation (DFBA) prediction methods yield outstanding performance gains, with the bagging tree prediction method as the most promising approach to increase the capacity of next-generation cellular networks.
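A hedged sketch of this prediction idea (synthetic data; [10] uses measured radio metrics): a bagging ensemble of decision trees learns to predict the throughput a user would experience under a given frequency/bandwidth setting, so that candidate configurations can be ranked.

# Bagging-tree regression sketch: predict user throughput from radio
# metrics so candidate frequency/bandwidth settings can be compared.
import numpy as np
from sklearn.ensemble import BaggingRegressor  # default base learner: decision tree

rng = np.random.default_rng(0)
# Synthetic training set: [frequency_MHz, bandwidth_MHz, path_loss_dB]
X = rng.uniform([700, 5, 80], [3500, 20, 130], size=(500, 3))
# Invented ground truth: throughput grows with bandwidth, drops with path loss.
y = 4.0 * X[:, 1] - 0.5 * (X[:, 2] - 80) + rng.normal(0, 2, 500)

model = BaggingRegressor(n_estimators=50, random_state=0).fit(X, y)

# Rank two candidate configurations by predicted throughput.
candidates = np.array([[1800, 10, 100], [2600, 20, 110]], dtype=float)
print(model.predict(candidates))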
An extensive interest in path-loss prediction has arisen since researchers noticed the power of AI to build more efficient and accurate path-loss models based on publicly available datasets [11]. The use of AI has been proved to provide adaptability to network designers who rely on signal propagation models. Timoteo et al. [12], proposed a path loss prediction model for urban environments using support vector regression to ensure an acceptable level of quality of service (QoS) for wireless network users. They employed different kernels and parameters over the Okumura-Hata model and the Ericsson 9999 model, and obtained results similar to those of a complex neural network, but with a lower computational complexity.
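In the spirit of [12], a small sketch (synthetic distance/frequency samples with an invented log-distance ground truth, not the authors' dataset) shows how support vector regression with different kernels can fit a path-loss curve:

# Support vector regression sketch for path-loss prediction.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
d = rng.uniform(0.05, 5.0, 800)    # distance in km
f = rng.uniform(800, 2600, 800)    # carrier frequency in MHz
# Invented log-distance ground truth with shadowing noise.
pl = 32.4 + 35.0 * np.log10(d) + 20.0 * np.log10(f) + rng.normal(0, 6, 800)

X = np.column_stack([np.log10(d), np.log10(f)])
X_tr, X_te, y_tr, y_te = train_test_split(X, pl, random_state=0)

for kernel in ("rbf", "linear", "poly"):   # compare kernels, as in [12]
    model = SVR(kernel=kernel, C=100.0).fit(X_tr, y_tr)
    print(kernel, "R^2 =", round(model.score(X_te, y_te), 3))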
Wireless communications rely heavily on channel state information (CSI) to make informed decisions in the operation of the network and during signal processing. Liu et al. [13], investigated the unobservable CSI for wireless communications and proposed a neural-network-based approximation for channel learning, to infer this unobservable information from an observable channel. Their framework was built upon the dependence between channel responses and location information. To build the supervised learning framework, they train the network with channel samples, where the unobservable metrics can be calculated from traditional pilot-aided channel estimation. The applications of their work can be extended to cell selection in multi-tier networks, device discovery for device-to-device (D2D) communications, or end-to-end user association for load balancing, among others.

Sarigiannidis et al. [14], used a machine-learning framework based on supervised learning in a Software-Defined-Radio-enabled hybrid optical wireless network. The machine-learning framework receives traffic-aware knowledge from the SDN controllers and adjusts the uplink-downlink configuration in the LTE radio communication. The authors argue that their mechanism is capable of determining the best configuration based on the traffic dynamics of the hybrid network, offering significant network improvements in terms of jitter and latency.

A common AI architecture used to model or approximate objective functions for existing models, or to create accurate models that were impossible to represent in the past without the intervention of learning machines, is the Artificial Neural Network (ANN). ANNs have been proposed to solve propagation-loss estimation in dynamic environments, where the input parameters can be selected from information about the transmitter, the receiver, buildings, frequency, and so on, and the learning network will train on that data to estimate the function that best approximates the propagation loss for next-generation wireless networks [15]–[18]. In the same context, Ayadi et al. [19], proposed a multi-layer perceptron (MLP) architecture to predict coverage for either short or long distances, in multiple frequencies, and in all environmental types. The MLP presented uses feed-forward training with back propagation to update the weights of the ANN. They used the inputs of the ITU-R P.1812-4 model [20] to feed their network, composed of an input layer, a hidden layer, and one output layer. They showed that the ANN model is more accurate than the ITU model in predicting coverage in outdoor environments, using the standard deviation and correlation factor as comparison measures.

Among other AI techniques with potential for wireless communications are K-Nearest Neighbors, Logistic Regression, Decision Trees and Random Forests. Table I shows a summary of the potential applications of supervised learning in 5G wireless communication technologies.
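A minimal sketch of an Ayadi-style predictor [19] (one hidden layer trained by backpropagation; the inputs and target below are synthetic stand-ins for the ITU-R P.1812-4 parameters, not the authors' data):

# One-hidden-layer MLP trained with backpropagation to estimate coverage
# (field strength) from propagation inputs, in the spirit of [19].
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Synthetic stand-ins: [distance_km, frequency_MHz, antenna_height_m]
X = rng.uniform([0.1, 100, 10], [20.0, 3000, 100], size=(1000, 3))
# Invented target: field strength decreasing with distance and frequency.
y = 120 - 30 * np.log10(X[:, 0]) - 15 * np.log10(X[:, 1]) + 5 * np.log10(X[:, 2])

model = make_pipeline(
    StandardScaler(),  # scale inputs so backprop converges reliably
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training R^2:", round(model.score(X, y), 3))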
III. UNSUPERVISED LEARNING IN 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY

In unsupervised learning, the training data is unlabeled, and the system attempts to learn without any guidance. This technique is particularly useful when we want to detect groups with similar characteristics. At no point do we tell the algorithm to try to detect groups of related attributes; the algorithm finds this connection without intervention. However, in some cases, we can select the number of clusters we want the algorithm to create.

Balevi et al. [21], incorporated fog networking into heterogeneous cellular networks and used an unsupervised soft-clustering algorithm to locate the fog nodes that are upgraded from low power nodes (LPNs) to high power nodes (HPNs). The authors showed that by applying machine-learning clustering to a priori known data, like the number of fog nodes and the locations of all LPNs within a cell, they were able to determine a clustering configuration that reduced the latency in the network. The latency calculation was performed with open-loop communications, with no ACK for transmitted packets, and compared to the Voronoi tessellation model, a classical model based on Euclidean distance.
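A soft-clustering step of this kind can be approximated with a Gaussian mixture model: cluster the known LPN positions, read out per-node membership probabilities, and promote one node per cluster. The coordinates and the promotion rule below are illustrative assumptions, not the exact algorithm of [21].

# Soft-clustering sketch: group LPN positions with a Gaussian mixture and
# pick one LPN per cluster as a candidate for upgrade to HPN.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
lpn_xy = rng.uniform(0, 1000, size=(60, 2))   # synthetic LPN coordinates (m)

gmm = GaussianMixture(n_components=4, random_state=0).fit(lpn_xy)
membership = gmm.predict_proba(lpn_xy)        # soft assignments, one row per LPN

for k, centre in enumerate(gmm.means_):
    # Promote the LPN nearest to the cluster centre (illustrative rule).
    idx = np.argmin(np.linalg.norm(lpn_xy - centre, axis=1))
    print(f"cluster {k}: upgrade LPN {idx} at {lpn_xy[idx].round(1)}")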
A typical unsupervised learning technique is K-means clustering; numerous authors have investigated the applications of this particular clustering technique in next-generation wireless network systems. Sobabe et al. [22], proposed a cooperative spectrum-sensing algorithm using a combination of an optimized version of K-means clustering, a Gaussian mixture model, and the expectation-maximization (EM) algorithm. They proved that their learning algorithm outperformed the energy vector-based algorithm. Song et al. [23], discussed how the K-means clustering algorithm and its classification capabilities can aid efficient relay selection in urban vehicular networks. The authors investigated methods for multi-hop wireless broadcast and how K-means is a key factor in the decision-making and learning steps of the base stations, which learn from the distribution of the devices and automatically choose the most suitable devices to use as relays.
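A toy version of such a K-means step (synthetic vehicle positions; the relay rule is a simplification, not the method of [23]): cluster the devices and treat the one nearest each centroid as the relay candidate.

# K-means sketch: cluster vehicle positions and pick per-cluster relays.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
vehicles = rng.uniform(0, 2000, size=(120, 2))   # synthetic positions (m)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(vehicles)
for k, centroid in enumerate(km.cluster_centers_):
    members = np.where(km.labels_ == k)[0]
    # Relay candidate: the cluster member closest to the centroid.
    relay = members[np.argmin(np.linalg.norm(vehicles[members] - centroid, axis=1))]
    print(f"cluster {k}: relay candidate = vehicle {relay}")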
When a wireless network experiences unusual traffic demand at a particular time and location, it is often called an anomaly. To help identify these anomalies, Parwez et al. [24], used mobile network data for anomaly detection, with the help of hierarchical clustering to identify this kind of inconsistency. The authors claim that the detection of these data deviations helps to establish regions of interest in the network that require special actions, such as resource allocation or fault-avoidance solutions.
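One hedged way to mimic such a hierarchical-clustering step on synthetic traffic features (this flagging rule is an illustration, not the procedure of [24]): cut the dendrogram into a few clusters and flag the smallest, most isolated one as anomalous.

# Hierarchical-clustering sketch: flag a small, isolated cluster of
# (hour, traffic_volume) samples as candidate anomalies.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
normal = np.column_stack([rng.uniform(0, 24, 200), rng.normal(100, 10, 200)])
spikes = np.column_stack([rng.uniform(0, 24, 5), rng.normal(400, 20, 5)])
X = np.vstack([normal, spikes])                  # synthetic traffic records

labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
sizes = np.bincount(labels)
anomalous = np.argmin(sizes)                     # smallest cluster
print("anomalous samples:", np.where(labels == anomalous)[0])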
Ultra-dense small cells (UDSC) are expected to increase the capacity, spectrum efficiency and energy efficiency of the network. To consider the effects of cell switching, dynamic interference, time-varying user density, dynamic traffic patterns, and changing frequencies, Wang et al. [25], proposed a data-driven resource management scheme for UDSC using Affinity Propagation, an unsupervised learning clustering approach, to perform data analysis and extract the knowledge and behavior of the system under complex environments. They then introduced a power control and channel management system based on the results of the unsupervised learning algorithm. They conclude their research stating that, by means of simulation, their data-driven resource management framework significantly improved the energy efficiency and throughput in UDSC.
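Affinity Propagation, unlike K-means, chooses the number of clusters itself and designates actual data points as exemplars. A sketch of how it could group user positions in a dense deployment (synthetic coordinates, not the dataset of [25]):

# Affinity Propagation sketch: cluster UE positions without fixing the
# number of clusters; exemplars are real UEs and could anchor power control.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(6)
ue_xy = rng.uniform(0, 500, size=(80, 2))        # synthetic UE coordinates (m)

ap = AffinityPropagation(random_state=0).fit(ue_xy)
print("clusters found:", len(ap.cluster_centers_indices_))
print("exemplar UEs:", ap.cluster_centers_indices_)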
Alternative clustering models like K-Means, Mini-Batch K-Means, Mean-Shift clustering, DBSCAN, Agglomerative Clustering, etc., can be used to associate the users with a certain base station in order to optimize the user equipment (UE) and base station (BS) transmitting/receiving power. Table II shows a summary of the potential applications of unsupervised learning in 5G wireless communication technologies.
TABLE I
SUMMARY OF SUPERVISED LEARNING-BASED SCHEMES FOR 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY.

AI Technique: Supervised Learning.

Learning Model | 5G-based Applications
Machine learning and statistical regression techniques | Dynamic frequency and bandwidth allocation in self-organized LTE dense small cell deployments (as in [10]).
Support Vector Machines (SVM) | Path loss prediction model for urban environments (as in [12]).
Neural-network-based approximation | Channel learning to infer unobservable channel state information (CSI) from an observable channel (as in [13]).
Supervised machine learning frameworks | Adjustment of the TDD uplink-downlink configuration in XG-PON-LTE systems to maximize the network performance based on the ongoing traffic conditions in the hybrid optical-wireless network (as in [14]).
Artificial Neural Networks (ANN) and Multi-Layer Perceptrons (MLP) | Modelling and approximation of objective functions for link budget and propagation loss for next-generation wireless networks (as in [15]–[19]).

TABLE II
SUMMARY OF UNSUPERVISED LEARNING-BASED SCHEMES FOR 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY.

AI Technique: Unsupervised Learning.

Learning Model | 5G-based Applications
K-means clustering, Gaussian Mixture Model (GMM), and Expectation Maximization (EM) | Cooperative spectrum sensing (as in [22]). Relay node selection in vehicular networks (as in [23]).
Hierarchical clustering | Anomaly/fault/intrusion detection in mobile wireless networks (as in [24]).
Unsupervised soft-clustering machine learning framework | Latency reduction by clustering fog nodes to automatically decide which low power node (LPN) is upgraded to a high power node (HPN) in heterogeneous cellular networks (as in [21]).
Affinity Propagation clustering | Data-driven resource management for ultra-dense small cells (as in [25]).

TABLE III
SUMMARY OF REINFORCEMENT LEARNING-BASED SCHEMES FOR 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY.

AI Technique: Reinforcement Learning.

Learning Model | 5G-based Applications
Reinforcement learning algorithm based on long short-term memory (RL-LSTM) cells | Proactive resource allocation in LTE-U networks, formulated as a non-cooperative game which enables SBSs to learn which unlicensed channel to use, given the long-term WLAN activity in the channels and the LTE-U traffic loads (as in [26]).
Gradient follower (GF), modified Roth-Erev (MRE), and modified Bush and Mosteller (MBM) | Enable Femto Cells (FCs) to autonomously and opportunistically sense the radio environment and tune their parameters in HetNets, to reduce intra/inter-tier interference (as in [29]).
Reinforcement learning with network-assisted feedback | Heterogeneous Radio Access Technology (RAT) selection (as in [30]).

IV. REINFORCEMENT LEARNING IN 5G MOBILE AND WIRELESS COMMUNICATIONS TECHNOLOGY

The philosophy of the Reinforcement Learning scheme is based on a learning system, often called an agent, that reacts to the environment. The agent performs actions and gets rewards or penalties (negative rewards) in return for its actions. That means the agent has to learn by itself, creating a policy that defines the action the agent should choose in a certain situation. The aim of the reinforcement-learning task is to maximize the aforementioned reward over time.

Resource allocation in Long Term Evolution (LTE) networks has been a dilemma since the technology was introduced. To overcome the wireless spectrum scarcity in 5G, a novel deep learning approach that accounts for the coexistence of LTE and LTE-Unlicensed (LTE-U), modeling the resource allocation problem in LTE-U small base stations (SBSs), has been introduced in [26]. To accomplish their contribution, the authors introduced a reinforcement-learning algorithm based on long short-term memory (RL-LSTM) cells to proactively allocate the resources of LTE-U over the unlicensed spectrum. The problem formulated resembles a non-cooperative game between the SBSs, where an RL-LSTM framework enables the SBSs to learn automatically which of the unlicensed channels to use, based on the probability of future changes in terms of the WLAN activity and the LTE-U traffic loads of the unlicensed channels. This work takes into account the value of LTE-U as a proposal that allows cellular network operators to offload some of their data traffic, and the role of AI, in the form of RL-LSTM, as a promising solution to long-term dependency learning, sequence, and time-series problems. Nevertheless, researchers should be warned that this deep learning architecture is one of the most difficult to train, due to the vanishing and exploding gradient problem in Recurrent Neural Networks (RNN) [27], the speed of the activation functions, as well as the initialization of parameters for LSTM systems [28].

Reinforcement Learning has also played an important role in Heterogeneous Networks (HetNets), enabling Femto Cells (FCs) to autonomously and opportunistically sense the radio environment and tune their parameters accordingly to satisfy specific pre-set quality-of-service requirements. Alnwaimi et al. [29], proved that by using reinforcement learning for femtocell self-configuration, based on dynamic-learning games for a multi-objective fully-distributed strategy, the intra/inter-tier interference can be reduced significantly. The collision and reconfiguration measurements were used as a "learning cost" during training. This self-organizing potential empowers FCs to identify available spectrum for opportunistic use, based on the learned parameters.

5G wireless networks will also contain multiple radio access technologies (RATs). However, selecting the right RAT is a latent problem in terms of speed, exploration times, and convergence. Nguyen et al. [30], developed a feedback framework using limited network-assisted information from the base stations (BSs) to improve the efficiency of distributed algorithms for RAT selection. The framework used reinforcement learning with network-assisted feedback to overcome the aforementioned problems. Table III shows a summary of the potential applications of reinforcement learning in 5G wireless communication technologies.
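To ground the agent/reward loop described above, here is a minimal tabular Q-learning sketch for unlicensed-channel selection. It is a toy, stateless stand-in for the RL-LSTM scheme of [26], with invented per-channel busy probabilities:

# Tabular Q-learning sketch: an agent learns which unlicensed channel to
# pick; reward is +1 for an idle channel, -1 for a busy one (toy model).
import numpy as np

rng = np.random.default_rng(7)
busy_prob = np.array([0.8, 0.3, 0.6])   # invented WLAN activity per channel
q = np.zeros(3)                          # one action value per channel
alpha, epsilon = 0.1, 0.1                # learning rate, exploration rate

for step in range(5000):
    # epsilon-greedy action selection: mostly exploit, sometimes explore.
    a = rng.integers(3) if rng.random() < epsilon else int(np.argmax(q))
    reward = -1.0 if rng.random() < busy_prob[a] else 1.0
    q[a] += alpha * (reward - q[a])      # incremental value update

print("learned values:", q.round(2))     # highest value -> least busy channel

An LSTM-based agent, as in [26], replaces this lookup table with a recurrent network so that long-term temporal patterns in channel activity can inform the policy.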
V. CONCLUSION

After exploring some of the successful cases where AI is used as a tool to improve 5G technologies, we strongly believe that the convergence between these two fields of expertise will have an enormous impact on the development of future-generation networks. The era when wireless network researchers were afraid to use AI-based algorithms, due to the lack of understanding of the artificial-learning process, has been left in the past. Nowadays, with the power and ubiquity of information, numerous researchers are adapting their knowledge and expanding their tool arsenals with AI-based models, algorithms and practices, especially in the 5G world, where even a few milliseconds of latency can make a difference. A reliable 5G system requires extremely low latency, which is why not everything can be stored in remote cloud servers far away. Latency increases with distance and with the congestion of network links. Base stations have a limited storage size, so they have to learn to predict user needs by applying a variety of artificial intelligence tools. With these tools, every base station will be able to store a reduced but adequate set of files or contents. This is one example of why our future networks must be predictive, and of how Artificial Intelligence becomes crucial in optimizing this kind of problem in the network. An additional goal of linking AI with 5G networks would be to obtain significant improvements in the context of edge caching just by applying off-the-shelf machine learning algorithms. We have shown how AI can be a solution that fills this gap of requirements in 5G mobile and wireless communications: allowing base stations to predict what kind of content nearby users may request in the near future; allocating dynamic frequencies in self-organized LTE dense small cell deployments; predicting path loss/link budget with approximated NN models; inferring the unobservable channel state information from an observable channel; adjusting the TDD uplink-downlink configuration in XG-PON-LTE systems based on ongoing network conditions; sensing the spectrum using unsupervised models; reducing the latency by automatically configuring the clusters in HetNets; detecting anomalies/faults/intrusions in mobile wireless networks; managing the resources in ultra-dense small cells; selecting the relay nodes in vehicular networks; allocating the resources in LTE-U networks; enabling autonomous and opportunistic sensing of the radio environment in femtocells; and selecting the optimal radio access technology (RAT) in HetNets, among others.

ACKNOWLEDGMENT

This work was supported by the Global Excellent Technology Innovation Program (10063078) funded by the Ministry of Trade, Industry and Energy (MOTIE) of Korea; and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP; Ministry of Science, ICT & Future Planning) (No. 2017R1C1B5016837).
REFERENCES

[1] A. Geron, "Hands-on machine learning with Scikit-Learn and TensorFlow: concepts, tools, and techniques to build intelligent systems," p. 543, 2017. [Online]. Available: http://shop.oreilly.com/product/0636920052289.do
[2] A. Osseiran, J. F. Monserrat, and P. Marsch, 5G Mobile and Wireless Communications Technology, 1st ed. United Kingdom: Cambridge University Press, 2017. [Online]. Available: www.cambridge.org/9781107130098
[3] R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and H. Zhang, "Intelligent 5G: When Cellular Networks Meet Artificial Intelligence," IEEE Wireless Communications, vol. 24, no. 5, pp. 175–183, 2017. [Online]. Available: http://www.rongpeng.info/files/Paper_wcm2016.pdf
[4] T. E. Bogale, X. Wang, and L. B. Le, "Machine Intelligence Techniques for Next-Generation Context-Aware Wireless Networks," ITU Special Issue: The impact of Artificial Intelligence (AI) on communication networks and services, vol. 1, 2018. [Online]. Available: http://arxiv.org/abs/1801.04223
[5] C. Jiang, H. Zhang, Y. Ren, Z. Han, K. C. Chen, and L. Hanzo, "Machine Learning Paradigms for Next-Generation Wireless Networks," IEEE Wireless Communications, 2017.
[6] G. Villarrubia, J. F. De Paz, P. Chamoso, and F. D. la Prieta, "Artificial neural networks used in optimization problems," Neurocomputing, vol. 272, pp. 10–16, 2018.
[7] T. M. Mitchell, Machine Learning, 1st ed. McGraw-Hill Science/Engineering/Math, 1997. [Online]. Available: https://www.cs.ubbcluj.ro/~gabis/ml/ml-books/McGrawHill-MachineLearning-TomMitchell.pdf
[8] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, 1st ed., T. Dietterich, Ed. London, England: The MIT Press, 2016. [Online]. Available: www.deeplearningbook.org
[9] Stanford Vision Lab, Stanford University, and Princeton University, "ImageNet." [Online]. Available: http://image-net.org/about-overview
[10] B. Bojović, E. Meshkova, N. Baldo, J. Riihijärvi, and M. Petrova, "Machine learning-based dynamic frequency and bandwidth allocation in self-organized LTE dense small cell deployments," EURASIP Journal on Wireless Communications and Networking, vol. 2016, no. 1, 2016. [Online]. Available: https://jwcn-eurasipjournals.springeropen.com/track/pdf/10.1186/s13638-016-0679-0
[11] S. Y. Han and N. B. Abu-Ghazaleh, "Efficient and Consistent Path Loss Model for Mobile Network Simulation," IEEE/ACM Transactions on Networking, vol. PP, no. 99, pp. 1–1, 2015.
[12] R. D. A. Timoteo, D. C. Cunha, and G. D. C. Cavalcanti, "A Proposal for Path Loss Prediction in Urban Environments using Support Vector Regression," Advanced International Conference on Telecommunications, vol. 10, no. c, pp. 119–124, 2014.
[13] J. Liu, R. Deng, S. Zhou, and Z. Niu, "Seeing the unobservable: Channel learning for wireless communication networks," 2015 IEEE Global Communications Conference, GLOBECOM 2015, 2015.
[14] P. Sarigiannidis, A. Sarigiannidis, I. Moscholios, and P. Zwierzykowski, "DIANA: A Machine Learning Mechanism for Adjusting the TDD Uplink-Downlink Configuration in XG-PON-LTE Systems," Mobile Information Systems, vol. 2017, no. c, 2017.
[15] S. P. Sotiroudis, S. K. Goudos, K. A. Gotsis, K. Siakavara, and J. N. Sahalos, "Application of a Composite Differential Evolution Algorithm in Optimal Neural Network Design for Propagation Path-Loss Prediction in Mobile Communication Systems," IEEE Antennas and Wireless Propagation Letters, vol. 12, pp. 364–367, 2013.
[16] J. M. Mom, C. O. Mgbe, and G. A. Igwue, "Application of Artificial Neural Network For Path Loss Prediction In Urban Macrocellular Environment," American Journal of Engineering Research (AJER), vol. 03, no. 02, pp. 270–275, 2014.
[17] I. Popescu, D. Nikitopoulos, I. Nafornita, and P. Constantinou, "ANN prediction models for indoor environment," IEEE International Conference on Wireless and Mobile Computing, Networking and Communications 2006, WiMob 2006, pp. 366–371, 2006.
[18] S. P. Sotiroudis, K. Siakavara, and J. N. Sahalos, "A Neural Network Approach to the Prediction of the Propagation Path-loss for Mobile Communications Systems in Urban Environments," PIERS Online, vol. 3, no. 8, pp. 1175–1179, 2007.
[19] M. Ayadi, A. Ben Zineb, and S. Tabbane, "A UHF Path Loss Model Using Learning Machine for Heterogeneous Networks," IEEE Transactions on Antennas and Propagation, vol. 65, no. 7, pp. 3675–3683, 2017.
[20] International Telecommunication Union, "A path-specific propagation prediction method for point-to-area terrestrial services in the VHF and UHF bands," ITU P-Series Radiowave Propagation, no. P.1812-4, pp. 1–35, 2015. [Online]. Available: https://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.1812-4-201507-I!!PDF-E.pdf
[21] E. Balevi and R. D. Gitlin, "Unsupervised machine learning in 5G networks for low latency communications," 2017 IEEE 36th International Performance Computing and Communications Conference, IPCCC 2017, pp. 1–2, 2018.
[22] G. Sobabe, Y. Song, X. Bai, and B. Guo, "A cooperative spectrum sensing algorithm based on unsupervised learning," 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI 2017), vol. 1, pp. 198–201, 2017.
[23] W. Song, F. Zeng, J. Hu, Z. Wang, and X. Mao, "An Unsupervised-Learning-Based Method for Multi-Hop Wireless Broadcast Relay Selection in Urban Vehicular Networks," IEEE Vehicular Technology Conference, vol. 2017-June, 2017.
[24] M. S. Parwez, D. B. Rawat, and M. Garuba, "Big data analytics for user-activity analysis and user-anomaly detection in mobile wireless network," IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 2058–2065, 2017.
[25] L.-C. Wang and S. H. Cheng, "Data-Driven Resource Management for Ultra-Dense Small Cells: An Affinity Propagation Clustering Approach," IEEE Transactions on Network Science and Engineering, pp. 1–1, 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8369148/
[26] U. Challita, L. Dong, and W. Saad, "Deep learning for proactive resource allocation in LTE-U networks," in European Wireless 2017 - 23rd European Wireless Conference, 2017. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8011311
[27] R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," Tech. Rep., 2013. [Online]. Available: http://proceedings.mlr.press/v28/pascanu13.pdf
[28] Q. V. Le, N. Jaitly, and G. E. Hinton, "A Simple Way to Initialize Recurrent Networks of Rectified Linear Units," Tech. Rep., 2015. [Online]. Available: https://arxiv.org/pdf/1504.00941v2.pdf
[29] G. Alnwaimi, S. Vahid, and K. Moessner, "Dynamic heterogeneous learning games for opportunistic access in LTE-based macro/femtocell deployments," IEEE Transactions on Wireless Communications, vol. 14, no. 4, pp. 2294–2308, 2015.
[30] D. D. Nguyen, H. X. Nguyen, and L. B. White, "Reinforcement Learning with Network-Assisted Feedback for Heterogeneous RAT Selection," IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 6062–6076, 2017.
