
Machine-learning-based spectrum sensing enhancement for software-defined radio applications


Shirin AGHABEIKI, Direction R&I, Capgemini Engineering, Rennes, France (shirin.aghabeiki@altran.com)
Christophe HALLET, Direction R&I, Capgemini Engineering, Rennes, France (christophe.hallet@altran.com)
Nathan El-Roi NOUTEHOU, Direction R&I, Capgemini Engineering, Rennes, France (nathanel-roi.noutehou@altran.com)
Nadège RASSEM, Direction R&I, Capgemini Engineering, Rennes, France (nadege.rassem@altran.com)
Imad ADJALI, Direction R&I, Capgemini Engineering, Rennes, France (imad.adjali@altran.com)
Mouna BEN MABROUK, Direction R&I, Capgemini Engineering, Paris, France (mouna.benmabrouk@altran.com)

2021 IEEE Cognitive Communications for Aerospace Applications Workshop (CCAAW), DOI: 10.1109/CCAAW50069.2021.9527294

Abstract—The software-defined radio (SDR) technology is considered as a promising solution to address the issue of spectrum scarcity by providing a high level of flexibility and configurability to wireless communication systems. This technology, widely used in cognitive radio (CR) networks, aims at optimizing the spectrum efficiency. However, to reach this objective, a new conception of spectrum sensing methods needs to be adopted. Indeed, in emerging wireless communication systems, telecom researchers face the challenge of detecting the presence of energy with a very low signal-to-noise ratio (SNR) in a very complex environment. For this purpose, in complex environments such as CR networks, machine-learning (ML) algorithms appear to be a suitable solution enabling a smart spectrum-sensing scheme. This scheme increases the signal detection probability and decreases the false detection probability. In this paper, four supervised ML models are compared: naïve Bayes classifier, support vector machine, gradient boosting machine and distributed random forest. Furthermore, a principal component analysis (PCA) is performed to reduce the dimensionality of the complex dataset while preserving the useful variability. Their performance is subsequently evaluated by computing their receiver operating characteristic (ROC) curves. The algorithms are then implemented and demonstrated using the GNU (GNU's Not Unix) Radio platform and an SDR-based electronic card, the HackRF One.

Keywords—software-defined radio, cognitive radio, spectrum sensing, machine learning, naïve Bayes classifier, support vector machine, gradient boosting machine, distributed random forest.

I. INTRODUCTION

Since the first article published on software radios in 1995 [1], the software-defined radio (SDR) technology has revolutionized wireless communication. Expensive and heavy analog systems that were designed for a specific purpose are replaced by low-cost, lightweight SDR systems whose flexibility enables the design of multi-purpose systems [3]. Many applications rely on SDR to simplify their use in communication systems; cognitive radio (CR) and spectrum efficiency are examples of such applications [2].

CR, introduced by Joseph Mitola III in 1998 [4], is the key solution to one of the most challenging issues in wireless communications: spectral congestion [5]. Radio use has outgrown the available spectrum, and finding an unassigned frequency band is a challenging technical task [6]. Meanwhile, the number of wireless communication applications and their users is increasing every day, which causes a growth in the requests to use the limited frequency spectrum [7]. CR allows the use of unoccupied frequency bands in a specific place and time and provides access management in a way that does not interfere with primary licensed users [8]. Spectrum sensing is an important step in the CR cycle [9], as it allows the identification of available frequency bands, also known as spectrum holes [10].

Different methods have been proposed for spectrum sensing, all of which aim to verify the presence or absence of a transmitted signal [11]; in [11], the latter is defined as an unknown signal from a primary user. Energy detection has been widely used due to its low computational complexity and its independence from additional information about the transmitted signal under detection [9]. However, this method has a major drawback when computing the energy at a low signal-to-noise ratio (SNR) [12]. Indeed, it performs poorly for spread spectrum sensing.

Machine-learning (ML) based spectrum sensing has been proposed to provide high detection accuracy with low-complexity algorithms, which are able to adapt to the environment by optimizing more features [13-14]. In this paper, four supervised ML algorithms are compared according to their receiver operating characteristic (ROC) curve and their area under curve (AUC). The naïve Bayes classifier (NB), support vector machine (SVM), gradient boosting machine (GBM) and distributed random forest (DRF) all achieved a more accurate sensing than energy detection in a high-noise environment. Their performances are evaluated in several channels with various SNRs. The principal component analysis method is applied to the received signal in order to improve the detection accuracy. The implementation of the algorithms on an SDR card, the HackRF One, leads to an experimental validation.

The rest of the paper is organized as follows. In Section II, the CR system model and the assumptions are presented. The machine-learning spectrum sensing algorithms are introduced and the use of these algorithms in our proposed model is shown in Section III. In Section IV, the entire simulated transmission chain and its essential blocks are presented, as well as the obtained results. The implementation of the algorithms on the HackRF One via GNU Radio is demonstrated in Section V. A discussion of the simulation and experimental results is given in Section VI. Finally, the conclusion and future works are provided in Section VII.


II. SYSTEM MODEL AND ASSUMPTIONS

In this Section, we study a cognitive radio unit that is responsible for a secondary-user (SU) access to an available and authorized frequency band. The SU should not disturb the primary-user access. Therefore, it is important to verify the spectrum availability and occupancy in the radio environment. By supposing an additive white Gaussian noise (AWGN) channel, the signal detection is possible by employing the following hypotheses [15]:

$\mathcal{H}_0:\ x[n] = w[n]$    (1)

$\mathcal{H}_1:\ x[n] = s[n] + w[n]$    (2)

where $w[n]$ is the noise signal and $s[n]$ is the primary-user signal, which is distorted by the zero-mean additive white Gaussian noise $w[n] \sim \mathcal{N}(0, \sigma^2)$ [16]. By considering these two hypotheses, the signal detection becomes a binary classification problem:

• $\mathcal{H}_0$ indicates the absence of primary users.
• $\mathcal{H}_1$ corresponds to their presence.

The evaluation of the effectiveness of a signal detection method is based on the probability that these hypotheses are satisfied. When a signal is detected, there are two possibilities: a signal is truly there, or it is a false detection event, which means that the primary user was absent. The probabilities of these two cases are respectively denoted as the true positive rate or sensitivity ($P_d$) and the false positive rate ($P_{fa}$). These two variables are sufficient for the evaluation and comparison of signal detection methods and there is no need to compute further criteria, since $P_d$ and $P_{fa}$ are complementary to the probability of a false negative (a signal is there but is not detected) and of a true negative (the absence of signal is truly detected), respectively. It is important to mention that the purpose of optimizing signal detection is to increase the true positive rate $P_d$ while decreasing the false positive rate $P_{fa}$, in other words to increase the sensitivity of the detection to the extent that signals are declared present only when they truly are.

As mentioned above, determining the presence of a signal based on its energy is the simplest way, as it does not require additional information about the received signal. In this work, the energy of a received signal is computed over a limited series of signal samples and is evaluated according to the following equation:

$E = \sum_{n=n_0}^{n_0+N-1} |x[n]|^2$    (3)

where $n_0$ is the first sample and $N$ is the series length. Computing the energy of a set of samples instead of only one sample introduces a memory effect in the results, which helps in signal detection even with high noise or disturbance. $N$ deserves further study and optimization since it determines the detection duration. In simple terms, the higher $N$ is, the more samples are included in the energy computation, and subsequently the detection takes longer, which means a shorter channel allocation time for the secondary user. This article does not focus on optimizing this variable, and it is instead chosen to be constant across all the implemented algorithms.
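For illustration, the energy statistic of Eq. (3) and its comparison against a threshold can be sketched in Python as below; the block length and threshold value are placeholders chosen for the example, not the settings used in this work.

```python
import numpy as np

def block_energy(x, n0, N):
    """Energy of N consecutive samples starting at index n0, as in Eq. (3)."""
    return np.sum(np.abs(x[n0:n0 + N]) ** 2)

def energy_detector(x, N, threshold):
    """Split x into blocks of N samples and declare H1 where the block energy exceeds the threshold."""
    n_blocks = len(x) // N
    energies = np.array([block_energy(x, i * N, N) for i in range(n_blocks)])
    return energies, (energies > threshold).astype(int)

# Toy usage on noise-only samples (hypothesis H0), with assumed N and threshold
rng = np.random.default_rng(0)
noise_only = rng.normal(0.0, 1.0, 4410)
energies, decisions = energy_detector(noise_only, N=147, threshold=200.0)
```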
Once the signal energy is calculated, a threshold must be set in such a manner that if the signal energy is below the threshold level, then hypothesis $\mathcal{H}_0$ is considered correct; otherwise $\mathcal{H}_1$. The ML algorithms can enhance the signal detection by determining the appropriate threshold, which takes into account all the signal properties and the propagation channel features. Although ML algorithms are powerful at detecting signals in high-noise environments, any method that helps to reduce the noise can improve the system performance. For this purpose, we propose to use a band-pass Butterworth filter as well as the principal component analysis (PCA) method in the receiver. The effect of PCA is briefly explained below.

The PCA method is a projection method that is able to extract the main uncorrelated data from a larger set of correlated data. After passing through this analysis, only the principal characteristics of the data are retained; as a result, the computation time is considerably reduced [17].
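As an illustration of this pre-processing stage, the sketch below filters blocks of received samples with a Butterworth band-pass filter and then projects them onto their principal components; the pass-band edges and the number of retained components are assumptions made for the example, not the exact configuration of the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import PCA

fs = 44100.0                                   # sampling rate used later in the paper
sos = butter(4, [150.0, 450.0], btype="bandpass", fs=fs, output="sos")  # assumed pass-band

def preprocess(blocks, n_components=8):
    """Band-pass filter each block, then keep only its dominant uncorrelated directions."""
    filtered = np.array([sosfiltfilt(sos, b) for b in blocks])
    pca = PCA(n_components=n_components)
    return pca.fit_transform(filtered), pca

# blocks: a (n_blocks, N) array of received-sample windows (synthetic placeholder here)
blocks = np.random.default_rng(1).normal(size=(100, 441))
features, pca_model = preprocess(blocks)
```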
III. MACHINE LEARNING BASED SENSING FRAMEWORK

In this Section, the proposed framework of the ML algorithms used for spectrum sensing is introduced. A block scheme of the transceiver is shown in Fig. 1. In the algorithm-training block, the collected information about the channel in the network is used to design the classification rules. This sort of information is usually provided by the base station of a network or can be obtained by studying the whole system. In the receiver part, the trained patterns are used in order to detect the signals. The four chosen ML algorithms are described in the following.

A. Design of random forest classifier

Tin Kam Ho created the first random decision forest algorithm in 1995 [18]. This method belongs to the family of ensemble-learning algorithms. The decision tree, a supervised learning algorithm, is the building block of the random forest. A random forest builds several decision trees and merges them together to make more accurate and stable predictions. Instead of searching for the most important features when splitting a node, the algorithm looks for the best features among a random subset of features [19]. This leads to a lot of variety and ultimately a better model. Thus, in a random forest, only one subset of features is considered by the algorithm to split a node. There are many approaches to select a pruning method used for decision tree induction. The number of trees of the random forest is equal to one hundred in this article. The Gini index is chosen as the attribute selection measure in decision tree induction; it is a measure of inequality by means of a relative analysis. The Gini index is determined as [20]:

$\mathrm{Gini}(T) = \sum_{j \neq i} \frac{f(C_i, T)}{|T|} \cdot \frac{f(C_j, T)}{|T|}$    (4)

where $T$ is a given training set, $C_i$ is a random class and $f(C_i, T)/|T|$ is the probability that a selected case belongs to $C_i$. The lowest value of the Gini index indicates the purity of the classification.
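A minimal sketch of this classifier is given below; the labelled feature matrix is a synthetic placeholder (not the paper's dataset), while the tree count and splitting criterion follow the values stated above. The small helper shows the node-impurity measure of Eq. (4), which equals one minus the sum of squared class probabilities.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gini_impurity(labels):
    """Gini impurity of a label vector: sum over i != j of p_i * p_j, as in Eq. (4)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# Placeholder sensing features: label 0 stands for H0 (noise only), 1 for H1 (signal present)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (300, 8)), rng.normal(1.0, 1.0, (300, 8))])
y = np.repeat([0, 1], 300)

forest = RandomForestClassifier(n_estimators=100, criterion="gini")  # one hundred trees, Gini splits
forest.fit(X, y)
scores_rf = forest.predict_proba(X)[:, 1]   # detection scores later used to trace ROC curves
```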

Fig. 1. TX/RX chain with training and test of ML models.

B. Design of naïve Bayes

In the field of ML, naïve Bayes classifiers, which use Bayes' theorem and assume independence between the variables, belong to the family of probabilistic classifiers [21-22]. The estimation of the model parameters is possible by maximizing a likelihood function (likelihood maximization), and the naïve Bayes classification technique uses the Bayesian theorem to separate the probabilities. By supposing $X = (x_1, \ldots, x_n)$ a vector of attributes, which are independent variables, the probability of $C_k$ occurring, i.e. $P(C_k \mid x_1, \ldots, x_n)$, can be represented for each of the different event classes $k$, as shown below:

$P(C_k \mid X) = \dfrac{P(C_k)\, P(X \mid C_k)}{P(X)}$    (5)

Thus, to calculate the probability $P(C_k \mid x_1, \ldots, x_n)$, it is enough to use the joint probability and simplify it with the help of conditional probability according to the independence of the variables. As a result, the probability of an observation belonging to group $C_k$ with respect to the observations $X$ is determined according to the following relation:

$P(C_k \mid x_1, \ldots, x_n) = \frac{1}{Z}\, P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k)$    (6)

Here the probability of the evidence (observations) is considered as $Z = P(X) = \sum_k P(C_k)\, P(X \mid C_k)$. It is clear that $Z$ depends on the evidence and the observations $x_1, \ldots, x_n$. With the help of a decision rule, we create and complete the Bayesian classifier. One of the most basic decision rules is choosing the most probable hypothesis: among the various decisions, we determine what is most likely to happen based on the gathered evidence. This rule is called the maximum a posteriori (MAP) rule. Thus, the Bayesian classification can be considered as a function of the $C_k$ decisions, which is estimated by the decision $\hat{y}$. The maximization of this function is shown as follows:

$\hat{y} = \underset{k \in \{1, \ldots, K\}}{\mathrm{argmax}}\; P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k)$    (7)

As a result, the parameters can be calculated or estimated according to different distributions that may describe a random sample, i.e. $P(x_i \mid C_k)$. In this article, the observations and data are continuous, and a probabilistic model with a Gaussian (normal) distribution for the evidence-related variables is used. In this case, each category or group has a Gaussian distribution. In this way, for the $K$ categories or classes, the mean and the variance of each category are computed and their normal distribution parameters are estimated. We assume that $\mu_k$ is the mean and $\sigma_k^2$ is the variance of the $k$-th category, i.e. $C_k$. We also consider $v$ as an observation of the random variable $x$. Since the distribution of $x$ in each Gaussian category is assumed to be normal, we have:

$P(x = v \mid C_k) = \dfrac{1}{\sqrt{2\pi\sigma_k^2}} \exp\!\left(-\dfrac{(v - \mu_k)^2}{2\sigma_k^2}\right)$    (8)
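As a minimal illustration of this Gaussian naïve Bayes formulation (the feature matrix and class labels below are synthetic placeholders, not the paper's dataset), the per-class means and variances of Eq. (8) can be fitted and the MAP decision of Eq. (7) evaluated as follows:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Placeholder sensing features: label 0 stands for H0 (noise only), 1 for H1 (signal present)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (300, 4)), rng.normal(1.5, 1.0, (300, 4))])
y = np.repeat([0, 1], 300)

nb = GaussianNB()                           # fits mu_k and sigma_k^2 per class, Eq. (8)
nb.fit(X, y)
posterior_h1 = nb.predict_proba(X)[:, 1]    # P(H1 | x), thresholded to trade P_d against P_fa
```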
C. Design of gradient boosting machine

This algorithm, like the random forest, belongs to the family of ensemble-learning algorithms. It is a linear combination of a group of weak models that creates a strong and efficient model. In gradient boosting, the function $F_M$ that minimizes a loss function is optimized iteratively over $M$ stages, and is expressed according to the final model below [23]:

$F_M(x) = \sum_{m=1}^{M} \gamma_m h_m(x) + F_0$    (9)

where $F_0$ is the first model, which is a constant, and $\gamma_m$ is the weight dedicated to $h_m(x)$. In other words, $F_M$ is the weighted sum of the $h_m(x)$ and is known as the estimator. In gradient boosting trees, the base models are decision trees. At step $m$, the learned model is a tree $h_m(x)$ that is able to model the negative gradients. The function is updated at each stage in this way:

$F_m(x) = F_{m-1}(x) + \gamma_m h_m(x)$    (10)

Regularization is an important factor to prevent overfitting. Shrinkage is usually used to regularize the model by modifying the above formula as below:

$F_m(x) = F_{m-1}(x) + \nu \cdot \gamma_m h_m(x)$    (11)

where $\nu$, which is always positive and less than unity, is known as the learning rate. The loss function is the deviance in this article. The learning rate and the number of estimators are equal to 0.1 and 100, respectively, in order to achieve an acceptable tradeoff between the computational time and the precision of the model.
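A sketch of this configuration (100 boosting stages, learning rate 0.1, deviance loss) on the same kind of synthetic placeholder data is shown below; it is an illustration rather than the exact training script of the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder sensing features: label 0 stands for H0, 1 for H1
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (300, 4)), rng.normal(1.5, 1.0, (300, 4))])
y = np.repeat([0, 1], 300)

gbm = GradientBoostingClassifier(
    learning_rate=0.1,   # shrinkage factor nu in Eq. (11)
    n_estimators=100,    # number of boosting stages M
)                        # the default loss is the binomial deviance mentioned in the text
gbm.fit(X, y)
scores_gbm = gbm.predict_proba(X)[:, 1]
```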
D. Design of support vector machine

In the support vector machine (SVM) algorithm, each data sample is plotted as a point in an $n$-dimensional space on the data-scatter diagram ($n$ is the number of properties of a given data sample). The value of each data attribute specifies one of the coordinates of the point on the graph; then, by drawing a straight line, the algorithm separates the different and distinct data elements [24]. In other words, the support vector machine is the separator that best separates the data sets.
vector support machine is the one that best separates data sets.

The kernel functions are used to solve nonlinear separation problems. These functions find a transformation by which the data can be separated based on the user-defined tags.

In this paper, the radial basis function (rbf) kernel type is used, as it is useful when the separation boundary is not a straight line. Another important factor of the SVM algorithm is gamma, the kernel coefficient of the rbf. The higher the value of gamma, the more the algorithm tries to fit the training data set exactly, which increases the generalization error and leads to the over-fitting problem. A gamma value equal to 0.5 is used in this work. The cost parameter of the error term controls the balance between smooth decision boundaries and the correct classification of the training data points. The cost parameter is equal to unity to achieve the results presented in the next Section.
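A minimal sketch of this SVM configuration (rbf kernel, gamma equal to 0.5, cost parameter equal to 1) is given below; the training data are again synthetic placeholders standing in for labelled sensing features.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder sensing features: label 0 stands for H0, 1 for H1
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (300, 4)), rng.normal(1.5, 1.0, (300, 4))])
y = np.repeat([0, 1], 300)

svm = SVC(kernel="rbf", gamma=0.5, C=1.0, probability=True)   # settings stated in the text
svm.fit(X, y)
scores_svm = svm.predict_proba(X)[:, 1]   # detection scores used for the ROC comparison
```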
IV. SIMULATIONS AND RESULTS

The transceiver shown in Fig. 1 is simulated with Python. After generating a ten-bit random binary signal with a symbol rate of 300 Bd, the signal is sampled at a rate of 44.1 kHz. Then, a pulse shaping follows an amplitude-shift keying modulation included in a digital front-end block. The channel model has been realized by the simulation of an AWGN channel with several levels of SNR: -20, -15, -10, 0, 10 and 15 dB. In the first stage of signal reception, a fourth-order Butterworth band-pass filter is applied to reduce the noise of the received signal. However, if we leverage a classic criterion, like a Neyman-Pearson detector [25], in order to define the threshold, the filter will not only be useless but will also cause a lower detection rate. This is due to the impact of the signal noise on the threshold computation. As a result, the knowledge of the channel noise is not sufficient to compute the threshold, and it is necessary to re-compute the noise level after filtering.
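A condensed sketch of this simulated chain is shown below; rectangular pulses stand in for the pulse-shaping block and the filter band edges are assumptions, so the snippet illustrates the structure rather than reproducing the exact simulator.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, baud = 44100, 300                       # sampling rate and symbol rate from the text
sps = fs // baud                            # samples per symbol (147)
rng = np.random.default_rng(6)

bits = rng.integers(0, 2, 10)               # ten-bit random binary message
ask = np.repeat(bits.astype(float), sps)    # on-off amplitude-shift keying with rectangular pulses

def awgn(signal, snr_db):
    """Add white Gaussian noise at the requested SNR (in dB) relative to the signal power."""
    p_sig = np.mean(signal ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

rx = awgn(ask, snr_db=-10)                  # one of the simulated SNR levels
sos = butter(4, [150.0, 450.0], btype="bandpass", fs=fs, output="sos")  # assumed band edges
rx_filtered = sosfiltfilt(sos, rx)
```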
In Fig. 2, the probability $P_d$ is traced for different values of $P_{fa}$, which is known as the receiver operating characteristic (ROC) curve. The weakness of this classic detection method is its lack of flexibility when a new element, for example here a filter, is added, as shown in Fig. 2.

By replacing the Neyman-Pearson criterion with the ML algorithms, the results shown in Fig. 3 are achieved. The algorithms are trained before creating the models, which are then used to predict the new data, as shown in Fig. 1. The ROC curves for an SNR of -10 dB before and after filtering are shown in Fig. 3. Thanks to the flexible ML algorithms, a clear enhancement brought by the filtering process is observed. The areas under curve (AUC) of the ROC curves are summarized in Table I. A significant difference is observed for the three lowest SNRs: their AUC values are enhanced with the filter from 0.49, 0.56 and 0.74 to 0.72, 0.89 and 0.91 (results for the naïve Bayes algorithm) for SNRs of -20, -10 and 0 dB, respectively.
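The ROC curves and AUC values used in this comparison can be obtained as sketched below, here shown for one classifier on synthetic placeholder blocks.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.naive_bayes import GaussianNB

# Placeholder labels and features standing in for the simulated H0/H1 blocks
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 1.0, (300, 4)), rng.normal(1.0, 1.0, (300, 4))])
y = np.repeat([0, 1], 300)
scores = GaussianNB().fit(X, y).predict_proba(X)[:, 1]

fpr, tpr, thresholds = roc_curve(y, scores)   # P_fa versus P_d for every decision threshold
print("AUC:", auc(fpr, tpr))
```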
According to the results shown in Table I, the random forest algorithm has the lowest AUC for all SNRs among the four selected ML algorithms. This may be caused by overfitting, which is one of the predominant problems in ML and the main limitation of the random forest, where a large number of trees can make the algorithm inaccurate and ineffective for real-world predictions. In general, the training of this algorithm is fast, but the prediction, which occurs after the training of the model, is slow. A more accurate prediction requires more trees, which leads to a slower model. The gradient boosting algorithm has the second lowest AUC. This algorithm is greedy and can quickly overfit the training data sets. The naïve Bayes and SVM algorithms have the same efficiency and perform equally well.

Fig. 2. ROC curves achieved from the Neyman-Pearson method for channels with different SNR: (a) before filtering, (b) after filtering.

Fig. 3. ROC curves achieved from machine learning algorithms for a channel with SNR = -10 dB: (a) before filtering, (b) after filtering.

TABLE I. AREA UNDER CURVE FOR SIMULATION RESULTS

                                     Before filtering              After filtering
SNR (dB)                        -20   -10    0    10    15    -20   -10    0    10    15
Random forest classifier       0.48  0.51  0.70  0.89  0.87   0.46  0.80  0.88  0.93  0.93
Naïve Bayes                    0.49  0.56  0.74  0.92  0.93   0.72  0.89  0.91  0.96  0.97
Support vector machine         0.50  0.64  0.75  0.93  0.93   0.71  0.89  0.91  0.96  0.97
Gradient boosting machine      0.49  0.51  0.73  0.91  0.89   0.69  0.86  0.90  0.95  0.94

V. ENERGY DETECTION IMPLEMENTATION USING HACKRF ONE AND GNU RADIO

The ML algorithms were compared practically using a HackRF One as a transmitter and an RTL-SDR as a receiver. The communication system is shown in Fig. 4. The HackRF One and the RTL-SDR are connected to two computers independently. A GNU Radio program draws up the binary data for transmission with the same baud rate and sampling rate as in the simulation, i.e. 300 Bd and 44.1 kHz, respectively. On the other side, the RTL-SDR receives this data, which arrives through a noisy propagation channel, and after an amplitude demodulation step in GNU Radio, the data are stored in a file. A band-pass Butterworth filter and the ML algorithms are then applied by a Python program: the models are first trained with the first part of the saved data and then applied to the second part of this data. The results are shown in Fig. 5.

Fig. 4. Wireless communication at the unlicensed 868 MHz frequency.
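A minimal sketch of this train/test procedure on the recorded capture is given below; the file names, sample format and labelling step are assumptions made for illustration, not the exact scripts used in the experiment.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

samples = np.fromfile("rx_demodulated.bin", dtype=np.float32)   # assumed file written by GNU Radio
sps = 147                                                       # samples per symbol at 44.1 kHz / 300 Bd
blocks = samples[: len(samples) // sps * sps].reshape(-1, sps)
energies = np.sum(blocks ** 2, axis=1).reshape(-1, 1)           # per-block energy feature, Eq. (3)
labels = np.loadtxt("tx_labels.txt", dtype=int)[: len(energies)]  # assumed H0/H1 ground truth

half = len(energies) // 2                                       # first half trains, second half tests
model = GaussianNB().fit(energies[:half], labels[:half])
test_scores = model.predict_proba(energies[half:])[:, 1]
print("AUC:", roc_auc_score(labels[half:], test_scores))
```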
TABLE II. AREA UNDER CURVE FOR EXPERIMENTAL RESULTS

Random forest classifier    Naïve Bayes    Support vector machine    Gradient boosting machine
         0.64                   0.74                0.51                        0.64

In order to be accurate in the comparison of the ML algorithms, the AUCs of the ROC curves shown in Fig. 5 are computed. The values are presented in Table II. As the results show, the propagation channel suffers from a high noise level. By comparing Table II with Table I, it is possible to estimate the SNR to be more than -20 dB and less than -10 dB. As expected from the simulation results, the naïve Bayes algorithm performed better than the random forest and the GBM. However, the experimental results obtained with the SVM are significantly different from the simulation results. This difference may be due to the sensitivity of this algorithm to the compromise between the training errors and the complexity of the model [24], which makes this algorithm unsuitable for large amounts of noisy data.

Fig. 5. ROC curves experimentally achieved from machine learning algorithms.

VI. CONCLUSIONS

The performance of four ML algorithms applied to the field of spectrum sensing is compared in this article. The simulation results show that all the algorithms perform better than the classic Neyman-Pearson detection method. Furthermore, naïve Bayes and SVM obtained more accurate results than GBM and random forest. However, in practice, SVM did not perform as precisely as predicted by the simulation. As a result, naïve Bayes is identified as the most suitable algorithm among the four studied ML algorithms for spectrum sensing. More precise results could be obtained by expanding the scope of the research to cooperative signal detection.
REFERENCES
[1] F. K. Jondral, "Software-Defined Radio—Basics and Evolution to Cognitive Radio," EURASIP J. Wireless Commun. Netw., vol. 2005, Art. no. 652784, 2005.
[2] B. Fette, "Introducing Adaptive, Aware and Cognitive Radios," in Cognitive Radio, Software Defined Radio, and Adaptive Wireless Systems, H. Arslan, Ed. Dordrecht: Springer, 2007, pp. 1-16.
[3] T. Ulversoy, "Software Defined Radio: Challenges and Opportunities," IEEE Communications Surveys & Tutorials, vol. 12, no. 4, pp. 531-550, Fourth Quarter 2010.

[4] J. Mitola, III and G. Q. Maguire, Jr., "Cognitive radio: making software radios more personal," IEEE Personal Commun. Mag., vol. 6, no. 4, pp. 13–18, Aug. 1999.
[5] Federal Communications Commission, "Notice of proposed rule making and order: Facilitating opportunities for flexible, efficient, and reliable spectrum use employing cognitive radio technologies," ET Docket No. 03-108, Feb. 2005.
[6] I. F. Akyildiz, W. Lee, M. C. Vuran and S. Mohanty, "A survey on spectrum management in cognitive radio networks," IEEE Communications Magazine, vol. 46, no. 4, pp. 40-48, April 2008.
[7] E. Axell, G. Leus, E. G. Larsson and H. V. Poor, "Spectrum sensing for cognitive radio: State-of-the-art and recent advances," IEEE Signal Processing Magazine, vol. 29, no. 3, pp. 101-116, May 2012.
[8] S. Haykin, D. J. Thomson and J. H. Reed, "Spectrum sensing for cognitive radio," Proceedings of the IEEE, vol. 97, no. 5, pp. 849-877, May 2009.
[9] T. Yucek and H. Arslan, "A survey of spectrum sensing algorithms for cognitive radio applications," IEEE Communications Surveys & Tutorials, vol. 11, no. 1, pp. 116-130, First Quarter 2009.
[10] V. M. Patil and S. R. Patil, "A survey on spectrum sensing algorithms for cognitive radio," International Conference on Advances in Human Machine Interaction (HMI), 2016.
[11] D. Cabric, S. M. Mishra and R. W. Brodersen, "Implementation issues in spectrum sensing for cognitive radios," Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 772-776, 2004.
[12] A. Ghasemi and E. S. Sousa, "Spectrum sensing in cognitive radio networks: requirements, challenges and design trade-offs," IEEE Communications Magazine, vol. 46, no. 4, pp. 32-39, April 2008.
[13] K. M. Thilina, K. W. Choi, N. Saquib and E. Hossain, "Machine learning techniques for cooperative spectrum sensing in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 31, no. 11, pp. 2209-2221, November 2013.
[14] D. Wang and Z. Yang, "An novel spectrum sensing scheme combined with machine learning," 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1293-1297, 2016.
[15] T. F. Collins, R. Getz, D. Pu and A. M. Wyglinski, Software-Defined Radio for Engineers, 2018.
[16] H. Xue and F. Gao, "A machine learning based spectrum-sensing algorithm using sample covariance matrix," 2015 10th International Conference on Communications and Networking in China (ChinaCom), Shanghai, China, pp. 476-480, 2015.
[17] P. Liton, A. Suman and N. Sultan, "Methodological analysis of principal component analysis (PCA) method," International Journal of Computational Engineering & Management, vol. 16, pp. 32-38, 2013.
[18] Tin Kam Ho, "Random decision forests," Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, vol. 1, pp. 278-282, 1995.
[19] L. Breiman, "Random forests," Machine Learning, vol. 45, pp. 5-32, 2001.
[20] M. Pal, "Random forest classifier for remote sensing classification," International Journal of Remote Sensing, vol. 26, no. 1, pp. 217-222, 2005.
[21] D. D. Lewis, "Naive Bayes at forty: The independence assumption in information retrieval," in Proceedings of the 10th European Conference on Machine Learning, pp. 4–15, New York: Springer, 1998.
[22] A. McCallum and K. Nigam, "A comparison of event models for naive Bayes text classification," in AAAI-98 Workshop on Learning for Text Categorization, 1998.
[23] J. H. Friedman, "Greedy function approximation: A gradient boosting machine," The Annals of Statistics, vol. 29, no. 5, pp. 1189–1232, 2001.
[24] V. Jakkula, "Tutorial on Support Vector Machine (SVM)," School of EECS, Washington State University, Pullman, WA 99164.
[25] Y.-F. Huang, "Statistical Signal Processing," in The Electrical Engineering Handbook, W.-K. Chen, Ed., Academic Press, 2005, pp. 921-932.

