ABSTRACT
1. INTRODUCTION
2. SYSTEM ANALYSIS
3. PROJECT DESCRIPTION
4. SYSTEM DESIGN
5. SYSTEM IMPLEMENTATION
6. CONCLUSION
7. REFERENCES
ADVANCED CLUSTER BASED SPECTRUM SENSING IN BROADBAND COGNITIVE
RADIO NETWORK
ABSTRACT
Due to the rapid growth of new wireless communication services and applications, much
attention has been directed to frequency spectrum resources. Considering the limited radio
spectrum, supporting the demand for higher capacity and higher data rates is a challenging task
that requires innovative technologies capable of providing new ways of exploiting the available
radio spectrum. Cognitive radio (CR), which is among the core prominent technologies for the
next generation of wireless communication systems, has received increasing attention and is
considered a promising solution to the spectral crowding problem. The key issue in applying the
cognitive radio technique successfully is how to sense quickly and accurately whether the
primary user (PU) is present, and how to search for spectrum holes to offer to the secondary user
(SU). In addition, when the SU accesses an available band, it must periodically monitor this band
to account for sudden reappearances of the PUs. This inherently limits the throughput of
the SUs, or at least degrades the quality of service (QoS) if QoS is guaranteed at all. In this thesis, we
focus on the advanced cluster based spectrum sensing (ACBSS) algorithm, which combines a
hierarchical data-fusion scheme with joint compressive reconstruction. To validate its
efficiency and effectiveness, we compare ACBSS with independent compressive sensing
(ICS) and joint compressive sensing (JCS) in terms of detection probability, false-alarm probability,
and algorithm execution time under different SNRs and compression ratios.
While the majority of existing work has focused on single-band cognitive radio, multiband cognitive
radio holds great promise for implementing efficient cognitive networks compared to
single-band networks. This has primarily motivated the introduction of the multiband cognitive
radio (MB-CR) paradigm, also referred to as wideband CR. By enabling SUs to
simultaneously sense and access multiple channels, this paradigm promises significant
enhancements to the network's throughput. In addition, it helps provide seamless handoff from
band to band, which improves link maintenance and reduces data transmission interruptions.
1. INTRODUCTION
Research on wireless ad hoc networks has been ongoing for decades. The history of
wireless ad hoc networks can be traced back to the Defense Advanced Research Projects Agency
(DARPA) packet radio network (PRNet), which evolved into the survivable adaptive radio
networks (SURAN) program. Ad hoc networks have played an important role in military
applications and related research efforts, for example, the global mobile information systems
(GloMo) program and the near-term digital radio (NTDR) program. Recent years have seen a
new spate of industrial and commercial applications for wireless ad hoc networks, as viable
communication equipment and portable computers have become more compact and available.
Denial of service by server resource exhaustion has become a major security threat in
open communications networks. Public-key authentication does not completely protect against
such attacks, because authentication protocols often leave ways for an
unauthenticated client to consume a server's memory space and computational resources by
initiating a large number of protocol runs and inducing the server to perform expensive
cryptographic computations.
A solution to such threats is to authenticate the client before the server commits any
resources to it. The authentication itself, however, creates new opportunities for DoS attacks,
because authentication protocols usually require the server to store session-specific state data,
such as nonces, and to compute expensive public-key operations. One solution is to begin with a
weak but inexpensive authentication, and to apply stronger and costlier methods only after the
less expensive ones have succeeded. An example of weak authentication is the SYN-cookie
protection against SYN flooding attacks, where the client's return address is verified not to be
fictional by sending the client a nonce that it must return in its next message. This strategy is not
entirely unproblematic, because gradually strengthening authentication results in longer
protocol runs with more messages, and the security of the weak authentication mechanisms may
be difficult to analyze.
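The SYN-cookie style of weak authentication described above can be sketched with a keyed hash: the server derives the nonce from the client's claimed return address, so it stores no per-client state until the client proves it can receive traffic at that address. The function names and HMAC construction below are illustrative assumptions, not a specific protocol:

```python
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(16)  # server-side secret; rotated periodically in practice

def make_cookie(client_addr: str) -> str:
    # Stateless nonce: derived from the claimed return address, so the server
    # commits no memory until the client echoes the cookie back.
    return hmac.new(SERVER_KEY, client_addr.encode(), hashlib.sha256).hexdigest()

def verify_cookie(client_addr: str, cookie: str) -> bool:
    # Weak authentication: only confirms the return address is not fictional.
    return hmac.compare_digest(make_cookie(client_addr), cookie)

# The server sends the cookie to the claimed address; only a client that
# actually receives traffic there can return it in its next message.
c = make_cookie("192.0.2.7")
assert verify_cookie("192.0.2.7", c)
assert not verify_cookie("198.51.100.9", c)
```

Only after this inexpensive check succeeds would the server begin the expensive public-key operations.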
The convenience of 802.11-based wireless access networks has led to widespread
deployment in the consumer, industrial, and military sectors. However, this use is predicated on
an implicit assumption of confidentiality and availability. While the security flaws in 802.11's
basic confidentiality mechanisms have been widely publicized, the threats to network availability
are far less widely appreciated. In fact, it has been suggested that 802.11 is highly susceptible to
malicious denial-of-service (DoS) attacks targeting its management and media access
protocols. This paper provides an experimental analysis of such 802.11-specific attacks:
their practicality, their efficacy, and potential low-overhead implementation changes to mitigate
the underlying vulnerabilities.
On wireless computer networks, ad-hoc mode is a method for wireless devices to
communicate directly with each other. Operating in ad-hoc mode allows all wireless devices within range
of each other to discover and communicate in peer-to-peer fashion without involving central
access points (including those built in to broadband wireless routers). To set up an ad-hoc
wireless network, each wireless adapter must be configured for ad-hoc mode rather than the
alternative infrastructure mode. In addition, all wireless adapters on the ad-hoc network must use
the same SSID and the same channel number. An ad-hoc network tends to feature a small group
of devices all in very close proximity to each other. Performance suffers as the number of
devices grows, and a large ad-hoc network quickly becomes difficult to manage. Ad-hoc
networks cannot bridge to wired LANs or to the Internet without installing a special-purpose
gateway. Ad hoc networks make sense when one needs to build a small, all-wireless LAN
quickly and spend the minimum amount of money on equipment. Ad hoc networks also work
well as a temporary fallback mechanism if normally available infrastructure-mode gear (access
points or routers) stops functioning.
With the advance of very large-scale integration (VLSI) and the commercial popularity
of the global positioning system (GPS), the geographic location information of mobile devices in a
mobile ad hoc network (MANET) is becoming available for various applications. This location
information not only provides one more degree of freedom in designing network protocols, but
is also critical for the success of many military and civilian applications. In a MANET, since the
locations of nodes are not fixed, a node needs to frequently update its location information to
some or all other nodes. There are two basic location update operations at a node to maintain its
up-to-date location information in the network. One operation is to update its location
information within a neighboring region, where the neighboring region is not necessarily
restricted to one-hop neighboring nodes. We call this operation neighborhood update
(NU); it is usually implemented by local broadcasting/flooding of location information
messages. The other operation is to update the node's location information at one or multiple
distributed location servers. The positions of the location servers could be fixed.
The set of applications for MANETs is diverse, ranging from small, static networks that
are constrained by power sources, to large-scale, mobile, highly dynamic networks. The design
of network protocols for these networks is a complex issue. Regardless of the application,
MANETs need efficient distributed algorithms to determine network organization, link
scheduling, and routing. However, determining viable routing paths and delivering messages in a
decentralized environment where network topology fluctuates is not a well-defined problem.
While the shortest path (based on a given cost function) from a source to a destination in a static
network is usually the optimal route, this idea is not easily extended to MANETs. Factors such as
variable wireless link quality, propagation path loss, fading, multiuser interference, power
expended, and topological changes, become relevant issues. The network should be able to
adaptively alter the routing paths to alleviate any of these effects. Moreover, in a military
environment, security, latency, reliability, resistance to intentional jamming, and recovery
from failure are significant concerns. Military networks are designed to maintain a low
probability of intercept and/or a low probability of detection. Hence, nodes prefer to radiate as
little power as necessary and transmit as infrequently as possible, thus decreasing the probability
of detection or interception. A lapse in any of these requirements may degrade the performance
and dependability of the network.
To the best of our knowledge, the location update problem in MANETs has not been formally
addressed as a stochastic decision problem. Theoretical work on this problem is also very
limited. Prior authors have analyzed the optimal location update strategy in a hybrid position-based
routing scheme, in terms of minimizing the achievable overall routing overhead. Although a closed-form
optimal update threshold is obtained there, it is valid only for their routing scheme. In
contrast, our analytical results can be applied in much broader application scenarios, as the cost
model used is generic and holds in many practical applications. On the other hand, the location
management problem in mobile cellular networks has been extensively investigated in the
literature, where the tradeoff between the location update cost of a mobile device and the paging
cost of the system is the main concern.
Radio frequency (RF) spectrum is a valuable but tightly regulated resource due to its
unique and important role in wireless communications. With the proliferation of wireless
services, the demands for the RF spectrum are constantly increasing, leading to scarce spectrum
resources. On the other hand, it has been reported that localized temporal and geographic
spectrum utilization is extremely low [1]. Currently, new spectrum policies are being developed
by the Federal Communications Commission (FCC) that will allow secondary users to
opportunistically access a licensed band, when the primary user (PU) is absent. Cognitive radio
[2], [3] has become a promising solution to solve the spectrum scarcity problem in the next
generation cellular networks by exploiting opportunities in time, frequency, and space domains.
Cognitive radio is an advanced software-defined radio that automatically detects its surrounding
RF stimuli and intelligently adapts its operating parameters to network infrastructure while
meeting user demands. Since cognitive radios are considered secondary users of the
licensed spectrum, a crucial requirement of cognitive radio networks is that they must efficiently
exploit under-utilized spectrum (denoted as spectral opportunities) without causing harmful
interference to the PUs. Furthermore, PUs have no obligation to share or change their operating
parameters for sharing spectrum with cognitive radio networks. Hence, cognitive radios should
be able to independently detect spectral opportunities without any assistance from PUs; this
ability is called spectrum sensing, which is considered one of the most critical components of
cognitive radio networks. Many narrowband spectrum sensing algorithms have been studied in
the literature [4] and references therein, including matched-filtering, energy detection [5], and
cyclostationary feature detection. While present narrowband spectrum sensing algorithms have
focused on exploiting spectral opportunities over a narrow frequency range, cognitive radio
networks will eventually be required to exploit spectral opportunities over a wide frequency range,
from hundreds of megahertz (MHz) to several gigahertz (GHz), to achieve higher
opportunistic throughput.
This is driven by Shannon's famous formula: under certain conditions, the
maximum theoretically achievable bit rate is directly proportional to the spectral bandwidth.
Hence, different from narrowband spectrum sensing, wideband spectrum sensing aims to find
more spectral opportunities over a wide frequency range and achieve higher opportunistic
aggregate throughput in cognitive radio networks. However,
conventional wideband spectrum sensing techniques based on a standard analog-to-digital
converter (ADC) can lead to an unaffordably high sampling rate or implementation complexity;
thus, revolutionary wideband spectrum sensing techniques become increasingly important. In the
remainder of this article, we first briefly introduce the traditional spectrum sensing algorithms for
narrowband sensing in Section II. Some challenges for realizing wideband spectrum sensing are
then discussed in Section III. In addition, we categorize the existing wideband spectrum sensing
algorithms based on their implementation types, and review the state-of-the-art techniques for
each category. Future research challenges for implementing wideband spectrum sensing are
subsequently identified in Section IV, after which concluding remarks are given.
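Shannon's formula referenced above, C = B log2(1 + SNR), makes the bandwidth-throughput link concrete: at a fixed SNR, doubling the usable bandwidth doubles the achievable bit-rate ceiling, which is the incentive for sensing wider spectrum. A small numeric sketch (the bandwidth and SNR values are arbitrary):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    # C = B * log2(1 + SNR): achievable bit rate grows linearly with bandwidth.
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Doubling the sensed bandwidth at a fixed SNR doubles the capacity ceiling.
narrow = shannon_capacity(5e6, 10.0)   # 5 MHz band
wide = shannon_capacity(10e6, 10.0)    # 10 MHz band
assert abs(wide - 2 * narrow) < 1e-6
```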
1.1 PROBLEM DEFINITION
Due to multipath fading, shadowing, and varying channel conditions, uncertainty affects
all the cognitive radio processes. Measurements taken by the SUs during the sensing process are
uncertain. Decisions are made based on what has already been observed using the SU's
knowledge base, which may itself have been impacted by uncertainty. This can lead to wrong
decisions, and thus the cognitive radio system can take wrong actions. Uncertainty
propagation therefore influences cognitive radio performance, and mitigating it is a necessity.
1.2 SCOPE OF WORK
The concept of cognitive radio (CR) has been proposed in [1]. The fundamental concept of
CR relies on spectrum sensing and re-configurability. Two kinds of users can be
classified in CR analysis: one is the primary user (PU) and the other is the secondary user
(SU). PUs denote the users that have the right or license to legally use a specific frequency, sub-carrier,
or frequency band. In CR, if the PU has no data to transmit in the allocated frequency
band, the SU can perform spectrum sensing and transmit its own data in that frequency
band. However, if the PU begins to transmit data, the SU should stop its transmission immediately
to avoid signal collisions with the PU. The SU can then perform spectrum sensing again to
find another white spectrum, or spectrum hole, for data transmission. Therefore,
spectrum sensing is the fundamental and critical issue that must be solved properly when applying
the CR concept in practical implementations. In the literature, three kinds of spectrum sensing
techniques can be classified: energy-based detection, matched-filter detection, and
cyclostationary feature detection. Basically, energy-based detection determines the condition
of the spectrum via the received signal energy. If the energy of the received signal exceeds a
predefined threshold, the SU decides that the frequency is currently occupied by the PU. For the
matched-filter detection method, the SU uses a particular preamble or sequence to determine the
condition of the spectrum. Like the operation of a matched filter (MF), if the correlation
of the received signal with the preamble is greater than a predefined value, the SU
regards the spectrum as busy. Cyclostationary feature detection uses the
inherent cyclic features of the signal to determine the condition of a particular spectrum. For
example, in an OFDM system, the transmitted symbol contains the cyclic prefix (CP), a
duplication of the tail of the transmitted OFDM symbol used to avoid inter-symbol
interference (ISI). Currently, the concept of cooperative sensing has also been proposed to further
enhance the detection probability of spectrum sensing. Among these methods, energy-based
detection is the simplest and easiest to implement. In this paper, an energy-based
spectrum sensing method based on the maximum likelihood (ML) criterion is proposed [1].
The proposed method avoids the troublesome calculation of the required threshold in the
conventional energy-based method and has almost the same performance as the adoption of the
optimal threshold. The paper is organized as follows: Section II describes the conventional
energy-based spectrum sensing and the proposed ML scheme. In addition, an extension of the
ML method with a double threshold (DT) technique is provided to reduce the sensing period.
The performance of the conventional energy-based methods, such as the constant false alarm
rate (CFAR) method and the constant detection rate (CDR) method, and of the proposed ML
scheme is then evaluated.
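The conventional energy-based decision described above reduces to comparing the average received energy with a threshold. The sketch below is a minimal illustration: the fixed margin above the noise floor is an arbitrary assumption standing in for the CFAR/CDR threshold calculations that the proposed ML scheme is designed to avoid, and the PU signal is modeled as an added unit-power component.

```python
import random

def energy_detect(samples, threshold):
    # Declare the band busy if the average sample energy exceeds the threshold.
    energy = sum(x * x for x in samples) / len(samples)
    return energy > threshold

random.seed(1)
n = 4096
noise_var = 1.0
# Illustrative fixed threshold: a margin above the noise floor. In CFAR/CDR
# schemes this value is derived from a target false-alarm or detection rate.
threshold = 1.1 * noise_var

noise_only = [random.gauss(0, 1) for _ in range(n)]
# Simulate a present PU by adding an independent unit-power component,
# roughly doubling the average received energy.
with_pu = [x + random.gauss(0, 1) for x in noise_only]

assert not energy_detect(noise_only, threshold)  # band reported free
assert energy_detect(with_pu, threshold)         # band reported busy
```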
Compressive sensing has been proposed as an alternative solution to scan the spectrum
by reducing the sensing time and the sampling rate [2]. With compressive sensing, high-dimensional
sparse signals are acquired by extracting only a few samples that reflect the main
information of the signal. This extraction is performed with the help of a sensing matrix, and the
original signal is then recovered with a recovery algorithm. In this paper, we distinguish between
one-bit compressive sensing, in which each measurement is quantized to one bit, and multi-bit
compressive sensing with M measurements, where M is less than the number of signal samples.
Multi-bit compressive sensing denotes the conventional compressive sensing. It can sample
high-dimensional signals by acquiring only a few measurements rather than the whole signal.
However, multi-bit compressive sensing still faces some problems in acquiring high-dimensional
signals under noise uncertainty. Techniques with a high recovery rate require high processing
time and complexity, while fast techniques are not efficient, which leads to a tradeoff between
recovery rate and processing time. Recently, one-bit compressive sensing has been proposed to
overcome the multi-bit compressive sensing limitations and enhance signal sampling efficiency.
One-bit compressive sensing can recover sparse signals by using only the sign of each
measurement, an extreme form of quantization. Preserving only the sign of the measurements is
considered the extreme case of sampling.
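The distinction between the two categories can be sketched at the acquisition stage alone: both share the projection y = Φx, and the one-bit variant keeps only sign(y). A minimal illustration with an assumed Gaussian sampling matrix and arbitrary dimensions (no recovery step shown):

```python
import random

random.seed(0)
N, M = 64, 16  # signal length and number of measurements, M << N

# Sparse test signal: a few nonzero coefficients (positions/values arbitrary).
x = [0.0] * N
for idx, val in [(5, 1.5), (20, -2.0), (41, 0.8)]:
    x[idx] = val

# Random Gaussian sampling matrix (M x N).
phi = [[random.gauss(0, 1) for _ in range(N)] for _ in range(M)]

# Multi-bit (conventional) compressive measurements: real-valued inner products.
y_multi = [sum(phi[m][n] * x[n] for n in range(N)) for m in range(M)]

# One-bit compressive measurements: keep only the sign of each inner product.
y_onebit = [1 if ym >= 0 else -1 for ym in y_multi]

assert len(y_multi) == M and len(y_onebit) == M
assert all(b in (-1, 1) for b in y_onebit)
```

The one-bit vector discards all amplitude information, which is exactly why its recovery guarantees and noise behavior differ from the multi-bit case compared in this work.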
A few papers comparing the efficiency of one-bit compressive sensing for spectrum
sensing applications have been published. These papers [2] did not consider sufficient
parameters for the performance analysis. The simulations were limited, and the results were not
compared to those of conventional compressive sensing as an alternative solution. Moreover,
to the best of our knowledge, a performance comparison between compressive sensing categories
has not been investigated before. Thus, there is a great need for a comparison between the
efficiencies of the compressive sensing categories under certain conditions for cognitive radio
networks. These conditions need to be identified by simulating a number of parameters that
cover the most important aspects of signal sampling and recovery performance. In this paper, we
analyze both compressive sensing categories and compare their efficiency using the recovery
SNR, recovery error, Hamming distance, processing time, and complexity.
SNR estimation methods can be classified into two categories: data-aided and non-data-aided
approaches. Data-aided estimation techniques require information about the properties
of the transmitted data sequences (pilots). These techniques are normally able to provide an
accurate estimate of SNR. However, in time-varying channels, they need to employ larger pilot
information to enable the receiver to track the channel variations. This type of approach leads
to excessive overhead, imposing an undesired capacity loss on the system [3]. On the other hand,
non-data-aided algorithms estimate the SNR without impacting the channel capacity. These
techniques do not need any knowledge of the transmitted data sequence characteristics.
Techniques of this type extract and analyze the inherent characteristics of the received signal to
estimate the noise and signal powers. Examples of data-aided and non-data-aided methods are
those described by Pauluzzi et al., such as the split-symbol moment estimator, maximum
likelihood estimator, squared signal-to-noise variance estimator, second- and fourth-order
moment estimator, and low-bias negative-SNR estimator. One of the non-data-aided
methods is a technique based on the eigenvalues of the covariance matrix formed from the
received signal samples, proposed by Hamid et al. [3]. This method initially detects the
eigenvalues as in [3] and employs the minimum description length (MDL) criterion to separate
the signal and noise eigenvalues. It is a blind technique in the sense that it does not
have any knowledge of the transmitted signal and noise; the SNR is estimated merely from
the received signal samples.
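The eigenvalue idea can be illustrated on a toy model: a common (rank-one) signal across K receive branches plus unit-variance noise gives one large covariance eigenvalue and K-1 noise-level eigenvalues. The sketch below sidesteps the MDL criterion by simply assuming a single signal eigenvalue, so it illustrates the principle rather than the method of [3]; all dimensions and the power-iteration eigensolver are our own choices.

```python
import random

def sample_covariance(snapshots):
    # R[i][j] = average over snapshots of x[i] * x[j].
    K, L = len(snapshots[0]), len(snapshots)
    return [[sum(s[i] * s[j] for s in snapshots) / L for j in range(K)]
            for i in range(K)]

def largest_eigenvalue(R, iters=200):
    # Power iteration: dominant eigenvalue of a symmetric PSD matrix.
    K = len(R)
    v, lam = [1.0] * K, 0.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(K)) for i in range(K)]
        lam = sum(x * x for x in w) ** 0.5
        v = [x / lam for x in w]
    return lam

random.seed(3)
K, L = 4, 20000
snr_true = 4.0                 # linear per-branch SNR
amp = snr_true ** 0.5
snapshots = []
for _ in range(L):
    s = amp * random.gauss(0, 1)            # common signal component
    snapshots.append([s + random.gauss(0, 1) for _ in range(K)])

R = sample_covariance(snapshots)
lam_max = largest_eigenvalue(R)             # signal eigenvalue ~ K*sig2 + 1
trace = sum(R[i][i] for i in range(K))
noise_est = (trace - lam_max) / (K - 1)     # average of the noise eigenvalues
snr_est = (lam_max - noise_est) / (K * noise_est)
assert abs(snr_est - snr_true) < 0.5        # blind estimate near the true SNR
```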
The selection criteria for these parameters are based on a number of factors, such as the type of
application, channel condition, and hardware limitations. It is obvious that this estimator could be
more efficient if an algorithm could dynamically optimize these parameters according to the
particular situation. One possible solution is the use of evolutionary optimization algorithms such as
particle swarm optimization (PSO) and genetic algorithms. These techniques, as the name
implies, mimic the pattern of biological evolution, evolving and iterating repeatedly to find the
optimum of an objective function corresponding to a specific situation. Some of these
algorithms, such as the genetic algorithm, are application-dependent and require selecting
appropriate initialization values to converge at a steady rate [3]. However, the PSO algorithm does
not rely on a specific single-variable initialization and is less complex.
Furthermore, since this method is based on subspace decomposition of the signal, it
requires less processing time and is more accurate. However, this technique is highly dependent
on: 1) the number of received samples, 2) the number of eigenvalues, and 3) the
Marchenko-Pastur distribution size. Therefore, in this paper, we propose the use of the PSO
algorithm to optimize the operation of the eigenvalue-based SNR estimator in [3]. First, we
define an objective function for the PSO algorithm, namely the goodness of fit of the two
distributions involved in the SNR estimation process. This objective function is a function of the
number of samples, the number of eigenvalues, and the distribution size. Then, we apply the PSO
algorithm to dynamically optimize these parameters. To assess the efficiency of the proposed
method, we compare the true SNR with the SNR estimated using both the PSO-based and the
original SNR estimation techniques, and find the error between them.
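A minimal PSO loop of the kind proposed here can be sketched as follows. The sphere objective stands in for the goodness-of-fit objective (which depends on the estimator's internals), and the weights w, c1, c2 are common textbook defaults rather than tuned values:

```python
import random

def pso(objective, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0)):
    # Minimal particle swarm minimizer: each particle remembers its personal
    # best; the swarm shares a global best; velocity blends inertia (w),
    # cognitive pull (c1, toward personal best), social pull (c2, global best).
    random.seed(7)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in objective with its minimum at (1, 2).
sphere = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best, best_val = pso(sphere, dim=2)
assert best_val < 1e-3
```

In the proposed scheme, the particle position would encode the number of samples, number of eigenvalues, and distribution size, and the objective would score the resulting distribution fit.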
Cognitive radio has been proposed to overcome the spectrum scarcity issue and enable
dynamic spectrum utilization. It allows unlicensed users, the secondary users (SUs), to use the free
spectrum when the owner, the primary user (PU), is absent during a period of time. A cognitive
radio system performs communication through a three-process cycle: spectrum sensing, decision-making,
and taking action [4]. Spectrum sensing enables SUs to detect available channels to use
for their transmissions without interfering with the PU signals. Examples of spectrum sensing
techniques include energy, autocorrelation, and Euclidean distance based detection [4]. Energy
detection operates by comparing the energy of the SU received signals with a threshold. Despite
its simplicity, it cannot differentiate between the noise and the signal, which makes it inefficient
and inaccurate [4]. Autocorrelation based detection consists of comparing the autocorrelation
function of the SU received samples at lag one to the autocorrelation function of the samples at
lag zero. If the two values are close, the PU signal is present; if not, it is absent. This
technique can distinguish the signal from the noise, but it does not account for the internal thermal
noise [4]. Euclidean distance based detection consists of computing the Euclidean distance
between the autocorrelation function and a reference line corresponding to the internal noise of
the communication device. These values are then compared with a threshold to decide on
the availability of the channel [4]. Spectrum scanning techniques allow the measurement
of spectrum occupancy over time and frequency. A number of spectrum occupancy surveys
have been conducted around the world, whether for wideband or for licensed frequency bands.
Examples of these surveys are presented and discussed below. One survey conducted a spectrum
occupancy study over a narrow range of frequencies using a low noise amplifier followed by a
spectrum analyzer; energy detection was adopted as the simplest spectrum sensing technique,
with a fixed threshold and 1% false alarm. In [4], a spectrum measurement survey was performed
over a short frequency range to measure the spectrum occupancy and identify the free bands at
different locations using energy detection with a predefined threshold. Another campaign
measured the spectrum occupancy over space, time, and frequency simultaneously; the survey
was performed using the power spectral density of the measured spectrum compared to a
constant threshold. Similarly, energy detection has been employed for spectrum scanning over a
long frequency range with a costly setup. In [4], the authors proposed a spectrum survey using
Euclidean distance based detection with SDR units. The measurements were performed and
compared to those of energy detection and autocorrelation based detection.
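The autocorrelation-based rule described above can be sketched directly: white noise has a near-zero lag-one autocorrelation, while a correlated PU signal keeps the lag-one value close to the lag-zero value. The ratio threshold and the slow-sinusoid signal model below are illustrative assumptions:

```python
import math
import random

def autocorr(x, lag):
    # Sample autocorrelation of the sequence x at the given lag.
    n = len(x)
    return sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)

def autocorr_detect(x, ratio_threshold=0.3):
    # PU declared present when the lag-1 autocorrelation is a sizeable
    # fraction of the lag-0 value; white noise alone gives a ratio near 0.
    return abs(autocorr(x, 1)) / autocorr(x, 0) > ratio_threshold

random.seed(2)
n = 8000
noise = [random.gauss(0, 1) for _ in range(n)]

# Correlated PU signal: a slow sinusoid (adjacent samples nearly equal)
# observed in light noise.
pu = [math.sin(2 * math.pi * 0.01 * i) + 0.3 * random.gauss(0, 1)
      for i in range(n)]

assert not autocorr_detect(noise)  # channel reported free
assert autocorr_detect(pu)         # channel reported busy
```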
In this paper, we propose to adopt compressive sensing at the SU receiver before
performing the sensing. The wideband spectrum scanning is performed on the compressed
signals instead of capturing the whole signals and then performing the spectrum sensing process.
Compressive sensing is a promising solution for the next generation of wireless communication
networks, enabling signal acquisition at a lower rate than the Nyquist rate.
In this work, compressive sensing is performed only at the sampling stage, without
recovering the signal: it is not necessary to recover the compressed signal to fulfill our main
objective, which is performing the spectrum scanning on the sampled received signals rather than
the whole signal. The part of the signal removed by the matrix sampling includes only null
coefficients, based on the signal sparsity, and the main information is kept for the spectrum
sensing. This compressive sensing approach is known as compressive signal processing,
and it allows solving signal processing problems directly on the compressed signals.
Compressive sensing has been proposed as a low-cost solution to speed up the scanning
process and reduce the computational complexity. It involves three main processes: sparse
representation, encoding, and decoding. During the first process, the signal S is projected onto a
sparse basis. During the second process, S is multiplied by a sampling matrix of M x N elements
to extract M samples from the N samples of the signal S, where M << N. In the last process, the
signal is reconstructed from the few M measurements [5]. For the encoding process, a number of
sampling matrices have been proposed in the literature, including random matrices, circulant
matrices [5], Toeplitz matrices, and deterministic matrices. Because of their simplicity, most
interest has been paid to random matrices.
These matrices are randomly generated with independent and identically distributed
(i.i.d.) elements drawn from, for example, Gaussian or Bernoulli distributions. In general,
compressive sensing requires that the sampling matrix satisfy the Restricted Isometry Property
(RIP). The RIP is a characteristic of near-orthonormal matrices bounded by a Restricted Isometry
Constant (RIC), a positive number between 0 and 1 [5]. This condition
guarantees the uniqueness of the reconstructed solution during the decoding process. Random
matrices satisfy the RIP condition with small RIC [5]. However, these
matrices require a great deal of processing time and high memory capacity to store the matrix
coefficients [5]. Because of the randomness, the results are uncertain, which can make the signal
reconstruction inefficient.
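The RIP can be illustrated empirically: a random +/-1 matrix scaled by 1/sqrt(M) approximately preserves the norm of a sparse vector, which is the (1 - d)||x|| <= ||Φx|| <= (1 + d)||x|| behavior the property formalizes. The dimensions, sparsity, and Bernoulli construction below are illustrative choices:

```python
import random

random.seed(4)
N, M, K = 256, 96, 5  # ambient dimension, measurements, sparsity

# Scaled Bernoulli (+/- 1/sqrt(M)) sampling matrix; i.i.d. matrices of this
# kind are known to satisfy the RIP with high probability for suitable M.
phi = [[random.choice((-1.0, 1.0)) / M ** 0.5 for _ in range(N)]
       for _ in range(M)]

# Random K-sparse vector.
x = [0.0] * N
for idx in random.sample(range(N), K):
    x[idx] = random.gauss(0, 1)

# Compressed measurements y = phi @ x.
y = [sum(phi[m][n] * x[n] for n in range(N)) for m in range(M)]

norm_x = sum(v * v for v in x) ** 0.5
norm_y = sum(v * v for v in y) ** 0.5
# Empirical RIP-style check: the measurement roughly preserves the norm.
delta = abs(norm_y - norm_x) / norm_x
assert delta < 0.5
```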
For the decoding process, a number of algorithms that exploit the sparsity of
signals have been proposed in the literature. A sparse signal can be estimated from a few
measurements by solving the underdetermined system using three different types of algorithms:
iterative relaxation, greedy, and Bayesian models. The iterative relaxation category includes
techniques that solve the underdetermined system using linear programming. Techniques
classified under this category include ℒ1 norm minimization, known as basis pursuit, gradient
descent, and iterative thresholding. Greedy algorithms select a local optimum at each step in
order to find the global optimum, which corresponds to the estimated signal coefficients.
Examples of techniques classified under this category are matching pursuit, orthogonal matching
pursuit, and stagewise orthogonal matching pursuit. Bayesian compressive sensing algorithms
use a Bayesian model to estimate the unknown parameters in order to deal with
uncertainty in the measurements. Examples of techniques classified under this category are the
Bayesian model using relevance vector machine learning, the Bayesian model using Laplace priors,
and the Bayesian model via belief propagation. All these Bayesian-based algorithms have been
used only with random matrices.
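As an example of the greedy category, orthogonal matching pursuit can be sketched in a few lines: pick the atom most correlated with the residual, re-fit by least squares on the chosen support, and repeat. The minimal noiseless illustration below uses arbitrary dimensions and sparsity, and a small Gaussian-elimination solver for the normal equations:

```python
import random

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small linear system.
    n = len(A)
    A = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [A[r][j] - f * A[c][j] for j in range(n + 1)]
    return [A[i][n] / A[i][i] for i in range(n)]

def omp(phi, y, sparsity):
    # Orthogonal Matching Pursuit: greedily add the column most correlated
    # with the residual, then least-squares re-fit y on the chosen support.
    M, N = len(phi), len(phi[0])
    cols = [[phi[m][n] for m in range(M)] for n in range(N)]
    support, residual, coef = [], y[:], []
    for _ in range(sparsity):
        best = max(range(N),
                   key=lambda n: abs(sum(residual[m] * cols[n][m] for m in range(M))))
        if best not in support:
            support.append(best)
        k = len(support)
        A = [[cols[j][m] for j in support] for m in range(M)]
        AtA = [[sum(A[m][i] * A[m][j] for m in range(M)) for j in range(k)] for i in range(k)]
        Aty = [sum(A[m][i] * y[m] for m in range(M)) for i in range(k)]
        coef = solve(AtA, Aty)
        residual = [y[m] - sum(A[m][i] * coef[i] for i in range(k)) for m in range(M)]
    x_hat = [0.0] * N
    for i, n in enumerate(support):
        x_hat[n] = coef[i]
    return x_hat

random.seed(5)
N, M = 64, 32
x = [0.0] * N
x[7], x[30] = 2.0, -1.5  # 2-sparse test signal
phi = [[random.gauss(0, 1) / M ** 0.5 for _ in range(N)] for _ in range(M)]
y = [sum(phi[m][n] * x[n] for n in range(N)) for m in range(M)]
x_hat = omp(phi, y, sparsity=2)
err = max(abs(x_hat[n] - x[n]) for n in range(N))
assert err < 1e-6  # exact support and coefficient recovery in the noiseless case
```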
2. SYSTEM ANALYSIS
Disadvantages
The computation of the threshold used for signal detection is highly susceptible to
unknown and varying noise levels, which results in poor performance in low-SNR environments.
It is not possible to distinguish among different primary users, since energy detectors
cannot discriminate among the sources of the received energy.
Performance depends on noise power estimation.
With the fast development and great richness of wireless radio technology, wireless spectrum
has become one of the most valuable and needed resources. As a dynamic spectrum resource
utilization technology, cognitive radio (CR) has received recent attention around the world. In a
CRN, under the condition of protecting the authorized (primary) user from interference, the
fast and accurate detection of spectrum holes is the crucial premise and key procedure for
improving the spectrum utilization ratio. Currently, most spectrum sensing technology is
narrowband-based, whose detection efficiency is low. This calls for new spectrum sensing
approaches with hard real-time character; wideband spectrum sensing is the direction of
future development.
The proposed system develops an algorithm called ACBSS, which combines a
hierarchical data-fusion scheme with joint compressive reconstruction based on Compressive
Sensing (CS) theory to realize wideband spectrum sensing. The simulation results show that the
ACBSS algorithm can sense the wideband signal accurately and efficiently, and also relieve the
data fusion center from the heavy pressure of computation.
Advantages
The ACBSS algorithm senses the wide-band signal accurately and efficiently.
The head nodes forward only the optimal measurement data, relieving the data fusion
center from the heavy pressure of computation.
HARDWARE AND SOFTWARE REQUIREMENTS
RAM : 1 GB
Middleware Programming : C#
.NET FRAMEWORK
Microsoft® .NET Framework version 1.1 is an integral Windows component that supports
building and running the next generation of applications and XML Web services. The key
components of the .NET Framework are the common language runtime and the .NET
Framework class library, which includes ADO.NET, ASP.NET, and Windows Forms. The .NET
Framework provides a managed execution environment, simplified development and
deployment, and integration with a wide variety of programming languages. A brief
introduction to the architecture of the .NET Framework follows.
The following illustration shows the relationship of the common language runtime and
the class library to your applications and to the overall system. The illustration also shows how
managed code operates within a larger architecture.
The common language runtime manages memory, thread execution, code execution, code
safety verification, compilation, and other system services. These features are intrinsic to the
managed code that runs on the common language runtime.
With regards to security, managed components are awarded varying degrees of trust,
depending on a number of factors that include their origin (such as the Internet, enterprise
network, or local computer). This means that a managed component might or might not be able
to perform file-access operations, registry-access operations, or other sensitive functions, even if
it is being used in the same active application.
The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but cannot
access their personal data, file system, or network. The security features of the runtime thus
enable legitimate Internet-deployed software to be exceptionally feature rich.
In addition, the managed environment of the runtime eliminates many common software
issues. For example, the runtime automatically handles object layout and manages references to
objects, releasing them when they are no longer being used. This automatic memory
management resolves the two most common application errors, memory leaks and invalid
memory references.
The runtime also accelerates developer productivity. For example, programmers can
write applications in their development language of choice, yet take full advantage of the
runtime, the class library, and components written in other languages by other developers. Any
compiler vendor who chooses to target the runtime can do so. Language compilers that target the
.NET Framework make the features of the .NET Framework available to existing code written in
that language, greatly easing the migration process for existing applications.
While the runtime is designed for the software of the future, it also supports software of
today and yesterday. Interoperability between managed and unmanaged code enables developers
to continue to use necessary COM components and DLLs.
The .NET Framework class library is a collection of reusable types that tightly integrate
with the common language runtime. The class library is object oriented, providing types from
which your own managed code can derive functionality. This not only makes the .NET
Framework types easy to use, but also reduces the time associated with learning new features of
the .NET Framework. In addition, third-party components can integrate seamlessly with classes
in the .NET Framework.
For example, the .NET Framework collection classes implement a set of interfaces that
you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework.
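As a minimal sketch of this, the FixedBag class below is our own illustrative type, not part of the class library; because it implements the IEnumerable<T> interface, it automatically works with foreach, collection initializers, LINQ, and any framework API that accepts a sequence:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// A minimal custom collection; implementing IEnumerable<T> lets it
// plug into foreach, LINQ, and framework APIs that accept a sequence.
public class FixedBag<T> : IEnumerable<T>
{
    private readonly List<T> items = new List<T>();

    public void Add(T item) { items.Add(item); }

    public IEnumerator<T> GetEnumerator() { return items.GetEnumerator(); }

    // Non-generic enumerator required by the IEnumerable<T> contract.
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public static class FixedBagDemo
{
    public static int SumBag()
    {
        var bag = new FixedBag<int> { 1, 2, 3 };  // initializer works via Add
        return bag.Sum();                         // LINQ works via IEnumerable<T>
    }

    public static void Main() { Console.WriteLine(SumBag()); }  // prints 6
}
```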
As you would expect from an object-oriented class library, the .NET Framework types
enable you to accomplish a range of common programming tasks, including tasks such as string
management, data collection, database connectivity, and file access. In addition to these common
tasks, the class library includes types that support a variety of specialized development scenarios.
For example, you can use the .NET Framework to develop the following types of applications
and services:
Console applications.
Windows GUI applications (Windows Forms).
ASP.NET applications.
XML Web services.
Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that
vastly simplify Windows GUI development. If you write an ASP.NET Web Form application,
you can use the Web Forms classes.
Another kind of client application is the traditional ActiveX control (now replaced by the
managed Windows Forms control) deployed over the Internet as a Web page. This application is
much like other client applications: it is executed natively, has access to local resources, and
includes graphical elements.
In the past, developers created such applications using C/C++ in conjunction with the
Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of
these existing products into a single, consistent development environment that drastically
simplifies the development of client applications.
The Windows Forms classes contained in the .NET Framework are designed to be used
for GUI development. You can easily create command windows, buttons, menus, toolbars, and
other screen elements with the flexibility necessary to accommodate shifting business needs.
Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.
The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard
operations while your application logic executes through the managed code.
ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.
If you have used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms
pages in any language that supports the .NET Framework. In addition, your code no longer needs
to share the same file with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other managed application,
they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted
and interpreted. ASP.NET pages are faster, more functional, and easier to develop than
unmanaged ASP pages because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web services are built
on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format),
and WSDL (the Web Services Description Language). The .NET Framework is built on these
standards to promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with the .NET
Framework SDK can query an XML Web service published on the Web, parse its WSDL
description, and produce C# or Visual Basic source code that your application can use to become
a client of the XML Web service. The source code can create classes derived from classes in the
class library that handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly, the Web
Services Description Language tool and the other tools contained in the SDK facilitate your
development efforts with the .NET Framework.
If you develop and publish your own XML Web service, the .NET Framework provides a
set of classes that conform to all the underlying communication standards, such as SOAP,
WSDL, and XML. Using those classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by distributed software
development.
Finally, like Web Forms pages in the managed environment, your XML Web service will
run with the speed of native machine language using the scalable communication of IIS.
Compilers and tools expose the runtime's functionality and enable you to write code that
benefits from this managed execution environment. Code that you develop with a language
compiler that targets the runtime is called managed code; it benefits from features such as cross-
language integration, cross-language exception handling, enhanced security, versioning and
deployment support, a simplified model for component interaction, and debugging and profiling
services.
To enable the runtime to provide services to managed code, language compilers must emit
metadata that describes the types, members, and references in your code. Metadata is stored with
the code; every loadable common language runtime portable executable (PE) file contains
metadata. The runtime uses metadata to locate and load classes, lay out instances in memory,
resolve method invocations, generate native code, enforce security, and set run-time context
boundaries.
The runtime automatically handles object layout and manages references to objects, releasing
them when they are no longer being used. Objects whose lifetimes are managed in this way are
called managed data. Garbage collection eliminates memory leaks as well as some other
common programming errors. If your code is managed, you can use managed data, unmanaged
data, or both managed and unmanaged data in your .NET Framework application. Because
language compilers supply their own types, such as primitive types, you might not always know
(or need to know) whether your data is being managed.
The common language runtime makes it easy to design components and applications whose
objects interact across languages. Objects written in different languages can communicate with
each other, and their behaviors can be tightly integrated. For example, you can define a class and
then use a different language to derive a class from your original class or call a method on the
original class. You can also pass an instance of a class to a method of a class written in a
different language. This cross-language integration is possible because language compilers and
tools that target the runtime use a common type system defined by the runtime, and they follow
the runtime's rules for defining new types, as well as for creating, using, persisting, and binding
to types.
As part of their metadata, all managed components carry information about the
components and resources they were built against. The runtime uses this information to ensure
that your component or application has the specified versions of everything it needs, which
makes your code less likely to break because of some unmet dependency. Registration
information and state data are no longer stored in the registry where they can be difficult to
establish and maintain. Rather, information about the types you define (and their dependencies)
is stored with the code as metadata, making the tasks of component replication and removal
much less complicated.
Language compilers and tools expose the runtime's functionality in ways that are intended to
be useful and intuitive to developers. This means that some features of the runtime might be
more noticeable in one environment than in another. How you experience the runtime depends
on which language compilers or tools you use. For example, if you are a Visual Basic developer,
you might notice that with the common language runtime, the Visual Basic language has more
object-oriented features than before. Following are some benefits of the runtime.
Performance improvements.
The ability to easily use components developed in other languages.
Extensible types provided by a class library.
New language features such as inheritance, interfaces, and overloading for object-
oriented programming; support for explicit free threading that allows creation of
multithreaded, scalable applications; support for structured exception handling and
custom attributes.
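Two of these features, free threading and the runtime's synchronization primitives, can be sketched in a few lines (the class and method names below are illustrative):

```csharp
using System;
using System.Threading;

public static class RuntimeFeaturesDemo
{
    // Two freely created threads update a shared counter; the lock
    // statement serializes access so the final count is deterministic.
    public static int CountToOneThousand()
    {
        int counter = 0;
        object gate = new object();
        ThreadStart work = () =>
        {
            for (int i = 0; i < 500; i++)
                lock (gate) { counter++; }
        };
        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();   // wait for both threads to finish
        return counter;
    }

    public static void Main() { Console.WriteLine(CountToOneThousand()); } // prints 1000
}
```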
If you use Microsoft® Visual C++® .NET, you can write managed code using the Managed
Extensions for C++, which provide the benefits of a managed execution environment as well as
access to powerful capabilities and expressive data types that you are familiar with. Additional
runtime features include:
Cross- language integration, especially cross- language inheritance.
Garbage collection, which manages object lifetime so that reference counting is
unnecessary.
Self-describing objects, which make using Interface Definition Language (IDL)
unnecessary.
ADO.NET ARCHITECTURE
The design of the DataSet enables you to easily transport data to clients over the Web
using XML Web services, as well as allowing you to marshal data between .NET components
using .NET Remoting services. You can also remote a strongly typed DataSet in this fashion.
An overview of remoting services can be found in the .NET Remoting Overview. Note
that DataTable objects can also be used with remoting services, but cannot be transported via
an XML Web service.
The following table outlines the four core objects that make up a .NET Framework data
provider.

Object       Description
Connection   Establishes a connection to a specific data source.
Command      Executes a command against a data source.
DataReader   Reads a forward-only, read-only stream of data from a data source.
DataAdapter  Populates a DataSet and resolves updates with the data source.
The .NET Framework includes the .NET Framework Data Provider for SQL Server (for
Microsoft SQL Server version 7.0 or later), the .NET Framework Data Provider for OLE DB,
and the .NET Framework Data Provider for ODBC.
Note The .NET Framework Data Provider for ODBC is not included in the .NET Framework
version 1.0. If you require the .NET Framework Data Provider for ODBC and are using the .NET
Framework version 1.0, you can download the .NET Framework Data Provider for ODBC at
http://msdn.microsoft.com/downloads. The namespace for the downloaded .NET Framework
Data Provider for ODBC is Microsoft.Data.Odbc.
The .NET Framework Data Provider for SQL Server uses its own protocol to
communicate with SQL Server. It is lightweight and performs well because it is optimized to
access a SQL Server directly without adding an OLE DB or Open Database Connectivity
(ODBC) layer. The following illustration contrasts the .NET Framework Data Provider for SQL
Server with the .NET Framework Data Provider for OLE DB. The .NET Framework Data
Provider for OLE DB communicates to an OLE DB data source through both the OLE DB
Service component, which provides connection pooling and transaction services, and the OLE
DB Provider for the data source.
Comparison of the .NET Framework Data Provider for SQL Server and the .NET
Framework Data Provider for OLE DB
Comparison of .NET Framework Data Provider for SQL Server and OLEDB
To use the .NET Framework Data Provider for SQL Server, you must have access to
Microsoft SQL Server 7.0 or later. .NET Framework Data Provider for SQL Server classes are
located in the System.Data.SqlClient namespace. For earlier versions of Microsoft SQL Server,
use the .NET Framework Data Provider for OLE DB with the SQL Server OLE DB Provider
(SQLOLEDB).
VISUAL C# LANGUAGE
Several C# features aid in the construction of robust and durable applications: Garbage
collection automatically reclaims memory occupied by unused objects; exception handling
provides a structured and extensible approach to error detection and recovery; and the type-safe
design of the language makes it impossible to have uninitialized variables, to index arrays
beyond their bounds, or to perform unchecked type casts.
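A minimal sketch of this type safety (the names below are ours): indexing an array beyond its bounds raises a catchable runtime exception instead of silently reading arbitrary memory, as could happen in unmanaged code:

```csharp
using System;

public static class TypeSafetyDemo
{
    // Out-of-range indexing is detected by the runtime and surfaces as a
    // structured exception that the program can recover from.
    public static string ProbeBounds()
    {
        int[] values = { 10, 20, 30 };
        try
        {
            return values[5].ToString();    // beyond the bounds: throws
        }
        catch (IndexOutOfRangeException)
        {
            return "caught";                // structured recovery path
        }
    }

    public static void Main() { Console.WriteLine(ProbeBounds()); } // prints "caught"
}
```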
C# has a unified type system. All C# types, including primitive types such as int and
double, inherit from a single root object type. Thus, all types share a set of common operations,
and values of any type can be stored, transported, and operated upon in a consistent manner.
Furthermore, C# supports both user-defined reference types and value types, allowing dynamic
allocation of objects as well as in-line storage of lightweight structures.
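The unified type system can be seen directly with boxing: a primitive int value can be stored as object because int derives from the root object type (the method name below is illustrative):

```csharp
using System;

public static class UnifiedTypesDemo
{
    // Every C# type, including the primitive int, derives from object,
    // so a value type can be stored and passed as object (boxing).
    public static string DescribeAsObject()
    {
        object boxed = 42;              // int implicitly boxed to object
        int unboxed = (int)boxed;       // explicit unbox back to the value type
        return boxed.GetType().Name + ":" + unboxed;
    }

    public static void Main() { Console.WriteLine(DescribeAsObject()); } // prints "Int32:42"
}
```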
To ensure that C# programs and libraries can evolve over time in a compatible manner,
much emphasis has been placed on versioning in C#'s design. Many programming languages
pay little attention to this issue, and, as a result, programs written in those languages break more
often than necessary when newer versions of dependent libraries are introduced. Aspects of
C#'s design that were directly influenced by versioning considerations include the separate
virtual and override modifiers, the rules for method overload resolution, and support for explicit
interface member declarations.
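The separate virtual and override modifiers can be sketched as follows (the class names are illustrative): a base method is only replaced when both sides opt in explicitly, which protects derived classes from accidental clashes when a base library adds new methods in a later version:

```csharp
using System;

// virtual marks a method as replaceable; override declares the explicit
// intent to replace it, so versioning accidents are caught at compile time.
public class Sensor
{
    public virtual string Describe() { return "generic sensor"; }
}

public class EnergySensor : Sensor
{
    public override string Describe() { return "energy sensor"; }
}

public static class VersioningDemo
{
    public static string CallThroughBase()
    {
        Sensor s = new EnergySensor();  // static type is the base class
        return s.Describe();            // dispatches to the override
    }

    public static void Main() { Console.WriteLine(CallThroughBase()); } // prints "energy sensor"
}
```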
ASP.NET
The .NET Framework includes tools that ease the creation of web services. ASP.NET is
the latest offering from Microsoft toward the creation of a new paradigm for server-side
scripting. This chapter covers the basics of ASP.NET, which provides a complete framework
for the development of web applications, introduces the platform requirements for ASP.NET
applications and the ASP.NET architecture, and presents web forms, a new addition in
ASP.NET.
ASP.NET differs in some ways from earlier versions of ASP. ASP.NET has new
features such as better language support, a new set of controls, XML-based components, and
more secure user authentication. ASP.NET also provides increased performance by executing
compiled code rather than interpreting scripts.
Usually a software product undergoes many evolutionary phases. In each release of the
software product, the vendor fixes the bugs from previous versions and adds new features.
ASP 1.0 was released in 1996. Since then, two more versions of ASP (2.0 and 3.0) have been
released. New features have been added in the various versions of ASP, but the basic
methodology used for creating applications has not changed.
BENEFITS OF ASP.NET
ASP.NET provides the flexibility to extend code created in one language in another
language. For example, if the system has an object written in C++, ASP.NET enables us to
extend this object in Visual Basic.
DIRECTIVES
<%@ Page Language="VB" […] %>
CODE DECLARATION BLOCKS
<script runat="server">
[lines of code]
</script>
CODE RENDER BLOCKS
<%
[lines of code]
%>
HTML CONTROL SYNTAX
<HTMLelement runat="server" id="idName">
</HTMLelement>
CUSTOM CONTROL SYNTAX
<tagprefix:tagname
runat="server"
id="idName" />
SERVER-SIDE COMMENTS
A global variable has to be locked and unlocked to prevent problems with concurrent
access.
The term server controls always means Web Forms server controls, because they are
specially designed to work with Web Forms.
DATA BINDING
We can bind Web Forms control properties to any data in a data store. This so-called
data binding gives us nearly complete control over how data moves to the page and back again
to the data store.
PAGE CLASS
When a page is loaded, the ASP.NET runtime generates and instantiates a page class.
This object forms a collection of our separate components (like visual elements and business
logic). So all (visual and code) elements are accessible through this object.
We can convert simple HTML elements to HTML server controls, let the ASP.NET
engine create an instance on the server, and then program them on the server. The conversion is
done by simply adding attributes to the HTML tag. The attribute runat="server" informs the
framework to create a server-side instance of the control. If we additionally assign an ID, we
can reference the control in our code.
For example, we can use the HtmlAnchor control to program against the HTML <a>
tag to dynamically generate the HRef values, or use HtmlTable (HTML <table>) to
dynamically create tables and their content.
Another feature is the typed object model, which gives us the potential for type-safe
programming. Server controls can automatically detect which browser the client is using and
generate the proper version of HTML output.
BUTTON
This is the way to enable the user to finish editing a form. A Button enforces the
submission of the page, and we can additionally raise events like the Click event.
TEXTBOX
A TextBox is an input box where the user can enter information like numbers, text, or
dates, formatted as single line, multiline, or password. This control raises a TextChanged event
when the focus "leaves" the control.
VALIDATION CONTROLS
Another group of server controls are validation controls. These can be used to check the
user’s entries. Validation can be processed on the client and on the server.
Validation on the client side can be performed using a client script. In that case, the user
is confronted with immediate feedback, without a roundtrip to the server. Server-side
validation additionally provides security against users bypassing the client-side validation.
The available validation types are:
Required entry – the field must be filled in by the user.
Comparison to a value – the entered value is checked against the value of another field, a
database, or a constant value by using comparison operators.
Range checking – the user's entry is checked to see whether it lies between given boundaries.
Pattern matching – a regular expression is defined that the entered value must match.
User defined – implement our own validation logic.
When the validation fails, an error message is generated and sent back to the client browser.
This can be done in several ways; for example, all error messages related to a specific
transaction could be collected and presented to the user in summary.
SQL
SQL stands for Structured Query Language. SQL is used to communicate with a
database. According to ANSI (American National Standards Institute), it is the standard
language for relational database management systems.
SQL statements are used to perform tasks such as updating data in a database or
retrieving data from a database. Some common relational database management systems that
use SQL are Oracle, Sybase, Microsoft SQL Server, Access, and Ingres. Although most
database systems use SQL, most of them also have their own additional proprietary extensions
that are usually only used on their system.
The standard SQL commands such as “Select”, “Insert”, “Update”, ”Delete”, ”Create”,
and “Drop” can be used to accomplish almost everything that one needs to do with a database.
This tutorial will provide you with instruction on the basics of each of these commands
as well as allow you to put them into practice using the SQL Interpreter.
TABLE BASICS
A relational database system contains one or more objects called tables. The data or
information for the database is stored in these tables. Tables are uniquely identified by their
names and are comprised of columns and rows. Columns contain the column name, data type,
and any other attributes for the column. Rows contain the records or data for the columns.
SELECTING DATA
The select statement is used to query the database and retrieve selected data that match
the criteria that you specify. Here is the format of a simple select statement:

select "column1" [, "column2", etc.]
from "tablename"
[where "condition"];
([ ] = optional)
The column names that follow the select keyword determine which columns will be
returned in the results. You can select as many column names as you would like, or you can
use a "*" to select all columns. The table name that follows the keyword from specifies the
table that will be queried to retrieve the desired results.
The where clause (optional) specifies which data values or rows will be returned
or displayed, based on the criteria described after the keyword where.
Conditional selections used in the where clause:
=    Equal
>    Greater than
<    Less than
>=   Greater than or equal
<=   Less than or equal
<>   Not equal to
LIKE Pattern matching (see below)
LIKE
The LIKE pattern matching operator can also be used in the conditional selection of the
where clause. LIKE is a very powerful operator that allows you to select only rows that are
"like" what you specify. The percent sign "%" can be used as a wild card to match any possible
character that might appear before or after the characters specified.
For example:

select first, last, city
from empinfo
where first LIKE 'Er%';

This SQL statement will match any first names that start with 'Er'. Strings must be in
single quotes. Or you can specify:

select first, last
from empinfo
where first = 'Eric';

This will only select rows where the first name equals 'Eric' exactly.
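The same LIKE semantics can be tried without a database server: the System.Data DataTable used elsewhere in this chapter accepts SQL-style filter expressions in its Select method (the table and column names below are illustrative):

```csharp
using System;
using System.Data;

public static class LikeFilterDemo
{
    // An in-memory System.Data table; its Select method accepts a
    // SQL-style filter expression, including LIKE with % wildcards.
    public static int CountNamesStartingWithEr()
    {
        var empinfo = new DataTable("empinfo");
        empinfo.Columns.Add("first", typeof(string));
        empinfo.Rows.Add("Eric");
        empinfo.Rows.Add("Erin");
        empinfo.Rows.Add("Mary");

        DataRow[] matches = empinfo.Select("first LIKE 'Er%'");
        return matches.Length;
    }

    public static void Main() { Console.WriteLine(CountNamesStartingWithEr()); } // prints 2
}
```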
CREATING TABLES
The create table statement is used to create a new table. Here is the format of a simple
create table statement:

create table "tablename"
("column1" "data type" [constraint],
 "column2" "data type" [constraint],
 "column3" "data type" [constraint]);
([ ] = optional)
To create a new table, enter the keywords create table followed by the table name,
followed by an open parenthesis, followed by the first column name, followed by the data type
for that column, followed by any optional constraints, and followed by a closing parenthesis
after the end of the last column definition. Make sure you separate each column definition with
a comma. All SQL statements should end with a ";".
The table and column names must start with a letter and can be followed by letters,
numbers, or underscores, not to exceed a total of 30 characters in length. Do not use any SQL
reserved keywords as names for tables or columns (such as "select", "create", "insert", etc.).
Data types specify what type of data a particular column can hold. If a column called
"Last_Name" is to be used to hold names, then that particular column should have a "varchar"
(variable-length character) data type.
number(size) – a number value with a maximum number of digits specified in
parenthesis.
The two most popular constraints are "not null", which specifies that a column
can't be left blank, and "primary key". A "primary key" constraint defines a unique
identification of each record (or row) in a table. Constraints can be entered in this SQL
interpreter; however, they are not supported in this Intro to SQL tutorial and interpreter. They
will be covered and supported in a future release of the Advanced SQL tutorial.
INSERTING INTO A TABLE
The insert statement is used to insert or add a row of data into the table. To insert records
into a table, enter the keywords insert into followed by the table name, followed by an open
parenthesis, followed by a list of column names separated by commas, followed by a closing
parenthesis, followed by the keyword values, followed by the list of values enclosed in
parenthesis. The values that you enter will be held in the rows, and they will match up with the
column names that you specify. Strings should be enclosed in single quotes, and numbers
should not.

insert into "tablename"
(first_column, ..., last_column)
values (first_value, ..., last_value);
UPDATING RECORDS
The update statement is used to update or change records that match specified criteria.
This is accomplished by carefully constructing a where clause.

update "tablename"
set "columnname" = "newvalue"
[, "nextcolumn" = "newvalue2" ...]
where "columnname" OPERATOR "value"
[and|or "column" OPERATOR "value"];
DELETING RECORDS
The delete statement is used to delete records or rows from the table.

delete from "tablename"
where "columnname" OPERATOR "value"
[and|or "column" OPERATOR "value"];

To delete an entire record/row from a table, enter "delete from" followed by the table
name, followed by the where clause which contains the conditions to delete. If you leave off the
where clause, all records will be deleted.
DROP A TABLE
The drop table command is used to delete a table and all rows in the table. To delete an
entire table including all of its rows, issue the drop table command followed by the table name.
Drop table is different from deleting all of the records in the table. Deleting all of the records in
the table leaves the table including column and constraint information. Dropping the table
removes the table definition as well as all of its rows.
TABLE JOINS
All of the queries up until this point have been useful with the exception of one major
limitation: you have been selecting from only one table at a time with your SELECT
statement. It is time to introduce you to one of the most beneficial features of SQL and
relational database systems: the join.
Joins allow you to link data from two or more tables together into a single query result,
from one single SELECT statement. A "join" can be recognized in a SQL SELECT statement if
it has more than one table after the FROM keyword.
CASE TOOLS
The CASE environment architecture consists of layered components: the CASE tools, an
integration framework, portability services, the operating system, and the hardware platform.
Compilers, editors, and debuggers are available to support most conventional programming
languages. Web development tools assist in the generation of text, graphics, forms, scripts, and
other elements of a web page.
UML
Unified Modeling Language (UML) is a standardized visual specification language
for object modeling. UML is a general-purpose modeling language that includes a graphical
notation used to create an abstract model of a system, referred to as a UML model.
3. PROJECT DESCRIPTION
MODULES
HOME
LOGIN
MEC
OPTIMAL CHOOSING
RECONSTRUCTION
MODULE DESCRIPTION
HOME PAGE
This module shows the entire sitemap and an index to upcoming events, ongoing web
projects, and development sites, and also explains how to use the tool to submit reports and
inspection details to the system.
LOGIN
The main objective of this module is to authenticate the clients and other users who enter the
system for various purposes; they are authenticated and authorized by the secured mechanisms
enforced by the system.
MEC
Maximum Entropy Clustering (MEC) is used to divide all the sensing nodes into several
clusters. Then, each node within one cluster locally acquires the compressive measurements
and sends them to the head node.
OPTIMAL CHOOSING
The head node chooses the optimal measurement data from all received data, then sends the
chosen data to the fusion center. The rule is to select the measurement data with the biggest
absolute value, so as to preserve the spectrum occupancy information maximally.
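As a minimal sketch of this selection rule (the method name and data below are illustrative, not taken from the ACBSS paper), the head node keeps the received measurement with the largest absolute value:

```csharp
using System;
using System.Linq;

public static class OptimalChoosingDemo
{
    // Sketch of the head node's rule: among the compressive measurements
    // received from its cluster, keep the one with the largest absolute
    // value, since it best preserves the spectrum occupancy information.
    public static double ChooseOptimal(double[] measurements)
    {
        if (measurements == null || measurements.Length == 0)
            throw new ArgumentException("no measurements received");
        return measurements.OrderByDescending(m => Math.Abs(m)).First();
    }

    public static void Main()
    {
        double[] received = { 0.3, -2.7, 1.9, -0.4 };  // illustrative data
        Console.WriteLine(ChooseOptimal(received));    // selects -2.7
    }
}
```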
RECONSTRUCTION
The fusion center uses the joint reconstruction scheme to reconstruct the wide-band
spectrum. The data quantity is decreased largely by the head nodes, since they only send the
optimal measurement data to the fusion center, where the data processing pressure is reduced
greatly as well.
4. SYSTEM DESIGN
SYSTEM ARCHITECTURE
The system architecture diagram shows the data flow: the sensing nodes are grouped into
clusters by MEC; each node performs compressive measuring; the head node applies optimal
choosing to the measured data; and the final data are sent to the fusion center for joint
reconstruction.
5. SYSTEM IMPLEMENTATION
6. CONCLUSION
A cognitive radio network is a system able to learn from the environment and adjust its
transmission parameters. With their awareness of the radio environment, SUs perform spectrum
sensing to identify free channels in order to efficiently exploit them. As the sensing
process requires a considerable amount of time, compressive sensing has been introduced as a
low-cost solution to speed up the channel scanning process and improve the detection rate.
There are several interesting future research directions related to compressive sensing.
For instance, advanced ADCs can be considered in future work; they are highly needed to
support the high sampling rates present in cognitive radio networks, given the rapid increase of
wireless network services and mobile users. Hardware implementation is another direction, in
terms of designing fast and inexpensive ADC devices for signal sampling, integrating the
compressive sensing algorithms on these devices, and overcoming the problems of
synchronization, calibration, and uncertainty in measurements. Developing new and efficient
signal acquisition models based on compressive sensing is also of interest, to cover all the
signal models present in real radio environments. Finally, handling the uncertainty and
imperfections of real radio networks by designing practical compressive sensing techniques
remains an open door for researchers in this field.
7. REFERENCES
[1] Li, C. M., & Lu, S. H. (2016). Energy-based maximum likelihood spectrum sensing
method for the cognitive radio. Wireless Personal Communications, 89(1), 289-302.
[2] Salahdine, F., Kaabouch, N., & Ghazi, H. E. (2018). One-bit compressive sensing vs.
multi-bit compressive sensing for cognitive radio networks. In IEEE International
Conference on Industrial Technology (pp. 1-6).
[3] Manesh, M. R., Quadri, A., Subramaniam, S., & Kaabouch, N. (2017, January). An
optimized SNR estimation technique using particle swarm optimization algorithm.
In Computing and Communication Workshop and Conference (CCWC), 2017 IEEE 7th
Annual (pp. 1-6). IEEE.
[4] Salahdine, F., & El Ghazi, H. (2017, October). A real time spectrum scanning technique
based on compressive sensing for cognitive radio networks. In Ubiquitous Computing,
Electronics and Mobile Communication Conference (UEMCON), 2017 IEEE 8th
Annual (pp. 506-511). IEEE.
[5] Salahdine, F., Kaabouch, N., & El Ghazi, H. (2016, October). Bayesian compressive
sensing with circulant matrix for spectrum sensing in cognitive radio networks.
In Ubiquitous Computing, Electronics & Mobile Communication Conference
(UEMCON), IEEE Annual (pp. 1-6). IEEE.
[6] Yarkan, S. (2015). A generic measurement setup for implementation and performance
evaluation of spectrum sensing techniques: Indoor environments. IEEE Transactions on
Instrumentation and Measurement, 64(3), 606-614.
[7] Sun, H., Nallanathan, A., Wang, C. X., & Chen, Y. (2013). Wideband spectrum sensing
for cognitive radio networks: a survey. IEEE Wireless Communications, 20(2), 74-81.
[8] Sharma, S. K., Lagunas, E., Chatzinotas, S., & Ottersten, B. (2016). Application of
compressive sensing in cognitive radio communications: A survey. IEEE Communication
Surveys & Tutorials.