
TABLE OF CONTENTS

CHAPTER NO. TITLE

ABSTRACT

1. INTRODUCTION

1.1 Problem Definition

1.2 Literature Survey

2. SYSTEM ANALYSIS

2.1 Existing System

2.2 Proposed System

2.3 System Requirements

2.3.1 Hardware Specification

2.3.2 Software Specification

2.3.3 About Software

3. PROJECT DESCRIPTION

4. SYSTEM DESIGN

5. SYSTEM IMPLEMENTATION

5.1 Sample Screens

5.2 Sample Source Code

6. CONCLUSION

7. REFERENCES
ADVANCED CLUSTER BASED SPECTRUM SENSING IN BROADBAND COGNITIVE
RADIO NETWORK

ABSTRACT

Due to the rapid growth of new wireless communication services and applications, much
attention has been directed to frequency spectrum resources. Considering the limited radio
spectrum, supporting the demand for higher capacity and higher data rates is a challenging task
that requires innovative technologies capable of providing new ways of exploiting the available
radio spectrum. Cognitive radio (CR), which is among the core technologies for the next
generation of wireless communication systems, has received increasing attention and is
considered a promising solution to the spectral crowding problem. The key issue in applying the
cognitive radio technique successfully is how to sense, accurately and quickly, whether or not the
primary user (PU) is present, and to search for spectrum holes that can be offered to the secondary
user (SU). In addition, when the SU accesses an available band, it must periodically monitor this
band to account for sudden reappearances of the PUs. This inherently limits the throughput of
the SUs, or at least degrades the quality of service (QoS), if any is guaranteed at all. In this thesis,
we focus on the advanced cluster based spectrum sensing (ACBSS) algorithm, which combines a
hierarchical data-fusion scheme with joint compressive reconstruction. To validate its
efficiency and effectiveness, we compare ACBSS with independent compressive sensing
(ICS) and joint compressive sensing (JCS) in terms of detection probability, false-alarm probability,
and algorithm execution time under different SNRs and compression ratios.
While the majority of existing work has focused on single-band cognitive radio, multiband
cognitive radio holds great promise for implementing efficient cognitive networks compared
with single-band networks. This has primarily motivated the introduction of the multiband cognitive
radio (MB-CR) paradigm, which is also referred to as wideband CR. By enabling SUs to
simultaneously sense and access multiple channels, this paradigm promises significant
enhancements to the network's throughput. In addition, it helps provide seamless handoff from
band to band, which improves link maintenance and reduces data transmission interruptions.

1. INTRODUCTION
Research on wireless ad hoc networks has been ongoing for decades. The history of
wireless ad hoc networks can be traced back to the Defense Advanced Research Projects Agency
(DARPA) packet radio network (PRNet), which evolved into the survivable adaptive radio
networks (SURAN) program. Ad hoc networks have played an important role in military
applications and related research efforts, for example, the global mobile information systems
(GloMo) program and the near-term digital radio (NTDR) program. Recent years have seen a
new spate of industrial and commercial applications for wireless ad hoc networks, as viable
communication equipment and portable computers have become more compact and available.

Wireless networks have become increasingly popular in the communication industry.
These networks provide mobile users with ubiquitous computing capability and information
access regardless of the users' location. There are currently two variations of mobile wireless
networks: infrastructure and infrastructureless networks. Infrastructure networks have fixed,
wired gateways or fixed base stations that are connected to other base stations through wires.
Each node is within the range of a base station. A "hand-off" occurs as a mobile host travels out
of the range of one base station and into the range of another, and thus the mobile host is able to
continue communication seamlessly throughout the network. Example applications of this type
include wireless local area networks and mobile phone networks. Considering the location
service in a mobile ad hoc network, each node needs to maintain its location information by
frequently updating its location with its neighbor nodes, which is called neighborhood update,
and by periodically updating its location information in the network, which is called location
update. The operation costs of location updates, and the performance losses suffered by the
target application when location updates are not performed properly, are a major issue, because
the network then cannot be utilized effectively.

Focusing on this issue, a stochastic sequential decision framework is planned and
developed to analyze the problem. The node update (NU) and location server update (LSU)
operations are investigated on the basis of their monotonicity properties under general cost
settings. A decision is then made based on the separation property under fixed cost settings.
This investigation shows that there exists a simple optimal threshold-based update rule for the
location server update operation and, in low-mobility scenarios, for the node update operation
as well. No prior knowledge of the Markov Decision Process model is required in this approach,
so no practical model needs to be set up to find the optimal solution.
The other type of wireless network, the infrastructureless network, is known as the Mobile Ad-
hoc Network (MANET). These networks have no fixed routers; every node can act as a router. All
nodes are capable of movement and can be connected dynamically in an arbitrary manner. The
responsibilities for organizing and controlling the network are distributed among the terminals
themselves. The entire network is mobile, and the individual terminals are allowed to move
freely. In this type of network, some pairs of terminals may not be able to communicate directly
with each other and have to rely on intermediate terminals so that their messages are delivered to
their destinations. Such networks are often referred to as multi-hop or store-and-forward networks.
The nodes of these networks function as routers, which discover and maintain routes to other
nodes in the network. The nodes may be located in or on airplanes, ships, trucks, cars, perhaps
even on people or very small devices. Mobile Ad-hoc Networks are intended to be used for
disaster recovery, battlefield communications, and rescue operations when the wired network is
not available. They can provide a feasible means for ground communications and information
access.

Denial of service by server resource exhaustion has become a major security threat in
open communications networks. Public-key authentication does not completely protect against
such attacks, because authentication protocols often leave ways for an unauthenticated client to
consume a server's memory space and computational resources by initiating a large number of
protocol runs and inducing the server to perform expensive cryptographic computations.

A solution to such threats is to authenticate the client before the server commits any
resources to it. The authentication itself, however, creates new opportunities for denial-of-service
attacks, because authentication protocols usually require the server to store session-specific state
data, such as nonces, and to compute expensive public-key operations. One solution is to begin
with a weak but inexpensive authentication, and to apply stronger and costlier methods only after
the less expensive ones have succeeded. An example of weak authentication is the SYN-cookie
protection against SYN flooding attacks, where the return address is verified not to be fictional
by sending the client a nonce that it must return in its next message. This strategy is not entirely
unproblematic, because the gradually strengthening authentication results in longer protocol runs
with more messages, and the security of the weak authentication mechanisms may be difficult to
analyze.
The convenience of 802.11-based wireless access networks has led to widespread
deployment in the consumer, industrial and military sectors. However, this use is predicated on
an implicit assumption of confidentiality and availability. While the security flaws in 802.11's
basic confidentiality mechanisms have been widely publicized, the threats to network availability
are far less widely appreciated. In fact, it has been suggested that 802.11 is highly susceptible to
malicious denial-of-service (DoS) attacks targeting its management and media access protocols.
Prior work has provided an experimental analysis of such 802.11-specific attacks, covering their
practicality, their efficacy, and potential low-overhead implementation changes to mitigate the
underlying vulnerabilities.
On wireless computer networks, ad-hoc mode is a method for wireless devices to communicate
directly with each other. Operating in ad-hoc mode allows all wireless devices within range
of each other to discover and communicate in peer-to-peer fashion without involving central
access points (including those built in to broadband wireless routers). To set up an ad-hoc
wireless network, each wireless adapter must be configured for ad-hoc mode instead of the
alternative infrastructure mode. In addition, all wireless adapters on the ad-hoc network must use
the same SSID and the same channel number. An ad-hoc network tends to feature a small group
of devices all in very close proximity to each other. Performance suffers as the number of
devices grows, and a large ad-hoc network quickly becomes difficult to manage. Ad-hoc
networks cannot bridge to wired LANs or to the Internet without installing a special-
purpose gateway. Ad hoc networks make sense when there is a need to build a small, all-wireless
LAN quickly and spend the minimum amount of money on equipment. Ad hoc networks also work
well as a temporary fallback mechanism if normally available infrastructure-mode gear (access
points or routers) stops functioning.

With the advance of very large-scale integrated circuits (VLSI) and the commercial popularity
of global positioning services (GPS), the geographic location information of mobile devices in a
mobile ad hoc network (MANET) is becoming available for various applications. This location
information not only provides one more degree of freedom in designing network protocols, but
is also critical for the success of many military and civilian applications. In a MANET, since the
locations of nodes are not fixed, a node needs to frequently update its location information to
some or all other nodes. There are two basic location update operations at a node to maintain its
up-to-date location information in the network. One operation is to update its location
information within a neighboring region, where the neighboring region is not necessarily
restricted to one-hop neighboring nodes. This project calls this operation neighborhood update
(NU), which is usually implemented by local broadcasting/flooding of location information
messages. The other operation is to update the node's location information at one or multiple
distributed location servers. The positions of the location servers could be fixed.

The set of applications for MANETs is diverse, ranging from small, static networks that
are constrained by power sources, to large-scale, mobile, highly dynamic networks. The design
of network protocols for these networks is a complex issue. Regardless of the application,
MANETs need efficient distributed algorithms to determine network organization, link
scheduling, and routing. However, determining viable routing paths and delivering messages in a
decentralized environment where network topology fluctuates is not a well-defined problem.
While the shortest path (based on a given cost function) from a source to a destination in a static
network is usually the optimal route, this idea is not easily extended to MANETs. Factors such as
variable wireless link quality, propagation path loss, fading, multiuser interference, power
expended, and topological changes become relevant issues. The network should be able to
adaptively alter the routing paths to alleviate any of these effects. Moreover, in a military
environment, preservation of security, latency, reliability, intentional jamming, and recovery
from failure are significant concerns. Military networks are designed to maintain a low
probability of intercept and/or a low probability of detection. Hence, nodes prefer to radiate as
little power as necessary and transmit as infrequently as possible, thus decreasing the probability
of detection or interception. A lapse in any of these requirements may degrade the performance
and dependability of the network.

To the best of our knowledge, the location update problem in MANETs has not previously been
formally addressed as a stochastic decision problem, and the theoretical work on this problem is
very limited. Some authors analyze the optimal location update strategy in a hybrid position-based
routing scheme, in terms of minimizing the achievable overall routing overhead. Although a closed-
form optimal update threshold is obtained there, it is only valid for their routing scheme. In
contrast, our analytical results can be applied in much broader application scenarios, since the cost
model used is generic and holds in many practical applications. On the other hand, the location
management problem in mobile cellular networks has been extensively investigated in the
literature, where the tradeoff between the location update cost of a mobile device and the paging
cost of the system is the main concern.

A similar stochastic decision formulation with a semi-Markov Decision Process (SMDP)
model has been proposed for location update in cellular networks. However, there are several
fundamental differences from our work. First, the separation principle discovered here is
unique to the location update problem in MANETs, since there are two different location update
operations (i.e., NU and LSU). Second, the monotonicity properties of the decision rules with
respect to location inaccuracies have not previously been identified. Third, the value iteration
algorithm used there relies on the existence of powerful base stations, which can estimate the
parameters of the decision process model, while the learning approach we provide here is model-
free and has a much lower implementation complexity, which is favorable to infrastructureless
MANETs.

Radio frequency (RF) spectrum is a valuable but tightly regulated resource due to its
unique and important role in wireless communications. With the proliferation of wireless
services, the demands for the RF spectrum are constantly increasing, leading to scarce spectrum
resources. On the other hand, it has been reported that localized temporal and geographic
spectrum utilization is extremely low [1]. Currently, new spectrum policies are being developed
by the Federal Communications Commission (FCC) that will allow secondary users to
opportunistically access a licensed band, when the primary user (PU) is absent. Cognitive radio
[2], [3] has become a promising solution to solve the spectrum scarcity problem in the next
generation cellular networks by exploiting opportunities in time, frequency, and space domains.
Cognitive radio is an advanced software-defined radio that automatically detects its surrounding
RF stimuli and intelligently adapts its operating parameters to network infrastructure while
meeting user demands. Since cognitive radios are considered as secondary users for using the
licensed spectrum, a crucial requirement of cognitive radio networks is that they must efficiently
exploit under-utilized spectrum (denoted as spectral opportunities) without causing harmful
interference to the PUs. Furthermore, PUs have no obligation to share and change their operating
parameters for sharing spectrum with cognitive radio networks. Hence, cognitive radios should
be able to independently detect spectral opportunities without any assistance from PUs; this
ability is called spectrum sensing, which is considered as one of the most critical components in
cognitive radio networks. Many narrowband spectrum sensing algorithms have been studied in
the literature [4] and references therein, including matched-filtering, energy detection [5], and
cyclostationary feature detection. While present narrowband spectrum sensing algorithms have
focused on exploiting spectral opportunities over a narrow frequency range, cognitive radio
networks will eventually be required to exploit spectral opportunities over a wide frequency range,
from hundreds of megahertz (MHz) to several gigahertz (GHz), to achieve higher
opportunistic throughput.

This is driven by the famous Shannon formula: under certain conditions, the
maximum theoretically achievable bit rate is directly proportional to the spectral bandwidth.
Hence, different from narrowband spectrum sensing, wideband spectrum sensing aims to find
more spectral opportunities over a wide frequency range and achieve higher opportunistic
aggregate throughput in cognitive radio networks. However, conventional wideband spectrum
sensing techniques based on standard analog-to-digital converters (ADCs) could lead to an
unaffordably high sampling rate or implementation complexity; thus, revolutionary wideband
spectrum sensing techniques become increasingly important. In the remainder of this chapter,
we first briefly introduce the traditional spectrum sensing algorithms for narrowband sensing.
Some challenges for realizing wideband spectrum sensing are then discussed. In addition, we
categorize the existing wideband spectrum sensing algorithms based on their implementation
types and review the state-of-the-art techniques for each category. Future research challenges
for implementing wideband spectrum sensing are subsequently identified, after which
concluding remarks are given.
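
For reference, the Shannon result alluded to here is the standard Shannon–Hartley capacity
formula (a textbook relation, not a contribution of this thesis). For a channel of bandwidth B (Hz)
and signal-to-noise power ratio S/N, the capacity is

    C = B \log_2\left(1 + \frac{S}{N}\right)  bits/s,

so, for a fixed SNR, the achievable rate grows linearly with the bandwidth B, which is what makes
wideband sensing attractive for higher opportunistic throughput.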

1.1 PROBLEM DEFINITION

Currently, most spectrum sensing technology is narrowband-based, and its detection
efficiency is low. Due to the shadow-fading effect, the multipath effect, and the uncertainty of
noise in practical applications, if spectrum sensing is done by a single CR user, the detection
performance cannot meet the hard real-time and high-reliability requirements of a CRN system.
Energy detection is a non-coherent detection technique in which no prior knowledge of pilot data
is required. The detection is based on some function of the received samples, which is compared
to a predetermined threshold; if the threshold is exceeded, the signal is declared present,
otherwise it is declared absent. The computation of the threshold used for signal detection is
highly susceptible to unknown and varying noise levels, which results in poor performance in
low-SNR environments.
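
As an illustration of the threshold test just described, the following C# sketch computes the
average energy of the received samples and compares it with a predetermined threshold. The
sample values, noise variance, and threshold are placeholders chosen for illustration only, not
parameters used in this project.

using System;
using System.Linq;

class EnergyDetectorDemo
{
    // Decide whether the PU is present by comparing the average sample energy
    // with a predetermined threshold.
    static bool IsPrimaryUserPresent(double[] samples, double threshold)
    {
        double avgEnergy = samples.Sum(s => s * s) / samples.Length;  // test statistic
        return avgEnergy > threshold;                                 // above threshold => PU declared present
    }

    // Box-Muller transform: one standard normal sample from two uniform samples.
    static double Gaussian(Random rng)
    {
        return Math.Sqrt(-2.0 * Math.Log(1.0 - rng.NextDouble()))
               * Math.Cos(2.0 * Math.PI * rng.NextDouble());
    }

    static void Main()
    {
        Random rng = new Random(1);
        int n = 1024;
        double noiseVariance = 1.0;                 // assumed (known) noise variance
        double threshold = 1.3 * noiseVariance;     // illustrative threshold, not a CFAR design

        // Hypothetical received samples: Gaussian noise plus a weak sinusoidal PU signal.
        double[] samples = new double[n];
        for (int i = 0; i < n; i++)
        {
            double noise = Gaussian(rng) * Math.Sqrt(noiseVariance);
            double signal = 0.8 * Math.Sin(2.0 * Math.PI * 0.05 * i);
            samples[i] = signal + noise;
        }

        Console.WriteLine("PU present: {0}", IsPrimaryUserPresent(samples, threshold));
    }
}

The sketch also makes the stated weakness visible: if noiseVariance is misestimated, the threshold
shifts with it and the decision becomes unreliable at low SNR.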

Due to multipath fading, shadowing, and varying channel conditions, uncertainty affects
all the cognitive radio processes. Measurements taken by the SUs during the sensing process are
uncertain. Decisions are taken based on what has already been observed using the SUs'
knowledge base, which may itself have been impacted by uncertainty. This can lead to wrong
decisions, and thus the cognitive radio system can take wrong actions. Uncertainty propagation
therefore influences cognitive radio performance, and mitigating it is a necessity.

SCOPE OF WORK

 A cluster based spectrum sensing scheme for CRN.

 Proposed a wide-band spectrum sensing algorithm, called Advanced Cluster Based
Spectrum Sensing (ACBSS), which combines a hierarchical data-fusion scheme with joint
compressive reconstruction technology.

1.2 LITERATURE SURVEY

The concept of cognitive radio (CR) has been proposed in [1]. The fundamental concept of
the CR relies on spectrum sensing and re-configurability. Two kinds of users can be
classified in CR analysis: one is the primary user (PU) and the other is the secondary user
(SU). PUs denote the users that have the right or license to legally use a specific frequency, sub-
carrier or frequency band. In CR, if the PU has no data transmitted in the allocated frequency
band, the SU can perform spectrum sensing and transmit its own data in that frequency
band. However, if the PU begins to transmit data, the SU should stop its transmission immediately
to avoid signal collisions with the PU. The SU can then perform spectrum sensing again to
find another white spectrum or spectrum hole for data transmission. Therefore,
spectrum sensing is the fundamental and critical issue that should be solved properly when
applying the CR concept in a practical implementation. In the literature, three kinds of spectrum
sensing techniques can be classified, i.e., energy-based detection, matched-filter detection, and
cyclostationary feature detection. Basically, energy-based detection determines the condition
of the spectrum via the received signal energy. If the energy of the received signal exceeds a
predefined threshold, the SU will decide that the frequency is currently occupied by the PU. For
the matched-filter detection method, the SU uses a particular preamble or sequence to determine
the condition of the spectrum. Like the operation of the matched filter (MF), if the correlation of
the received signal with the preamble is greater than a predefined value, the SU will regard the
spectrum as busy. Cyclostationary feature detection uses the inherent cyclic features of the signal
to determine the condition of a particular spectrum. For example, in an OFDM system, the
transmitted symbol contains the cyclic prefix (CP), which is a duplication of the tail of the
transmitted OFDM symbol, to avoid inter-symbol interference (ISI). Currently, the concept of
cooperative sensing has also been proposed to further enhance the detection probability of
spectrum sensing. Among these methods, energy-based detection is the simplest and easiest to
implement. In [1], an energy-based spectrum sensing method based on the maximum likelihood
(ML) criterion is proposed. The proposed method avoids the troublesome calculation of the
required threshold in the conventional energy-based method and has almost the same
performance as the adoption of the optimal threshold. In [1], the conventional energy-based
spectrum sensing and the ML scheme are first described; an extension of the ML method with a
double-threshold (DT) technique is also provided to reduce the sensing period. The performance
of the conventional energy-based methods, such as the constant false-alarm rate (CFAR) method
and the constant detection rate (CDR) method, and of the proposed ML scheme is then evaluated.
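
As a rough sketch of the matched-filter idea described above, the SU can correlate the received
samples with a known preamble and compare the result with a predefined value. The preamble,
the noise level, and the decision value below are hypothetical and chosen only for illustration.

using System;

class MatchedFilterDemo
{
    // Correlate the received samples with a known preamble; a large correlation
    // magnitude suggests the primary user's preamble is present in the band.
    static double Correlate(double[] received, double[] preamble)
    {
        double sum = 0.0;
        for (int i = 0; i < preamble.Length; i++)
            sum += received[i] * preamble[i];
        return Math.Abs(sum) / preamble.Length;     // normalised correlation magnitude
    }

    static void Main()
    {
        Random rng = new Random(7);
        int len = 64;
        double[] preamble = new double[len];
        double[] received = new double[len];

        // Hypothetical +/-1 preamble and a noisy copy of it as the received signal.
        for (int i = 0; i < len; i++)
        {
            preamble[i] = rng.Next(2) == 0 ? -1.0 : 1.0;
            received[i] = preamble[i] + 0.5 * (rng.NextDouble() - 0.5);
        }

        double correlation = Correlate(received, preamble);
        double decisionValue = 0.5;                 // illustrative predefined value
        Console.WriteLine("correlation = {0:F3}, spectrum busy = {1}",
                          correlation, correlation > decisionValue);
    }
}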

Compressive sensing has been proposed as an alternative solution to scan the spectrum
by reducing the sensing time and the sampling rate [2]. With compressive sensing, high-
dimensional sparse signals are acquired by extracting only a few samples that reflect the main
information of the signal. This extraction is performed with the help of a sensing matrix, and then
the original signal is recovered with a recovery algorithm. In this paper, we distinguish between
one-bit compressive sensing, in which each measurement is quantized to a single bit, and multi-bit
compressive sensing with M measurements, where M is less than the number of signal samples.
Multi-bit compressive sensing denotes conventional compressive sensing. It can sample
high-dimensional signals by acquiring only a few measurements rather than acquiring the whole
signal. However, multi-bit compressive sensing still faces some problems in acquiring high-
dimensional signals under noise uncertainty. Techniques with a high recovery rate exhibit high
processing time and complexity, while fast techniques are not efficient, which leads to a tradeoff
between recovery rate and processing time. Recently, one-bit compressive sensing has been
proposed to overcome the multi-bit compressive sensing limitations and enhance the signal
sampling efficiency. One-bit compressive sensing can recover sparse signals by using only the
sign of each measurement, an extreme quantization. Preserving only the sign of the measurements
is considered the extreme case of sampling.
A few papers that compare the efficiency of one-bit compressive sensing for spectrum
sensing applications have been published. These papers [2] did not consider sufficient
parameters for the performance analysis. The simulations were limited, and the results were not
compared to those of conventional compressive sensing as an alternative solution. Moreover,
to the best of our knowledge, a performance comparison between compressive sensing categories
has not been investigated before. Thus, there is a great need for a comparison between the
efficiencies of the compressive sensing categories under certain conditions for cognitive radio
networks. These conditions need to be identified by simulating a number of parameters that
cover the most important aspects of signal sampling and recovery performance. In this paper, we
analyze both compressive sensing categories and compare their efficiency using the recovery
SNR, recovery error, Hamming distance, processing time, and complexity.
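
To make the distinction concrete, the following C# sketch generates both kinds of measurements
for a hypothetical sparse signal: the multi-bit measurements y = Phi*x obtained with a random
Gaussian sampling matrix, and the one-bit measurements that keep only the sign of each entry of
y. The dimensions and the sparsity level are illustrative assumptions, not values from the cited
work.

using System;

class CompressiveMeasurementDemo
{
    // Box-Muller transform for Gaussian matrix entries.
    static double Gaussian(Random rng)
    {
        return Math.Sqrt(-2.0 * Math.Log(1.0 - rng.NextDouble()))
               * Math.Cos(2.0 * Math.PI * rng.NextDouble());
    }

    static void Main()
    {
        Random rng = new Random(3);
        int n = 256, m = 64;                        // N signal samples, M compressed measurements

        // Hypothetical sparse signal with a handful of non-zero entries.
        double[] x = new double[n];
        for (int k = 0; k < 8; k++)
            x[rng.Next(n)] = rng.NextDouble() * 2.0 - 1.0;

        // Random Gaussian M x N sampling matrix Phi.
        double[,] phi = new double[m, n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                phi[i, j] = Gaussian(rng) / Math.Sqrt(m);

        double[] y = new double[m];                 // multi-bit measurements: y = Phi * x
        int[] b = new int[m];                       // one-bit measurements: b = sign(Phi * x)
        for (int i = 0; i < m; i++)
        {
            for (int j = 0; j < n; j++)
                y[i] += phi[i, j] * x[j];
            b[i] = y[i] >= 0 ? 1 : -1;
        }

        Console.WriteLine("kept {0} of {1} samples; y[0] = {2:F3}, sign(y[0]) = {3}", m, n, y[0], b[0]);
    }
}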

SNR estimation methods can be classified into two categories: data-aided and non-data-
aided approaches. Data-aided estimation techniques require information about the properties
of the transmitted data sequences (pilots). These techniques are normally able to provide an
accurate estimate of the SNR. However, in time-varying channels, they need to employ larger pilot
information to enable the receiver to track the channel variations. This type of approach leads
to excessive overhead, imposing an undesired capacity loss on the system [3]. On the other hand,
non-data-aided algorithms estimate the SNR without impacting the channel capacity. These
techniques do not need any knowledge of the transmitted data sequence characteristics.
Techniques of this type use methods such as extracting and analyzing the inherent characteristics
of the received signal to estimate the noise and signal powers. Examples of data-aided and non-
data-aided methods are those described by Pauluzzi et al., such as the split-symbol moment
estimator, the maximum likelihood estimator, the squared signal-to-noise variance estimator, the
second- and fourth-order moment estimator, and the low-bias negative-SNR estimator. One of the
non-data-aided methods is a technique based on the eigenvalues of the covariance matrix formed
from the received signal samples, proposed by Hamid et al. [3]. This method initially obtains the
eigenvalues as in [3] and employs the minimum descriptive length (MDL) criterion to separate
the signal and noise eigenvalues. It is a blind technique in the sense that it does not have any
knowledge of the transmitted signal or the noise, and the SNR is estimated merely from the
received signal samples.
The selection criteria for these parameters are based on a number of factors such as the type of
application, channel conditions, and hardware limitations. It is obvious that this estimator could be
more efficient if an algorithm could dynamically optimize these parameters according to a
particular situation. One possible solution is the use of evolutionary optimization algorithms such
as particle swarm optimization (PSO) and genetic algorithms. These techniques, as the name
implies, mimic the pattern of biological evolution and iterate repeatedly to find the
optimum solution of an objective function corresponding to a specific situation. Some of these
algorithms, such as the genetic algorithm, are application-dependent and require selecting
appropriate initialization values to converge at a steady rate [3]. However, the PSO algorithm does
not rely on a specific single-variable initialization and is less complex.
Furthermore, since this method is based on a subspace decomposition of the signal, it
requires less processing time and is more accurate. However, this technique is highly dependent
on: 1) the number of received samples, 2) the number of eigenvalues, and 3) the
Marchenko–Pastur distribution size. Therefore, in this paper, we propose the use of the PSO
algorithm to optimize the operation of the eigenvalue-based SNR estimator in [3]. First, we
define an objective function for the PSO algorithm, which is the goodness of fit of the two
distributions involved in the SNR estimation process. This objective function depends on the
number of samples, the number of eigenvalues, and the distribution size. Then, we apply the PSO
algorithm to dynamically optimize these parameters. To assess the efficiency of the proposed
method, we compare the true SNR with the SNR estimated using both the PSO-based and the
original SNR estimation techniques and find the error between them.
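
For readers unfamiliar with PSO, the following minimal C# skeleton shows the velocity and
position updates that the algorithm iterates. A simple sphere function stands in for the
goodness-of-fit objective described above, and the swarm size, inertia weight, and acceleration
constants are common textbook choices rather than the values used in [3].

using System;

class PsoSketch
{
    // A simple sphere function stands in for the goodness-of-fit objective
    // (number of samples, number of eigenvalues, distribution size) described above.
    static double Objective(double[] p)
    {
        double s = 0.0;
        foreach (double v in p) s += v * v;
        return s;
    }

    static void Main()
    {
        Random rng = new Random(11);
        int particles = 20, dims = 3, iterations = 100;
        double w = 0.7, c1 = 1.5, c2 = 1.5;         // inertia weight and acceleration constants

        double[][] pos = new double[particles][];
        double[][] vel = new double[particles][];
        double[][] best = new double[particles][];
        double[] bestVal = new double[particles];
        double[] gBest = null;
        double gBestVal = double.MaxValue;

        // Random initialisation of positions; personal and global bests recorded.
        for (int i = 0; i < particles; i++)
        {
            pos[i] = new double[dims]; vel[i] = new double[dims]; best[i] = new double[dims];
            for (int d = 0; d < dims; d++) pos[i][d] = rng.NextDouble() * 10.0 - 5.0;
            Array.Copy(pos[i], best[i], dims);
            bestVal[i] = Objective(pos[i]);
            if (bestVal[i] < gBestVal) { gBestVal = bestVal[i]; gBest = (double[])pos[i].Clone(); }
        }

        for (int t = 0; t < iterations; t++)
        {
            for (int i = 0; i < particles; i++)
            {
                for (int d = 0; d < dims; d++)
                {
                    vel[i][d] = w * vel[i][d]
                              + c1 * rng.NextDouble() * (best[i][d] - pos[i][d])   // pull toward personal best
                              + c2 * rng.NextDouble() * (gBest[d] - pos[i][d]);    // pull toward global best
                    pos[i][d] += vel[i][d];
                }
                double val = Objective(pos[i]);
                if (val < bestVal[i]) { bestVal[i] = val; Array.Copy(pos[i], best[i], dims); }
                if (val < gBestVal) { gBestVal = val; gBest = (double[])pos[i].Clone(); }
            }
        }

        Console.WriteLine("best objective value found: {0:E3}", gBestVal);
    }
}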

Cognitive radio has been proposed to overcome the spectrum scarcity issue and enable
dynamic spectrum utilization. It allows unlicensed users, the secondary users (SUs), to use the free
spectrum when the owner, the primary user (PU), is absent during a period of time. A cognitive
radio system performs communication through a three-process cycle: spectrum sensing, decision-
making, and taking action [4]. Spectrum sensing enables SUs to detect available channels to use
for their transmissions without interfering with the PU signals. Examples of spectrum sensing
techniques include energy, autocorrelation, and Euclidean distance based detection [4]. Energy
detection operates by comparing the energy of the SU received signals with a threshold. Despite
its simplicity, it cannot differentiate between the noise and the signal, which makes it inefficient
and inaccurate [4]. Autocorrelation based detection consists of comparing the autocorrelation
function of the SU received samples at lag one to the autocorrelation function of the SU received
samples at lag zero. If the two values are close, the PU signal is present; if not, it is absent. This
technique can distinguish the signal from the noise, but it does not account for the internal thermal
noise [4]. Euclidean distance based detection consists of computing the Euclidean distance
between the autocorrelation function and a reference line corresponding to the internal noise of
the communication device. These distance values are then compared with a threshold to decide
about the availability of the channel [4]. Spectrum scanning techniques allow the measurement
of the spectrum occupancy over time and frequency. A number of spectrum occupancy surveys
have been conducted around the world, both for wideband ranges and for specific licensed
frequency bands; examples of these surveys are presented and discussed below. In one study, the
authors conducted a spectrum occupancy study over a narrow range of frequencies using a low-
noise amplifier followed by a spectrum analyzer; energy detection was adopted as the simplest
spectrum sensing technique, with a fixed threshold and a 1% false-alarm rate. In [4], a spectrum
measurement survey was performed over a short frequency range to measure the spectrum
occupancy and identify the free bands at different locations, using energy detection with a
predefined threshold. In another campaign, the spectrum occupancy was measured over space,
time, and frequency simultaneously; the survey was performed by comparing the power spectral
density of the measured spectrum to a constant threshold. Similarly, in another survey, energy
detection was employed for spectrum scanning over a long frequency range with a costly setup.
In [4], the authors proposed a spectrum survey using Euclidean distance based detection with
SDR units. The measurements were performed and compared to those of energy detection and
autocorrelation based detection.
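
The autocorrelation-based rule mentioned above can be sketched in a few lines of C#; the
synthetic received signal, the lag-1/lag-0 ratio test, and the decision value below are illustrative
assumptions only, not the exact detector of [4].

using System;

class AutocorrelationDetectorDemo
{
    // Sample autocorrelation of the received signal at a given lag.
    static double Autocorrelation(double[] x, int lag)
    {
        double sum = 0.0;
        for (int i = 0; i + lag < x.Length; i++)
            sum += x[i] * x[i + lag];
        return sum / (x.Length - lag);
    }

    static void Main()
    {
        Random rng = new Random(5);
        int n = 2048;
        double[] samples = new double[n];

        // Hypothetical received signal: a slowly varying (correlated) PU component plus noise.
        for (int i = 0; i < n; i++)
            samples[i] = Math.Sin(2.0 * Math.PI * 0.01 * i) + 0.7 * (rng.NextDouble() - 0.5);

        double r0 = Autocorrelation(samples, 0);    // lag-0 value (signal power)
        double r1 = Autocorrelation(samples, 1);    // lag-1 value
        double ratio = Math.Abs(r1) / r0;           // near 1 for correlated PU signals, near 0 for white noise
        double threshold = 0.3;                     // illustrative decision value only

        Console.WriteLine("|R(1)|/R(0) = {0:F3}, PU present = {1}", ratio, ratio > threshold);
    }
}
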
In this paper, we propose to adopt compressive sensing at the SU receiver before
performing the sensing. The wideband spectrum scanning is performed on the compressed
signals, instead of capturing the whole signal and then performing the spectrum sensing process.
Compressive sensing is a promising solution for the next generation of wireless communication
networks, enabling signal acquisition at a rate lower than the Nyquist rate.
In this work, compressive sensing is used only for the sampling, without recovering the
signal, as it is not necessary to recover the compressed signal to fulfill our main objective, which
is performing the spectrum scanning on the sampled received signals rather than on the whole
signal. The part of the signal removed by the matrix sampling includes only null coefficients,
based on the signal sparsity, and the main information is kept for the spectrum sensing. This
compressive sensing approach is known as compressive signal processing and allows solving
signal processing problems directly from the compressed signals.

Compressive sensing has been proposed as a low-cost solution to speed up the scanning
process and reduce the computational complexity. It involves three main processes: sparse
representation, encoding, and decoding. During the first process, the signal, S, is projected onto a
sparse basis. During the second process, S is multiplied by an M×N sampling matrix to extract
M measurements from the N samples of S, where M << N. In the last process, the signal is
reconstructed from the few M measurements [5]. For the encoding process, a number of
sampling matrices have been proposed in the literature, including random matrices, circulant
matrices [5], Toeplitz matrices, and deterministic matrices. Because of their simplicity, more
interest has been paid to random matrices.
These matrices are randomly generated with independent and identically distributed
(i.i.d.) elements, for example from Gaussian or Bernoulli distributions. In general, compressive
sensing requires that the sampling matrix satisfy the Restricted Isometry Property (RIP). The RIP
is a characteristic of orthonormal matrices bounded with a Restricted Isometry Constant (RIC),
which is a positive number between 0 and 1 that respects the RIP condition [5]. This condition
guarantees the uniqueness of the reconstructed solution during the decoding process. For
random matrices, the matrix satisfies the RIP condition for small RIC [5]. However, these
matrices require a great deal of processing time and a high memory capacity to store the matrix
coefficients [5]. Because of the randomness, the results are uncertain, which can make the signal
reconstruction inefficient.
For the decoding process, a number of algorithms that exploit the sparsity feature of
signals have been proposed in the literature. A sparse signal can be estimated from a few
measurements by solving the underdetermined system using three different types of algorithms:
iterative relaxation, greedy, and Bayesian models. The iterative relaxation category includes
techniques that solve the underdetermined system using linear programming. Some techniques
classified under this category are ℓ1-norm minimization, known as basis pursuit, gradient descent,
and iterative thresholding. Greedy algorithms consist of selecting a local optimum at each step in
order to find the global optimum, which corresponds to the estimated signal coefficients.
Examples of techniques classified under this category are matching pursuit, orthogonal matching
pursuit, and stagewise orthogonal matching pursuit. Bayesian compressive sensing algorithms
consist of using a Bayesian model to estimate the unknown parameters in order to deal with
uncertainty in measurements. Examples of techniques classified under this category are the
Bayesian model using relevance vector machine learning, the Bayesian model using Laplace
priors, and the Bayesian model via belief propagation. All these Bayesian based algorithms have
been used only with random matrices.
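
As a small illustration of the greedy family mentioned above, the following C# sketch implements
plain matching pursuit: at each iteration it selects the dictionary column most correlated with the
current residual and subtracts its contribution. The matrix size, sparsity, and iteration count are
arbitrary illustrative choices, not parameters from the cited works.

using System;

class MatchingPursuitSketch
{
    // Greedy matching pursuit: at each step pick the dictionary column most
    // correlated (after normalisation) with the residual and remove its contribution.
    static double[] MatchingPursuit(double[,] phi, double[] y, int iterations)
    {
        int m = phi.GetLength(0), n = phi.GetLength(1);
        double[] xHat = new double[n];
        double[] r = (double[])y.Clone();           // residual

        for (int it = 0; it < iterations; it++)
        {
            int bestCol = -1;
            double bestScore = 0.0, bestCorr = 0.0, bestNorm = 1.0;
            for (int j = 0; j < n; j++)
            {
                double corr = 0.0, norm = 0.0;
                for (int i = 0; i < m; i++)
                {
                    corr += r[i] * phi[i, j];
                    norm += phi[i, j] * phi[i, j];
                }
                double score = norm > 0.0 ? Math.Abs(corr) / Math.Sqrt(norm) : 0.0;
                if (score > bestScore) { bestScore = score; bestCorr = corr; bestNorm = norm; bestCol = j; }
            }
            if (bestCol < 0) break;                 // residual already orthogonal to the dictionary

            double coeff = bestCorr / bestNorm;     // least-squares step along the chosen column
            xHat[bestCol] += coeff;
            for (int i = 0; i < m; i++)
                r[i] -= coeff * phi[i, bestCol];
        }
        return xHat;
    }

    static void Main()
    {
        // Small synthetic example: a 2-sparse signal measured by a random 12 x 32 matrix.
        Random rng = new Random(9);
        int m = 12, n = 32;
        double[,] phi = new double[m, n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                phi[i, j] = rng.NextDouble() * 2.0 - 1.0;

        double[] x = new double[n];
        x[4] = 1.5; x[20] = -0.8;
        double[] y = new double[m];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                y[i] += phi[i, j] * x[j];

        double[] xHat = MatchingPursuit(phi, y, 10);
        Console.WriteLine("xHat[4] = {0:F2}, xHat[20] = {1:F2}", xHat[4], xHat[20]);
    }
}
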
2. SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

With the rapid development of current wireless communication systems, effective utilization
of the limited system spectrum to provide high-speed data rate services has become a very
critical issue for system operators. Basically, the spectrum allocations of the different wireless
systems are all pre-determined by the administration of each country. Cognitive radio
technology is a promising technology to solve the wireless spectrum scarcity problem by
intelligently allowing secondary, or unlicensed, users access to the primary, licensed, users'
frequency bands. Cognitive technology involves two main tasks: 1) sensing the wireless medium
to assess the presence of the primary users and 2) designing secondary spectrum access
techniques that maximize the secondary users' benefits while maintaining the primary users'
privileged status. An energy-based maximum likelihood spectrum sensing method is used to avoid
the troublesome calculation of the required threshold in the conventional energy-based method.
Basically, the existing method uses the singular value decomposition method and a simple ratio
test of the eigenvalues to estimate the noise energy and the non-centrality parameters required for
the LR test. Since all these steps can be easily conducted at the SU, the existing method can be
applied to energy-based spectrum sensing to improve the detection performance.

Disadvantages

 The computation of the threshold used for signal detection is highly susceptible to
unknown and varying noise levels, which results in poor performance in low-SNR environments.
 It is not possible to distinguish among different primary users, since energy detectors
cannot discriminate among the sources of the received energy.
 Performance depends on the accuracy of noise power estimation.

2.2 PROPOSED SYSTEM

With the fast development and great richness of wireless radio technology, the wireless
spectrum has become one of the most valuable and needed resources. As a dynamic spectrum
resource utilization technology, cognitive radio (CR) has received much recent attention around
the world. In a CRN, under the condition of protecting the authorized (primary) user from
interference, the fast and accurate detection of spectrum holes is the crucial premise and key
procedure for improving the spectrum utilization ratio. Currently, most spectrum sensing
technology is narrowband-based, and its detection efficiency is low; this calls for new spectrum
sensing approaches with a hard real-time character, and wide-band spectrum sensing is the
direction of development for the future.

The proposed system develops an algorithm called ACBSS, which combines a hierarchical
data-fusion scheme with joint compressive reconstruction based on Compressive Sensing (CS)
theory to realize wide-band spectrum sensing. The simulation results show that the ACBSS
algorithm can sense the wide-band signal accurately and efficiently, and also relieve the data
fusion center from the heavy pressure of computation.
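
The ACBSS algorithm itself is detailed in the following chapters; purely as an illustration of the
hierarchical data-fusion idea (and not of the project's actual reconstruction procedure), the C#
sketch below fuses hypothetical local hard decisions inside each cluster by majority vote and then
combines the cluster-head decisions at the fusion center with an OR rule. The cluster sizes and
decisions are invented for the example.

using System;
using System.Linq;

class HierarchicalFusionSketch
{
    // First level: each cluster head fuses its members' local hard decisions by majority vote.
    static bool ClusterDecision(bool[] memberDecisions)
    {
        return memberDecisions.Count(d => d) * 2 > memberDecisions.Length;
    }

    // Second level: the fusion center declares the PU present if any cluster reports it (OR rule).
    static bool FusionCenterDecision(bool[] clusterDecisions)
    {
        return clusterDecisions.Any(d => d);
    }

    static void Main()
    {
        // Hypothetical local decisions from three clusters of CR users.
        bool[][] clusters = new bool[][]
        {
            new bool[] { true,  true,  false, true  },   // cluster 1: majority reports PU present
            new bool[] { false, false, false, true  },   // cluster 2: majority reports PU absent
            new bool[] { false, false, true  }           // cluster 3: majority reports PU absent
        };

        bool[] clusterHeads = clusters.Select(c => ClusterDecision(c)).ToArray();
        Console.WriteLine("cluster-head decisions: {0}",
                          string.Join(", ", clusterHeads.Select(d => d.ToString()).ToArray()));
        Console.WriteLine("fusion center decision (PU present): {0}", FusionCenterDecision(clusterHeads));
    }
}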

Advantages

 The data processing pressure can be reduced.
 Signals sensed by nearby CR users are of much greater relevance.
 Wide-band spectrum holes can be detected.

2.3 SYSTEM REQUIREMENTS

2.3.1 HARDWARE SPECIFICATION

Processor : Intel Pentium IV Dual Core 2.8 GHz


Hard Disk : 160 GB

Monitor : LG 17” Color Monitor

RAM : 1 GB

Keyboard : 104 Keys Multimedia Keyboard

Mouse : Logitech Optical Mouse

CD – ROM : 52X CD-ROM.

2.3.2 SOFTWARE SPECIFICATION

Server Side Programming : ASP.NET 3.5

Middleware Programming : C#

Operating System : Windows 7


Web Server : Internet Information Server 7.0

Client Script : HTML, CSS and Java Script

Database : SQL-Server 2008

2.3.3 ABOUT SOFTWARE

.NET FRAMEWORK

The Microsoft® .NET Framework version 1.1 is an integral Windows component that supports
building and running the next generation of applications and XML Web services. The key
components of the .NET Framework are the common language runtime and the .NET Framework
class library, which includes ADO.NET, ASP.NET, and Windows Forms. The .NET Framework
provides a managed execution environment, simplified development and deployment, and
integration with a wide variety of programming languages. A brief introduction to the architecture
of the .NET Framework follows.

The .NET Framework is a new computing platform that simplifies application
development in the highly distributed environment of the Internet. The .NET Framework is
designed to fulfill the following objectives.

 To provide a consistent object-oriented programming environment whether object
code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
 To provide a code-execution environment that minimizes software deployment
and versioning conflicts.
 To provide a code-execution environment that eliminates the performance
problems of scripted or interpreted environments.
 To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.
 To build all communication on industry standards to ensure that code based on the
.NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and
the .NET Framework class library. The common language runtime is the foundation of the .NET
Framework. You can think of the runtime as an agent that manages code at execution time,
providing core services such as memory management, thread management, and remoting, while
also enforcing strict type safety and other forms of code accuracy that ensure security and
robustness. In fact, the concept of code management is a fundamental principle of the runtime.
Code that targets the runtime is known as managed code, while code that does not target the
runtime is known as unmanaged code. The class library, the other main component of the .NET
Framework, is a comprehensive, object-oriented collection of reusable types that you can use to
develop applications ranging from traditional command-line or graphical user interface (GUI)
applications to applications based on the latest innovations provided by ASP.NET, such as Web
Forms and XML Web services.

The following illustration shows the relationship of the common language runtime and
the class library to your applications and to the overall system. The illustration also shows how
managed code operates within a larger architecture.

.NET FRAMEWORK IN CONTEXT

[Figure: .NET Framework in context]

FEATURES OF THE COMMON LANGUAGE RUNTIME

The common language runtime manages memory, thread execution, code execution, code
safety verification, compilation, and other system services. These features are intrinsic to the
managed code that runs on the common language runtime.

With regards to security, managed components are awarded varying degrees of trust,
depending on a number of factors that include their origin (such as the Internet, enterprise
network, or local computer). This means that a managed component might or might not be able
to perform file-access operations, registry-access operations, or other sensitive functions, even if
it is being used in the same active application.

The runtime enforces code access security. For example, users can trust that an
executable embedded in a Web page can play an animation on screen or sing a song, but cannot
access their personal data, file system, or network. The security features of the runtime thus
enable legitimate Internet-deployed software to be exceptionally feature rich.

The runtime also enforces code robustness by implementing a strict type-and-code-
verification infrastructure called the common type system (CTS). The CTS ensures that all
managed code is self-describing. The various Microsoft and third-party language compilers
generate managed code that conforms to the CTS. This means that managed code can consume
other managed types and instances, while strictly enforcing type fidelity and type safety.

In addition, the managed environment of the runtime eliminates many common software
issues. For example, the runtime automatically handles object layout and manages references to
objects, releasing them when they are no longer being used. This automatic memory
management resolves the two most common application errors, memory leaks and invalid
memory references.

The runtime also accelerates developer productivity. For example, programmers can
write applications in their development language of choice, yet take full advantage of the
runtime, the class library, and components written in other languages by other developers. Any
compiler vendor who chooses to target the runtime can do so. Language compilers that target the
.NET Framework make the features of the .NET Framework available to existing code written in
that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports software of
today and yesterday. Interoperability between managed and unmanaged code enables developers
to continue to use necessary COM components and DLLs.

The runtime is designed to enhance performance. Although the common language
runtime provides many standard runtime services, managed code is never interpreted. A feature
called just-in-time (JIT) compiling enables all managed code to run in the native machine
language of the system on which it is executing. Meanwhile, the memory manager removes the
possibilities of fragmented memory and increases memory locality-of-reference to further
increase performance.
Finally, the runtime can be hosted by high-performance, server-side applications, such as
Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables
you to use managed code to write your business logic, while still enjoying the superior
performance of the industry's best enterprise servers that support runtime hosting.

.NET FRAMEWORK CLASS LIBRARY

The .NET Framework class library is a collection of reusable types that tightly integrate
with the common language runtime. The class library is object oriented, providing types from
which your own managed code can derive functionality. This not only makes the .NET
Framework types easy to use, but also reduces the time associated with learning new features of
the .NET Framework. In addition, third-party components can integrate seamlessly with classes
in the .NET Framework.

For example, the .NET Framework collection classes implement a set of interfaces that
you can use to develop your own collection classes. Your collection classes will blend
seamlessly with the classes in the .NET Framework.
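
For instance, a small custom collection only has to implement IEnumerable&lt;T&gt; to blend in with
foreach, LINQ, and other framework code that expects the standard collection interfaces. The
ChannelList class below is an invented example, not a framework type.

using System;
using System.Collections;
using System.Collections.Generic;

// Hypothetical custom collection: implementing IEnumerable<T> lets it work with
// foreach, LINQ, and any framework code that expects the standard collection interfaces.
class ChannelList : IEnumerable<int>
{
    private readonly List<int> channels = new List<int>();

    public void Add(int channelNumber)
    {
        channels.Add(channelNumber);
    }

    public IEnumerator<int> GetEnumerator()
    {
        return channels.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

class Program
{
    static void Main()
    {
        // Collection initializer syntax works because ChannelList has Add and implements IEnumerable.
        ChannelList list = new ChannelList { 1, 6, 11 };
        foreach (int channel in list)
            Console.WriteLine("channel {0}", channel);
    }
}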

As you would expect from an object-oriented class library, the .NET Framework types
enable you to accomplish a range of common programming tasks, including tasks such as string
management, data collection, database connectivity, and file access. In addition to these common
tasks, the class library includes types that support a variety of specialized development scenarios.
For example, you can use the .NET Framework to develop the following types of applications
and services:

 Console applications.
 Windows GUI applications (Windows Forms).
 ASP.NET applications.
 XML Web services.
 Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that
vastly simplify Windows GUI development. If you write an ASP.NET Web Form application,
you can use the Web Forms classes.

CLIENT APPLICATION DEVELOPMENT

Client applications are the closest to a traditional style of application in Windows-based
programming. These are the types of applications that display windows or forms on the desktop,
enabling a user to perform a task. Client applications include applications such as word
processors and spreadsheets, as well as custom business applications such as data-entry tools,
reporting tools, and so on. Client applications usually employ windows, menus, buttons, and
other GUI elements, and they likely access local resources such as the file system and peripherals
such as printers.

Another kind of client application is the traditional ActiveX control (now replaced by the
managed Windows Forms control) deployed over the Internet as a Web page. This application is
much like other client applications: it is executed natively, has access to local resources, and
includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction with the
Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of
these existing products into a single, consistent development environment that drastically
simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be used
for GUI development. You can easily create command windows, buttons, menus, toolbars, and
other screen elements with the flexibility necessary to accommodate shifting business needs.

SERVER APPLICATION DEVELOPMENT

Server-side applications in the managed world are implemented through runtime hosts.
Unmanaged applications host the common language runtime, which allows your custom
managed code to control the behavior of the server. This model provides you with all the features
of the common language runtime and class library while gaining the performance and scalability
of the host server.

The following illustration shows a basic network schema with managed code running in
different server environments. Servers such as IIS and SQL Server can perform standard
operations while your application logic executes through the managed code.

SERVER-SIDE MANAGED CODE

ASP.NET is the hosting environment that enables developers to use the .NET Framework
to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a
complete architecture for developing Web sites and Internet-distributed objects using managed
code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing
mechanism for applications, and both have a collection of supporting classes in the .NET
Framework.

[Figure: Server-side managed code]

XML Web services, an important evolution in Web-based technology, are distributed,
server-side application components similar to common Web sites. However, unlike Web-based
applications, XML Web services components have no UI and are not targeted for browsers such
as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable
software components designed to be consumed by other applications, such as traditional client
applications, Web-based applications, or even other XML Web services. As a result, XML Web
services technology is rapidly moving application development and deployment into the highly
distributed environment of the Internet.

If you have used earlier versions of ASP technology, you will immediately notice the
improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms
pages in any language that supports the .NET Framework. In addition, your code no longer needs
to share the same file with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other managed application,
they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted
and interpreted. ASP.NET pages are faster, more functional, and easier to develop than
unmanaged ASP pages because they interact with the runtime like any managed application.

The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web services are built
on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format),
and WSDL (the Web Services Description Language). The .NET Framework is built on these
standards to promote interoperability with non-Microsoft solutions.

For example, the Web Services Description Language tool included with the .NET
Framework SDK can query an XML Web service published on the Web, parse its WSDL
description, and produce C# or Visual Basic source code that your application can use to become
a client of the XML Web service. The source code can create classes derived from classes in the
class library that handle all the underlying communication using SOAP and XML parsing.
Although you can use the class library to consume XML Web services directly, the Web
Services Description Language tool and the other tools contained in the SDK facilitate your
development efforts with the .NET Framework.

If you develop and publish your own XML Web service, the .NET Framework provides a
set of classes that conform to all the underlying communication standards, such as SOAP,
WSDL, and XML. Using those classes enables you to focus on the logic of your service, without
concerning yourself with the communications infrastructure required by distributed software
development.
Finally, like Web Forms pages in the managed environment, your XML Web service will
run with the speed of native machine language using the scalable communication of IIS.

COMMON LANGUAGE RUNTIME

Compilers and tools expose the runtime's functionality and enable you to write code that
benefits from this managed execution environment. Code that you develop with a language
compiler that targets the runtime is called managed code; it benefits from features such as cross-
language integration, cross-language exception handling, enhanced security, versioning and
deployment support, a simplified model for component interaction, and debugging and profiling
services.

To enable the runtime to provide services to managed code, language compilers must emit
metadata that describes the types, members, and references in your code. Metadata is stored with
the code; every loadable common language runtime portable executable (PE) file contains
metadata. The runtime uses metadata to locate and load classes, lay out instances in memory,
resolve method invocations, generate native code, enforce security, and set run-time context
boundaries.

The runtime automatically handles object layout and manages references to objects, releasing
them when they are no longer being used. Objects whose lifetimes are managed in this way are
called managed data. Garbage collection eliminates memory leaks as well as some other
common programming errors. If your code is managed, you can use managed data, unmanaged
data, or both managed and unmanaged data in your .NET Framework application. Because
language compilers supply their own types, such as primitive types, you might not always know
(or need to know) whether your data is being managed.

The common language runtime makes it easy to design components and applications whose
objects interact across languages. Objects written in different languages can communicate with
each other, and their behaviors can be tightly integrated. For example, you can define a class and
then use a different language to derive a class from your original class or call a method on the
original class. You can also pass an instance of a class to a method of a class written in a
different language. This cross-language integration is possible because language compilers and
tools that target the runtime use a common type system defined by the runtime, and they follow
the runtime's rules for defining new types, as well as for creating, using, persisting, and binding
to types.

As part of their metadata, all managed components carry information about the
components and resources they were built against. The runtime uses this information to ensure
that your component or application has the specified versions of everything it needs, which
makes your code less likely to break because of some unmet dependency. Registration
information and state data are no longer stored in the registry where they can be difficult to
establish and maintain. Rather, information about the types you define (and their dependencies)
is stored with the code as metadata, making the tasks of component replication and removal
much less complicated.

Language compilers and tools expose the runtime's functionality in ways that are intended to
be useful and intuitive to developers. This means that some features of the runtime might be
more noticeable in one environment than in another. How you experience the runtime depends
on which language compilers or tools you use. For example, if you are a Visual Basic developer,
you might notice that with the common language runtime, the Visual Basic language has more
object-oriented features than before. Following are some benefits of the runtime.

 Performance improvements.
 The ability to easily use components developed in other languages.
 Extensible types provided by a class library.
 New language features such as inheritance, interfaces, and overloading for object-
oriented programming; support for explicit free threading that allows creation of
multithreaded, scalable applications; support for structured exception handling and
custom attributes.

If you use Microsoft® Visual C++® .NET, you can write managed code using the Managed
Extensions for C++, which provide the benefits of a managed execution environment as well as
access to powerful capabilities and expressive data types that you are familiar with. Additional
runtime features include:
 Cross-language integration, especially cross-language inheritance.
 Garbage collection, which manages object lifetime so that reference counting is
unnecessary.
 Self-describing objects, which make using Interface Definition Language (IDL)
unnecessary.

ADO.NET ARCHITECTURE


REMOTING OR MARSHALING DATA BETWEEN TIERS AND CLIENTS

The design of the DataSet enables you to easily transport data to clients over the Web
using XML Web services, as well as to marshal data between .NET components using .NET
Remoting services. You can also remote a strongly typed DataSet in this fashion. An overview
of remoting services can be found in the .NET Remoting Overview. Note that DataTable objects
can also be used with remoting services, but they cannot be transported via an XML Web
service.

.NET FRAMEWORK DATA PROVIDERS

A .NET Framework data provider is used for connecting to a database, executing
commands, and retrieving results. Those results are either processed directly, or placed in an
ADO.NET DataSet in order to be exposed to the user in an ad hoc manner, combined with data
from multiple sources, or remoted between tiers. The .NET Framework data provider is designed
to be lightweight, creating a minimal layer between the data source and your code, increasing
performance without sacrificing functionality.

The following table outlines the four core objects that make up a .NET Framework data
provider.

Object – Description

 Connection – Establishes a connection to a specific data source.
 Command – Executes a command against a data source. Exposes Parameters and can
execute within the scope of a Transaction from a Connection.
 Data Reader – Reads a forward-only, read-only stream of data from a data source.
 Data Adapter – Populates a DataSet and resolves updates with the data source.
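
The following minimal sketch (not taken from the project code) shows how these four objects
typically work together with the SQL Server provider in C#; the connection string, table, and
column names are illustrative assumptions only.

using System.Data;
using System.Data.SqlClient;

class DataProviderDemo
{
    static void Main()
    {
        // Hypothetical connection string and table; adjust for a real database.
        string connStr = "Server=(local);Database=SampleDb;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connStr))      // Connection
        {
            conn.Open();

            // Command + DataReader: a forward-only, read-only stream of rows.
            SqlCommand cmd = new SqlCommand("SELECT id, name FROM demo_table", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    System.Console.WriteLine("{0}: {1}", reader[0], reader[1]);
            }

            // DataAdapter: fills a disconnected DataSet in a single call.
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT id, name FROM demo_table", conn);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "demo_table");
        }
    }
}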

The .NET Framework includes the .NET Framework Data Provider for SQL Server (for
Microsoft SQL Server version 7.0 or later), the .NET Framework Data Provider for OLE DB,
and the .NET Framework Data Provider for ODBC.
Note   The .NET Framework Data Provider for ODBC is not included in the .NET Framework
version 1.0. If you require the .NET Framework Data Provider for ODBC and are using the .NET
Framework version 1.0, you can download the .NET Framework Data Provider for ODBC at
http://msdn.microsoft.com/downloads. The namespace for the downloaded .NET Framework
Data Provider for ODBC is Microsoft.Data.Odbc.

THE .NET FRAMEWORK DATA PROVIDER FOR SQL SERVER

The .NET Framework Data Provider for SQL Server uses its own protocol to
communicate with SQL Server. It is lightweight and performs well because it is optimized to
access a SQL Server directly without adding an OLE DB or Open Database Connectivity
(ODBC) layer. The following illustration contrasts the .NET Framework Data Provider for SQL
Server with the .NET Framework Data Provider for OLE DB. The .NET Framework Data
Provider for OLE DB communicates to an OLE DB data source through both the OLE DB
Service component, which provides connection pooling and transaction services, and the OLE
DB Provider for the data source.

Comparison of the .NET Framework Data Provider for SQL Server and the .NET Framework
Data Provider for OLE DB

To use the .NET Framework Data Provider for SQL Server, you must have access to
Microsoft SQL Server 7.0 or later. .NET Framework Data Provider for SQL Server classes are
located in the System.Data.SqlClient namespace. For earlier versions of Microsoft SQL Server,
use the .NET Framework Data Provider for OLE DB with the SQL Server OLE DB Provider
(SQLOLEDB).
VISUAL C# LANGUAGE

Microsoft C# (pronounced C sharp) is a new programming language designed for building a
wide range of enterprise applications that run on the .NET Framework. An evolution of
Microsoft C and Microsoft C++, C# is simple, modern, type safe, and object oriented. C# code
is compiled as managed code, which means it benefits from the services of the common language
runtime. These services include language interoperability, garbage collection, enhanced security,
and improved versioning support.

C# is introduced as Visual C# in the Visual Studio .NET suite. Support for Visual C# includes
project templates, designers, property pages, code wizards, an object model, and other features
of the development environment. The library for Visual C# programming is the .NET
Framework.

C# (pronounced “See Sharp”) is a simple, modern, object-oriented, and type-safe programming
language. C# has its roots in the C family of languages and will be immediately familiar to C,
C++, and Java programmers. C# is standardized by ECMA International as the ECMA-334
standard and by ISO/IEC as the ISO/IEC 23270 standard. Microsoft’s C# compiler for the .NET
Framework is a conforming implementation of both of these standards. C# aims to combine the
high productivity of Visual Basic and the raw power of C++.

Visual C# .NET is Microsoft’s C# development tool. It includes an interactive development
environment, visual designers for building Windows and Web applications, a compiler, and a
debugger. Visual C# .NET is part of a suite of products, called Visual Studio .NET, that also
includes Visual Basic .NET, Visual C++ .NET, and the JScript scripting language. All of these
languages provide access to the Microsoft .NET Framework, which includes a common
execution engine and a rich class library. The .NET Framework defines a “Common Language
Specification” (CLS), a sort of lingua franca that ensures seamless interoperability between
CLS-compliant languages and class libraries. For C# developers, this means that even though C#
is a new language, it has complete access to the same rich class libraries that are used by
seasoned tools such as Visual Basic .NET and Visual C++ .NET. C# itself does not include a
class library.
C# is an object-oriented language, but C# further includes support for component-
oriented programming. Contemporary software design increasingly relies on software
components in the form of self-contained and self-describing packages of functionality. Key to
such components is that they present a programming model with properties, methods and events;
they have attributes that provide declarative information about the component; and they
incorporate their own documentation. C# provides language constructs to directly support these
concepts, making C# a very natural language in which to create and use software components.

Several C# features aid in the construction of robust and durable applications: Garbage
collection automatically reclaims memory occupied by unused objects; exception handling
provides a structured and extensible approach to error detection and recovery; and the type-safe
design of the language makes it impossible to have uninitialized variables, to index arrays
beyond their bounds, or to perform unchecked type casts.

C# has a unified type system. All C# types, including primitive types such as int and
double, inherit from a single root object type. Thus, all types share a set of common operations,
and values of any type can be stored, transported, and operated upon in a consistent manner.
Furthermore, C# supports both user-defined reference types and value types, allowing dynamic
allocation of objects as well as in-line storage of lightweight structures.
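
As a brief illustration of the unified type system (a minimal sketch, not from the project code),
any value type can be treated as the root object type through boxing and unboxing:

class UnifiedTypeDemo
{
    static void Main()
    {
        int i = 123;
        object o = i;            // boxing: the int value is stored as the root object type
        int j = (int)o;          // unboxing: the value is copied back into an int
        System.Console.WriteLine(i.ToString() + " " + j);   // int inherits ToString() from object
    }
}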

To ensure that C# programs and libraries can evolve over time in a compatible manner,
much emphasis has been placed on versioning in C#’s design. Many programming languages
pay little attention to this issue, and, as a result, programs written in those languages break more
often than necessary when newer versions of dependent libraries are introduced. Aspects of
C#’s design that were directly influenced by versioning considerations include the separate
virtual and override modifiers, the rules for method overload resolution, and support for explicit
interface member declarations.

C# 2.0 introduces several language extensions, including Generics, Anonymous Methods,
Iterators, Partial Types, and Nullable Types; a short sketch illustrating several of these features
follows the list below.

 Generics permit classes, structs, interfaces, delegates, and methods to be parameterized
by the types of data they store and manipulate. Generics are useful because they provide
stronger compile-time type checking, require fewer explicit conversions between data
types, and reduce the need for boxing operations and run-time type checks.
 Anonymous methods allow code blocks to be written “in-line” where delegate values are
expected. Anonymous methods are similar to lambda functions in the Lisp programming
language. C# 2.0 supports the creation of “closures” where anonymous methods access
surrounding local variables and parameters.
 Iterators are methods that incrementally compute and yield a sequence of values.
Iterators make it easy for a type to specify how the foreach statement will iterate over its
elements.
 Partial types allow classes, structs, and interfaces to be broken into multiple pieces stored
in different source files for easier development and maintenance. Additionally, partial
types allow separation of machine-generated and user-written parts of types so that it is
easier to augment code generated by a tool.
 Nullable types represent values that possibly are unknown. A nullable type supports all
values of its underlying type plus an additional null state. Any value type can be the
underlying type of a nullable type.
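
The following short sketch (illustrative only, not project code) shows several of these C# 2.0
features working together: a generic collection, an iterator, an anonymous method, a partial type,
and a nullable type.

using System;
using System.Collections.Generic;

partial class Csharp2Demo            // partial type: could be split across source files
{
    // Iterator: lazily computes and yields a sequence of squares.
    static IEnumerable<int> Squares(int count)
    {
        for (int i = 1; i <= count; i++)
            yield return i * i;
    }

    static void Main()
    {
        List<int> values = new List<int>();          // generic collection
        foreach (int s in Squares(5))
            values.Add(s);

        // Anonymous method supplied where a Predicate<int> delegate is expected.
        int firstLarge = values.Find(delegate(int v) { return v > 10; });

        int? unknown = null;                         // nullable value type
        Console.WriteLine("{0}, {1}", firstLarge, unknown.HasValue ? unknown.Value : 0);
    }
}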

ASP.NET

The .NET Framework includes tools that ease the creation of web services. ASP.NET is
the latest offering from Microsoft toward the creation of a new paradigm for server-side
scripting. This section covers the basics of ASP.NET, which provides a complete framework
for the development of web applications. It introduces ASP.NET, the platform requirements
for ASP.NET applications, and the ASP.NET architecture. In addition, it introduces Web Forms,
a new addition to ASP.NET.

ASP.NET differs in some ways from earlier versions of ASP. ASP.NET has new
features such as better language support, a new set of controls, XML-based components, and
more secure user authentication. ASP.NET also provides increased performance by executing
compiled code rather than interpreted script.
Usually a software product undergoes many evolutionary phases. In each release version
of the software product, the software vendor fixes the bugs from previous versions and adds new
features. ASP 1.0 was released in 1996. Since then, two more versions of ASP (2.0 and 3.0)
have been released. In various versions of ASP, new features have been added. However, the
basic methodology used for creating applications has not changed.

ASP.NET provides a unique approach toward web application development, so one
might say that ASP.NET has started a new revolution in the world of web application
development. ASP.NET is based on the Microsoft .NET Framework, which in turn is based on
the common language runtime (CLR). Therefore, it imparts all of the CLR benefits to ASP.NET
applications. These CLR benefits include automatic memory management, support for multiple
languages, secure user authentication, ease of configuration, and ease of deployment.

BENEFITS OF ASP.NET

Support for various programming languages: ASP.NET provides better programming-language
support than ASP, and it uses the new ADO.NET. Earlier versions of ASP supported only
scripting languages such as VBScript and JScript. Using these scripting languages, one can
write applications that perform server-side processing, but this has two major drawbacks.
First, scripting languages are interpreted and not compiled, so errors can only be checked at
runtime. This affects the performance of web applications. Second, scripting languages are not
strongly typed. The scripting languages do not have a built-in set of predefined data types.
This requires developers to cast the existing objects of the language to their expected data type.
Thus, these objects can be validated only at runtime. This validation leads to low performance
of web applications. ASP.NET continues to support scripting languages, but it also fully
supports Visual Basic for server-side programming. ASP.NET also provides support for C#
(pronounced C sharp) and C++.
CROSS – LANGUAGE DEVELOPMENT

ASP.NET provides the flexibility to extend components created in one language from another
language. For example, if there is an object written in C++, ASP.NET enables us to extend this
object in Visual Basic.

ASP.NET PAGE SYNTAX

DIRECTIVES

<%@ Page Language="VB" [...] %>

CODE DECLARATION BLOCKS

<script runat="server" [...]>
[lines of code]
</script>

CODE RENDER BLOCKS

<%
[inline code or expressions]
%>

HTML CONTROL SYNTAX

<HTMLelement runat="server" [attribute(s)]>
</HTMLelement>

CUSTOM CONTROL SYNTAX

CUSTOM SERVER CONTROLS

<asp:TextBox id="MyTbi" runat="server" />

SERVER CONTROL PROPERTY

<asp:TextBox MaxLength="80" runat="server" />

SUB PROPERTY

<asp:Label Font-Size="14" runat="server" />

SERVER CONTROL EVENT BINDING

<asp:Button OnClick="MyClick" runat="server" />

DATA BINDING EXPRESSION

<asp:Label Text='<%# data binding expression %>' runat="server" />

SERVER-SIDE OBJECT TAGS

<object id="id" runat="server" class="ClassName" />

SERVER-SIDE INCLUDE DIRECTIVES

<!-- #include pathtype="filename" -->

SERVER-SIDE COMMENTS

<%-- comment block --%>
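
Putting these pieces together, a minimal Web Forms page might look like the following sketch;
the control IDs and the handler name are illustrative assumptions, not part of the project.

<%@ Page Language="C#" %>
<script runat="server">
    // Event handler wired to the button below.
    void MyClick(object sender, EventArgs e)
    {
        MyLabel.Text = "Hello, " + MyTextBox.Text;
    }
</script>
<html>
<body>
    <form runat="server">
        <asp:TextBox id="MyTextBox" MaxLength="80" runat="server" />
        <asp:Button id="MyButton" Text="Submit" OnClick="MyClick" runat="server" />
        <asp:Label id="MyLabel" runat="server" />
    </form>
</body>
</html>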


An application in ASP.NET consists of files, pages, modules, and executable code that reside in
one virtual directory and its subdirectories. Application state is stored in global variables for a
given ASP.NET application. For that reason, developers have to follow some implementation
rules: variables for storing application state occupy system resources, and a global variable has
to be locked and unlocked to prevent problems with concurrent access.

WEB FORMS SERVER CONTROLS

The term server controls always means Web Forms server controls, because they are
specially designed to work with Web Forms.

SERVER CONTROL FAMILIES

Web Forms provides the following server control families:

 HTML server controls
 ASP.NET server controls
 Validation controls
 User controls
 Mobile controls

DATA BINDING
Web Forms control properties can be bound to any data in a data store. This so-called
data binding gives us nearly complete control over how data moves to the page and back again
to the data store.

PAGE CLASS

When a page is loaded, the ASP.NET runtime generates and instantiates a page class.
This object forms a collection of our separate components (like visual elements and business
logic). So all (visual and code) elements are accessible through this object.

HTML SERVER CONTROLS

Simple HTML elements can be converted to HTML server controls, letting the
ASP.NET engine create an instance on the server so that they become programmable on the
server. The conversion is done by simply adding attributes to the HTML tag. The attribute
runat="server" informs the framework to create a server-side instance of the control. If an ID is
additionally assigned, the control can be referenced in code.

For example, the HtmlAnchor control can be used to program against the HTML <a> tag to
dynamically generate the HRef values, or HtmlTable (HTML <table>) to dynamically create
tables and their content.

ASP.NET SERVER CONTROLS

ASP.NET server controls are abstract controls. There is no one-to-one mapping to HTML
server controls, but ASP.NET comes with a rich set of controls.

Another feature is the typed object model. This gives us the potential for type-safe
programming. Server controls can automatically detect which browser the client is using and
generate the proper version of HTML output.
BUTTON

A Button is a way to enable the user to finish editing a form. A Button enforces the submitting
of the page and can additionally raise events such as the Click event.

TEXTBOX

A TextBox is an input box where the user can enter information such as numbers, text, or dates,
formatted as single line, multiline, or password. This control raises a TextChanged event when
the focus “leaves” the control.

VALIDATION CONTROLS
Another group of server controls are validation controls. These can be used to check the
user’s entries. Validation can be processed on the client and on the server.

Validation on the client side can be performed using a client script. In that case, the user
receives immediate feedback, without a roundtrip to the server. Server-side validation
additionally provides, for example, security against users bypassing client-side validation.

ASP.NET PROVIDES THE FOLLOWING TYPES OF VALIDATION

 Required entry – the field must be filled in by the user.
 Comparison to a value – the entered value is checked against another value of another
field, a database, or a constant value by using comparison operators.
 Range checking – the user’s entry is checked to see whether it lies between given
boundaries.
 Pattern matching – a regular expression is defined that the entered value must match.
 User-defined – implement your own validation logic.

When the validation fails, an error message is generated and sent back to the client browser.
This can be done in several ways; for example, all error messages related to a specific
transaction could be collected and presented to the user in a summary. A short markup sketch
follows below.
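
As a small hedged sketch (the control ID and the limits are illustrative assumptions), required-
entry and range checking could be declared as follows:

<asp:TextBox id="AgeBox" runat="server" />
<asp:RequiredFieldValidator ControlToValidate="AgeBox"
    ErrorMessage="Age is required." runat="server" />
<asp:RangeValidator ControlToValidate="AgeBox" Type="Integer"
    MinimumValue="18" MaximumValue="99"
    ErrorMessage="Age must be between 18 and 99." runat="server" />
<asp:ValidationSummary runat="server" />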

SQL SERVER INTRODUCTION

SQL stands for Structured Query Language. SQL is used to communicate with a
database. According to ANSI (American National Standards Institute), it is the standard language
for relational database management systems.
SQL statements are used to perform tasks such as updating data in a database or retrieving
data from a database. Some common relational database management systems that use SQL
are Oracle, Sybase, Microsoft SQL Server, Access, and Ingres. Although most database systems
use SQL, most of them also have their own additional proprietary extensions that are usually
only used on their system.

The standard SQL commands such as “Select”, “Insert”, “Update”, “Delete”, “Create”,
and “Drop” can be used to accomplish almost everything that one needs to do with a database.
The following sections outline the basics of each of these commands.

TABLE BASICS

A relational database system contains one or more objects called tables. The data or
information for the database is stored in these tables. Tables are uniquely identified by their
names and are comprised of columns and rows. Columns contain the column name, data type,
and any other attributes for the column. Rows contain the records or data for the columns.

SELECTING DATA

The select statement is used to query the database and retrieve selected data that match
the criteria that you specify. Here is the format of a simple select statement.

Select "column1" [, "column2", etc.] from "table name"
[Where "condition"];          [ ] = optional

The column names that follow the select keyword determine which columns will be
returned in the results. You can select as many column names as you like, or you can use a "*"
to select all columns. The table name that follows the keyword from specifies the table that will
be queried to retrieve the desired results.

The where clause (optional) specifies which data values or rows will be returned
or displayed, based on the criteria described after the keyword where.
Conditional selections used in the where clause

= Equal

> Greater than

< Less than

>= Greater than or equal

<= Less than or equal

<> Not equal to

LIKE

The LIKE pattern matching operator can also be used in the conditional selection of the
where clause. Like is a very powerful operator that allows you to select only rows that are “Like”
what you specify. The percent sign “%” can be used as a wild card to match any possible
character that might appear before or after the characters specified.

For example

Select first, last, city

From empinfo

Where first LIKE ‘Er%’;

This SQL statement will match any first names that start with ‘Er’. Strings must be in
single quotes. Or you can specify

Select first, last

From empinfo where last LIKE ‘%s’;


This statement will match any last names that end in ‘s’.

Select * from user

Where first = ‘Erie’;

This will only select rows where the first name equals ‘Erie’ exactly.

CREATING TABLES
The create table statement is used to create a new table. Here is the format of a simple
create table statement.
Create table "table name"
("column1" "data type",
 "column2" "data type",
 "column3" "data type");

FORMAT FOR CREATING TABLE BY USING OPTIONAL CONSTRAINTS

Create table "table name"
("column1" "data type" [constraint],
 "column2" "data type" [constraint],
 "column3" "data type" [constraint]);
[ ] = optional

To create a new table, enter the keywords create table followed by the table name,
followed by an open parenthesis, followed by the first column name, followed by the data type
for that column, followed by any optional constraints, and followed by a closing parenthesis. It
is important to place an open parenthesis before the first column definition and a closing
parenthesis after the end of the last column definition. Make sure you separate each column
definition with a comma. All SQL statements should end with a ";".

The table and column names must start with a letter and can be followed by letters,
numbers, or underscores, not to exceed a total of 30 characters in length. Do not use any SQL
reserved keywords as names for tables or columns (such as "select", "create", "insert", etc.).
Data types specify what type of data a particular column can hold. If a column called "Last
Name" is to be used to hold names, then that particular column should have a "VarChar"
(variable-length character) data type.

COMMON DATA TYPES

 Char(size) – Fixed-length character string. Size is specified in parentheses. Max 255
bytes.
 VarChar(size) – Variable-length character string. Max size is specified in parentheses.
 Number(size) – Number value with a maximum number of digits specified in
parentheses.
 Date – Date value.
 Number(size, d) – Number value with a maximum of "size" total digits, of which at most
"d" digits are to the right of the decimal.
What are constraints? When tables are created, it is common for one or more
columns to have constraints associated with them. A constraint is basically a rule associated
with a column that the data entered into that column must follow. For example, a "unique"
constraint specifies that no two records can have the same value in a particular column; they
must all be unique.

The other two most commonly used constraints are "not null", which specifies that a column
can't be left blank, and "primary key". A "primary key" constraint defines a unique
identification for each record (or row) in a table; a concrete example follows below.
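
As a concrete, hypothetical example that ties the create table format and the constraints
together, the empinfo table used in the earlier SELECT examples could be created as follows
(the id column and the column sizes are assumptions):

create table empinfo
(id     number(6)     not null    primary key,
 first  varchar(20)   not null,
 last   varchar(30)   not null,
 city   varchar(30));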

INSERTING INTO A TABLE

The insert statement is used to insert or add a row of data into the table. To insert records
into a table, enter the keywords insert into followed by the table name, followed by an open
parenthesis, followed by a list of column names separated by commas, followed by a closing
parenthesis, followed by the keyword values, followed by the list of values enclosed in
parentheses. The values that you enter will be held in the rows, and they will match up with the
column names that you specify. Strings should be enclosed in single quotes, and numbers should
not.

Insert into "table name"
(first column, ..., last column)
Values (first value, ..., last value);

UPDATING RECORDS
The update statement is used to update or change records that match specified criteria.
This is accomplished by carefully constructing a where clause.
Update "table name"
Set "column name" = "new value"
[, "next column" = "new value 2" ...]
Where "column name" OPERATOR "value"
[and/or "column" OPERATOR "value"];

DELETING RECORDS
The delete statement is used to delete records or rows from the table.

Delete from "table name"
Where "column name" OPERATOR "value"
[and/or "column" OPERATOR "value"];

To delete an entire record/row from a table, enter delete from followed by the table
name, followed by the where clause, which contains the conditions for deletion. If you leave off
the where clause, all records will be deleted. Concrete examples of insert, update, and delete
follow below.
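
The following hypothetical statements, run against the empinfo table sketched earlier, illustrate
insert, update, and delete in turn (the values are made up for illustration only):

insert into empinfo (id, first, last, city)
values (101, 'Eric', 'Jones', 'Chicago');

update empinfo
set city = 'Boston'
where id = 101;

delete from empinfo
where id = 101;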

DROP A TABLE
The drop table command is used to delete a table and all rows in the table. To delete an
entire table including all of its rows, issue the drop table command followed by the table name.
Drop table is different from deleting all of the records in the table. Deleting all of the records in
the table leaves the table including column and constraint information. Dropping the table
removes the table definition as well as all of its rows.

Drop table “table name”

TABLE JOINS
All of the queries up to this point have been useful, with the exception of one major
limitation: you have been selecting from only one table at a time with your SELECT statement.
It is time to introduce one of the most beneficial features of SQL and relational database
systems – the join.

Joins allow you to link data from two or more tables together into a single query result –
from one single SELECT statement. A “join” can be recognized in a SQL SELECT statement if
it has more than one table after the FROM keyword, as in the sketch below.
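
For instance, assuming a second, hypothetical department table that shares a dept_id column
with empinfo, a simple join could be written as:

select empinfo.first, empinfo.last, department.name
from empinfo, department
where empinfo.dept_id = department.dept_id;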

CASE TOOL FOR ANALYSIS


CASE Building Blocks (layered, top to bottom): CASE tools, integration framework,
portability services, operating system, and hardware platform – together forming the
environment architecture.

 To test the developed software
 To maintain the implemented software
 To train new people in software development
 To get a clear idea about software engineering processes

Compilers, editors, and debuggers are available to support most conventional
programming languages. Web development tools assist in the generation of text, graphics,
forms, scripts, and other elements of a web page.

UML
Unified Modeling Language (UML) is a standardized visual specification language
for object modeling. UML is a general-purpose modeling language that includes a graphical
notation used to create an abstract model of a system, referred to as a UML model.

3. PROJECT DESCRIPTION

MODULES

 HOME
 LOGIN
 MEC
 OPTIMAL CHOOSING
 RECONSTRUCTION

MODULE DESCRIPTION

HOME PAGE

This module shows the entire sitemap and an index of upcoming events, ongoing web
projects, and development sites, and also explains how to use the tool to submit reports and
inspection details to the system.

LOGIN

The main objective of this module is to authenticate the clients and other users who enter the
system for various purposes; they are authenticated and authorized by the security mechanisms
enforced by the system.

MEC

Maximum Entropy Clustering (MEC) is used to divide all the sensing nodes into several clusters.
Then, each node within one cluster locally acquires its compressive measurements and sends
them to the cluster head node, as sketched below.
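
The following minimal C# sketch shows how a sensing node might acquire compressive
measurements y = Φx with a random measurement matrix; the matrix design and dimensions
are illustrative assumptions, not the exact scheme used in this project.

using System;

class CompressiveSensingNode
{
    static readonly Random Rng = new Random();

    // Builds an M x N random measurement matrix (M < N gives compression).
    static double[,] BuildMeasurementMatrix(int m, int n)
    {
        double[,] phi = new double[m, n];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                phi[i, j] = Rng.NextDouble() - 0.5;   // zero-mean random entries
        return phi;
    }

    // Computes y = Phi * x, the compressed measurement vector sent to the head node.
    static double[] Measure(double[,] phi, double[] x)
    {
        int m = phi.GetLength(0), n = phi.GetLength(1);
        double[] y = new double[m];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                y[i] += phi[i, j] * x[j];
        return y;
    }
}

Here the compression ratio M/N controls how few samples each node forwards to its cluster
head.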

OPTIMAL CHOOSING
The head node chooses the optimal measurement data from all received data and then sends the
chosen data to the fusion center. The rule is to select the measurement data with the largest
absolute values, so that the spectrum occupancy information is preserved as far as possible; a
small sketch of this rule follows.
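
A minimal C# sketch of this selection rule, under the assumption that it is applied element-wise
across the cluster members' measurement vectors (an interpretation, not the project's exact code):

using System;
using System.Collections.Generic;

static class HeadNode
{
    // For each measurement index, keep the sample with the largest absolute value.
    public static double[] ChooseOptimal(List<double[]> memberMeasurements)
    {
        int length = memberMeasurements[0].Length;
        double[] chosen = new double[length];
        foreach (double[] y in memberMeasurements)
            for (int i = 0; i < length; i++)
                if (Math.Abs(y[i]) > Math.Abs(chosen[i]))
                    chosen[i] = y[i];          // keep the strongest observation
        return chosen;
    }
}

Only the resulting chosen vector is forwarded to the fusion center, which is what reduces the
data volume.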

RECONSTRUCTION

The fusion center uses a joint reconstruction scheme to reconstruct the wideband
spectrum. The amount of data can be reduced greatly because each head node sends only the
optimal measurement data to the fusion center, where the data processing burden is therefore
reduced as well.

4. SYSTEM DESIGN

SYSTEM ARCHITECTURE

The architecture follows a simple pipeline: sensing nodes → MEC → clusters → compressive
measuring → measurement data → optimal choosing (at the head nodes) → final data → joint
reconstruction (at the fusion center).

5. SYSTEM IMPLEMENTATION

6. CONCLUSION

A cognitive radio network is a system that is able to learn from the environment and adjust its
transmission parameters. With their awareness of the radio environment, SUs perform spectrum
sensing to identify free channels in order to exploit these channels efficiently. As the sensing
process requires a considerable amount of time, compressive sensing has been introduced as a
low-cost solution to speed up the channel scanning process and improve the detection rate.

A novel algorithm called ACBSS is proposed to perform wideband spectrum sensing in a CRN.
A solution based on hierarchical data fusion through clustering and joint compressive
reconstruction is derived and tested to detect wideband spectrum holes. The ACBSS algorithm
offers evident advantages over the conventional use of narrowband spectrum sensing
technologies, in terms of both reliability and real-time performance in detecting the dynamic
spectrum of a CRN. A comparison between ACBSS and two other wideband spectrum sensing
algorithms, ICS and JCS, is also given. The simulation results show that ACBSS has a much
lower false-alarm probability than ICS and also needs less execution time than JCS.

For future work, there are several interesting research directions related to compressive
sensing. For instance, advanced ADCs can be considered; they are highly needed to support the
high sampling rates present in cognitive radio networks, given the rapid increase of wireless
network services and mobile users. Moreover, hardware implementation is another future
direction, in terms of designing fast and inexpensive ADC devices for signal sampling and
integrating the compressive sensing algorithms on these devices. Implementing the compressive
sensing techniques in hardware would also help overcome the problems of synchronization,
calibration, and uncertainty in measurements. In addition, developing new and efficient signal
acquisition models based on compressive sensing is an interesting direction, to cover all the
signal models present in real radio environments. Finally, handling the uncertainty and
imperfections of real radio networks by designing practical compressive sensing techniques
remains an open door for researchers in this field.

REFERENCES

[1] Li, C. M., & Lu, S. H. (2016). Energy-based maximum likelihood spectrum sensing
method for the cognitive radio. Wireless Personal Communications, 89(1), 289-302.
[2] Salahdine, F., Kaabouch, N., & Ghazi, H. E. (2018). One-Bit Compressive Sensing Vs.
Multi-Bit Compressive Sensing for Cognitive Radio Networks. In IEEE Int. Conf.
Industrial Techno (pp. 1-6).
[3] Manesh, M. R., Quadri, A., Subramaniam, S., & Kaabouch, N. (2017, January). An
optimized SNR estimation technique using particle swarm optimization algorithm.
In Computing and Communication Workshop and Conference (CCWC), 2017 IEEE 7th
Annual (pp. 1-6). IEEE.
[4] Salahdine, F., & El Ghazi, H. (2017, October). A real time spectrum scanning technique
based on compressive sensing for cognitive radio networks. In Ubiquitous Computing,
Electronics and Mobile Communication Conference (UEMCON), 2017 IEEE 8th
Annual (pp. 506-511). IEEE.
[5] Salahdine, F., Kaabouch, N., & El Ghazi, H. (2016, October). Bayesian compressive
sensing with circulant matrix for spectrum sensing in cognitive radio networks.
In Ubiquitous Computing, Electronics & Mobile Communication Conference
(UEMCON), IEEE Annual (pp. 1-6). IEEE.
[6] Yarkan, S. (2015). A generic measurement setup for implementation and performance
evaluation of spectrum sensing techniques: Indoor environments. IEEE Transactions on
Instrumentation and Measurement, 64(3), 606-614.
[7] Sun, H., Nallanathan, A., Wang, C. X., & Chen, Y. (2013). Wideband spectrum sensing
for cognitive radio networks: a survey. IEEE Wireless Communications, 20(2), 74-81.
[8] Sharma, S. K., Lagunas, E., Chatzinotas, S., & Ottersten, B. (2016). Application of
compressive sensing in cognitive radio communications: A survey. IEEE Communications
Surveys & Tutorials.
