
Computers and Electrical Engineering 68 (2018) 271–285


Design and simulation of a radio spectrum monitoring system with a software-defined network

Yih-Chuan Lin, Zhe-Sheng Shih
Department of Computer Science and Information Engineering, National Formosa University, Yunlin 63201, Taiwan

ARTICLE INFO

Keywords: Software-defined radio; Software-defined networks; Radio spectrum monitoring system; Multicast routing protocol; OpenFlow

ABSTRACT

This paper proposes a software-defined network architecture for a radio spectrum monitoring system that aims to help regulate spectrum allocation and usage management for nationwide commercial or civilian wireless communication systems. The proposed software-defined network architecture provides radio spectrum monitoring with the ability to coordinate the demand for communication bandwidth between the control workstations and the monitoring stations, using software-defined radio devices. To evaluate the proposed architecture, a testbed is designed that incorporates two versions of the Dijkstra algorithm to route the monitored data over the simulated software-defined network. Several metrics are used to measure the transmission performance, including packet loss rate, packet jitter, throughput, path cost, and average link utilization. The results of applying the proposed software-defined network architecture to a radio spectrum monitoring system that must fulfill high traffic load requirements with software-defined radio technologies are promising.

1. Introduction

A radio spectrum monitoring system (RSMS) is an essential function for any spectrum management system that is committed to
monitoring radio spectrum signals and detecting unexpected sources of interference [1]. The structure of an RSMS contains radio
spectrum monitoring infrastructure, radio spectrum hardware equipment, analysis tools, and telecommunication network architecture. As described in [2], an RSMS is divided into subsystems that are responsible for different regions, each of which has several radio monitoring stations and a control center. Three types of radio monitoring station can be used: fixed, portable, and mobile stations. Signal receivers (e.g. global positioning systems (GPS) or antennas) and radio spectrum signal processing equipment (e.g. modulators, filters, and amplifiers) are deployed at radio monitoring stations. Regional control centers also play an important role in increasing operational efficiency, providing real-time access to radio-sensed spectrum data, and managing spectrum
monitoring data within a database. To achieve communication between radio monitoring stations and control centers, there is a need
to design and build a telecommunication network architecture. In a traditional RSMS architecture, the communication between radio
monitoring stations and control centers uses virtual private network (VPN) or mobile virtual private network (MVPN) techniques over
a wide area network (WAN) (Fig. 1).
However, with the emergence of the 5G era, different radio spectrum communications systems will coexist in our daily lives [3]. It
is difficult for traditional radio spectrum hardware equipment to deal with the various kinds of radio-monitored spectrum data in a
dynamic way. Therefore, the development of software-defined radio (SDR) technology is required [4,5] to enable the RSMS to monitor radio frequencies easily across a wide range of spectra.


Reviews processed and recommended for publication to the Editor-in-Chief by Guest Editor Dr. J-S Sheu.

Corresponding author.
E-mail address: lyc@nfu.edu.tw (Y.-C. Lin).

https://doi.org/10.1016/j.compeleceng.2018.03.043
Received 30 September 2017; Received in revised form 26 March 2018; Accepted 26 March 2018
0045-7906/ © 2018 Elsevier Ltd. All rights reserved.

Fig. 1. RSMS network architecture [2].

The early conception of SDR can be found in [6,7]. The technical core of
SDR is digital signal processing (DSP), which can implement a selection of various communication bands, such as very high frequency
(VHF) and ultra-high frequency (UHF), through software. Following advances in technology, it is now possible to run DSP programs
on central processing units (CPUs) or field programmable gate arrays (FPGAs). However, to implement the full flexibility and versatility of SDR, radio frequency (RF) transceivers need to have a very wide RF range. For RSMS, it is advantageous to sense radio
spectrum data and obtain useful information using analysis tools such as digital modulation, spectrograms and vector diagrams on
control workstations (CWs), which are located in each control center. These analysis tools can help achieve monitoring tasks such as
spectrum occupancy measurement, direction finding and positioning, and unknown transmitter detection. Fig. 2 shows an illustration
of the SDR technique within RSMS [8], in which the SDR modules in a fixed monitoring station contain an antenna, an RF front end,
and a signal converter. In the control center, the SDR application running on a CW is the signal-processing receiver. When the CW
reads the sensed radio spectrum data, one fixed monitoring station will transmit the raw digital data of the radio spectrum to the CW
through the wired network. Compared to traditional hardware-oriented receivers, this configuration with an SDR receiver and
programmable software or FPGA code can reduce the system installation and maintenance costs of RSMS.
Consider an SDR example that receives radio spectrum data using a combination of receiver hardware and computer software
(Fig. 3). Fig. 3(a) shows an RTL2832u [9] receiver that supports SDR implementation, and can demodulate signal data received from
an antenna and then transmit these data to a PC through a serial interface.

Fig. 2. SDR technique within RSMS [8].


Fig. 3. Example of SDR implementation.

In terms of the device specification of this receiver, the frequency range is 52–2200 MHz and the maximum sampling rate is up to 3.2 M samples/s. Fig. 3(b) illustrates the frequency
modulation (FM) radio data being accessed by SDR software called Linrad [10], which implements SDR functionalities on a PC and
supports data transportation through a network using TCP/IP protocols.
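As a rough illustration of this kind of setup, the sketch below reads raw IQ samples from an RTL2832u-class dongle and ships them to another process over TCP. It uses the third-party pyrtlsdr package rather than Linrad, and the tuned frequency and endpoint address are placeholder assumptions.

```python
# Illustrative sketch of an SDR receiver feeding raw IQ samples to a PC process
# over TCP, in the spirit of the RTL2832u/Linrad example. This uses the
# third-party pyrtlsdr package, not the software discussed in the paper.
import socket
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.4e6        # samples/s, within the dongle's 3.2 Ms/s limit
sdr.center_freq = 99.7e6       # an FM broadcast frequency (assumption)
sdr.gain = 'auto'

sink = socket.create_connection(('127.0.0.1', 50001))  # placeholder endpoint
try:
    for _ in range(100):
        samples = sdr.read_samples(256 * 1024)         # complex IQ samples
        sink.sendall(samples.tobytes())                 # ship raw IQ over TCP
finally:
    sdr.close()
    sink.close()
```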
It is difficult to accommodate all types of SDR spectrum data traffic with varying digital data rates within the network infrastructure (Fig. 1) of a traditional RSMS, since different monitoring tasks need different bandwidth capabilities and priorities (e.g. real-time/non-real-time). Although VPN and MVPN are isolated communication methods that can be secured over a public network, they have limited routing scalability and quality of service (QoS) [11]. This increases the difficulty of controlling packet jitter and packet loss when traffic reaches bottleneck links in the transmission route. This paper addresses this problem by adopting a software-defined networking (SDN) technique. SDN decouples the control plane from the data plane of network devices, which is useful for programmable networks [12]. The architecture of an SDN can be divided into three layers: the infrastructure layer, control layer, and application layer.
In the infrastructure layer, OpenFlow [13] is currently the technique typically used for the southbound interface of an SDN, which
handles communication between the control plane and the data plane of networking devices. The mission of the infrastructure layer is to handle packet processing, packet forwarding, and network status statistics using OpenFlow switches, which are able to carry out
network flow processing on commercial switches and routers that support the OpenFlow protocol. As described in [14], OpenFlow
switch specification version 1.3.x is the version commonly supported by hardware implementations. The core idea of the control layer is to enable SDN controllers to administer infrastructure-layer devices such as OpenFlow switches through centralized management, on which intelligent network application services can be built to implement automated network control (e.g. routing mechanisms), resource allocation (e.g. link bandwidth), and awareness of global topology states (e.g. network topology maps). Two types of messages are used to exchange data between the controller and an OpenFlow switch: (i) packets that the switch sends to the controller, called Packet In messages; and (ii) packets that the controller sends to the switch, called Packet Out messages. In the application layer, external applications can be developed using the northbound interface, an open application programming interface (API) such as a RESTful API [15], through which the applications in this layer can focus solely on implementing service functionality. At this layer, there are many opportunities for third-party software providers to develop business applications integrated with on-demand network controllability for their enterprise customers.
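To make the Packet In/Packet Out exchange concrete, the following is a minimal controller sketch written with the Ryu framework (the framework used later in the testbed); it is illustrative only and simply floods every received packet, which is not the forwarding logic of the proposed system.

```python
# Minimal Ryu controller sketch (illustrative only): it logs Packet In events
# from OpenFlow 1.3 switches and answers each one with a Packet Out message
# that floods the packet on the other ports.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # the Packet In message
        dp = msg.datapath                 # the switch that sent it
        parser = dp.ofproto_parser
        ofproto = dp.ofproto
        in_port = msg.match['in_port']
        self.logger.info('Packet In from switch %s port %s', dp.id, in_port)

        # Reply with a Packet Out that floods the packet.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=in_port, actions=actions, data=data)
        dp.send_msg(out)
```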
This paper proposes a software-defined sensor network (SDSN) architecture for an RSMS that can be integrated with an SDR and
SDN to coordinate the varying demand for data communication between a CW and a radio monitoring station (MS) in an RSMS. To
evaluate the feasibility of our SDSN architecture, the Dijkstra algorithm [16] is used to carry out multicast data forwarding in the
SDSN. The proposed SDSN architecture does not cover the wireless transportation between the mobile or portable monitoring stations
and the base stations, and focuses solely on the wired network, including the leased lines between the fixed monitoring station and
the WAN and the transport networks between the WAN and the control centers.
The remainder of this paper is organized as follows. Section 2 illustrates the planning of the SDSN architecture and the definition
of the network model for the SDSN. Section 3 explains the types of SDSN architecture that allow RSMS communication and data
forwarding using multicast mechanisms. Section 4 describes the experimental environment and procedures using an emulation
testbed, which is used for performance testing and traffic reliability testing of the SDSN architecture. Finally, Section 5 concludes this
paper and provides perspectives on future work.

2. System architecture

We assume that the monitoring stations in the RSMS are equipped with SDR sensors to measure the radio energy emitted by the emerging diverse types of commercial wireless communication systems. This paper proposes an SDSN architecture that serves as a prototype network infrastructure for a contemporary RSMS and can accommodate the dynamic, high-volume monitored digital data transported from the deployed SDR sensor nodes. Fig. 4 illustrates the layered design of the system architecture, which comprises the infrastructure, data forwarding, control, and application layers, stacked from bottom to top.

2.1. Infrastructure layer

The responsibility of this layer is to perform the basic operation of an RSMS, which includes the execution of radio monitoring
tasks by the MS, radio monitored data access by the CW, and data forwarding to the backhaul network. This layer includes three types
of MS and several regional control centers that contain CWs with monitoring applications, operating in the application layer.

2.2. Data forwarding layer

In this layer, the SDSN adopts a cross-data-center network topology, so that adequate network resources (e.g. network bandwidth, redundant links) can be allocated according to the actual environmental conditions in order to accommodate various service requirements dynamically. For the internal connections of a data center, the fat-tree topology proposed in [17] can provide scalable interconnection bandwidth, fault tolerance, and cost savings. Moreover, the components of this fat-tree topology are SDN switches that support the OpenFlow protocol and implement multicast data forwarding. Several backhaul connections are employed for the external connections between data centers. Since the transportable and mobile MSs need to move location to monitor radio spectrum data or to carry out direction finding and positioning, their sensed data are transferred to the data centers via wireless broadband base stations.

2.3. Control layer

More than one SDN controller may be used to manage the network devices deployed in the data forwarding layer. Fig. 4 shows
that controller applications with network topology awareness can obtain the overall network topology, including the communication
links between network devices and the access ports of switches in the RSMS. With the network topology, a traffic monitor application can collect traffic statistics from the SDN switches and then calculate the bandwidth consumed by each link. The proposed SDSN architecture relies on multiple controller applications. For example, an endpoint recognizer application identifies and records basic information about each MS and CW, such as the device ID, regional location, and device type. A multicast manager allocates a multicast IPv4 address and stores it with the communication information for each multicast group, such as the root (MS) and its members (CWs). A path finder tries to find the optimal communication path between an MS and a CW according to the routing mechanism, which in this work is the Dijkstra algorithm.
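A rough sketch of such a path finder is shown below, using the networkx library and the two route selection criteria evaluated later in Section 4 (hop count and a consumed-bandwidth weight). The topology, node names, and link statistics are illustrative assumptions, not the actual controller code.

```python
# Sketch of a path finder with two route selection criteria: hop count
# (Dijkstra_p) and a bandwidth-based link weight (Dijkstra_b) derived from the
# consumed-bandwidth ratio cbw(i, j) = current traffic / link capacity.
import networkx as nx

def find_path(topology: nx.Graph, ms: str, cw: str, metric: str = "hops"):
    """Return a list of nodes on the chosen path from the MS to the CW."""
    if metric == "hops":
        # Shortest path by hop count (every link counts as 1).
        return nx.shortest_path(topology, ms, cw)
    # Bandwidth-aware: lightly used links get a smaller weight.
    def cbw(u, v, attrs):
        return attrs["traffic"] / attrs["capacity"]
    return nx.dijkstra_path(topology, ms, cw, weight=cbw)

# Usage on a toy topology (hypothetical node names and link statistics):
g = nx.Graph()
g.add_edge("ms1", "s1", traffic=3.0, capacity=5.0)
g.add_edge("s1", "cw1", traffic=1.0, capacity=5.0)
g.add_edge("ms1", "s2", traffic=0.5, capacity=5.0)
g.add_edge("s2", "s3", traffic=0.5, capacity=5.0)
g.add_edge("s3", "cw1", traffic=0.5, capacity=5.0)
print(find_path(g, "ms1", "cw1", "hops"))       # shortest: ms1-s1-cw1
print(find_path(g, "ms1", "cw1", "bandwidth"))  # least used: ms1-s2-s3-cw1
```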


Fig. 4. SDSN system architecture.

2.4. Application layer

This layer can be divided into two parts: external SDN applications and radio spectrum monitoring applications. In external SDN
applications, data can be visualized using the web graphical user interface (GUI), for example the network topology and link utilization. In this way, the national control center can supervise and maintain the network operational status for the RSMS. Routing
mechanism parameter tuning is used to change the behavior of the path finder application artificially, so that the system can cope
with certain emergency situations, for instance when service data need to be forwarded with higher priority within a given period of
time. An endpoint-communication quality viewing application is used to observe the quality of multicast communication between the
MS and CW, such as the packet loss rate and jitter. Radio spectrum monitoring applications include those that fulfill the requirements
stated in [1,2], and applications related to the manipulation of SDR data.

2.5. Network model

In order to characterize the proposed SDSN for an RSMS, we represent it as an undirected graph G = (V, E; S, H), where V denotes
a set of nodes (switch devices) and E denotes a set of links. In addition, a set of MSs is represented as S, and a set of CWs in control
centers is represented as H. The definitions of the transmission path, multicast tree, and their communication costs are given below:

Fig. 5. Illustration of transmission path.


Fig. 6. Example of the paths in a multicast tree.

2.5.1. Transmission path


The transmission path from node x to node y is denoted as P(x, y), and is composed of a sequence of consecutive links. P(x, y) is defined in Eq. (1), and an illustration of Eq. (1) is given in Fig. 5. Each link e_{ij} represents the link between nodes i and j and has bandwidth capacity c_{e_{ij}}.

P(x, y) = (e_{xi}, e_{ij}, \ldots, e_{ky})   (1)

2.5.2. Multicast tree


Each multicast tree in an SDSN can be represented as T_n(V_n, E_n; s_n, H_n), where n = 1, 2, …, |S|. Note that each multicast tree is a per-source tree, which means that it has only one root node (MS), represented here as s_n, where s_n ∈ V_n. H_n denotes the set of CWs requesting the radio spectrum measurement data from s_n, where H_n ⊂ V_n. Let H_n = {h_1, h_2, …, h_m}, where m ∈ N+; thus, all the paths of T_n starting from s_n are represented by Eq. (2). Furthermore, each path in T_n can be defined as in Eq. (3). Fig. 6 shows an example of a multicast tree.

MPT_n = {P(s_n, h_1), P(s_n, h_2), …, P(s_n, h_m)}   (2)

P(s_n, h) = (e_{s_n i}, e_{ij}, …, e_{kh}),   h ∈ H_n,  i, j, k ∉ {s_n} ∪ H_n   (3)

2.5.3. Multicast path cost


The cost of each path in a multicast tree T_n(V_n, E_n; s_n, H_n) is defined in Eq. (4), and is composed of two functions, d_n(i, j) and cbw_n(i, j).

Cost(P(s_n, h)) = \sum_{e_{ij} \in E_n} (d_n(i, j) + cbw_n(i, j)),   h ∈ H_n   (4)

The value of d_n(i, j) represents the current link transmission time divided by the maximum link transmission time obtained from historical records. The link transmission time is the time taken to transfer one packet through the link between switch nodes i and j in the multicast tree T_n. The calculation of the link transmission time depends on the SDN controller application, which records the
timestamp when sending or receiving packets to or from switches. Before the link transmission time can be calculated, information on
the switch reply time and packet forwarding time is needed. According to OpenFlow switch specification version 1.3.5 [18], echo
request and reply messages can provide the switch reply time through a calculation of the time difference between the timestamps of
the echo and reply messages. We can obtain the packet forwarding time by using the Packet Out message to send a packet to a
particular port of the switch, and then receiving the Packet In message of the packet from the corresponding switch. In this scenario,
the controller sends link layer discovery protocol (LLDP) packets as the measurement packets. Lastly, the SDN controller application
calculates the time differences between the timestamps.
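A minimal sketch of the switch reply time measurement, assuming the Ryu framework used in the testbed, is given below; the payload layout is an illustrative choice, not the authors' implementation.

```python
# Sketch of measuring the switch reply time with OpenFlow echo messages
# (illustrative only). The send timestamp is carried in the echo payload and
# compared with the time the echo reply arrives at the controller.
import struct
import time

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class EchoLatencyApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def send_echo_request(self, datapath):
        parser = datapath.ofproto_parser
        payload = struct.pack('!d', time.time())      # embed send timestamp
        datapath.send_msg(parser.OFPEchoRequest(datapath, data=payload))

    @set_ev_cls(ofp_event.EventOFPEchoReply, MAIN_DISPATCHER)
    def echo_reply_handler(self, ev):
        if len(ev.msg.data) != 8:                      # ignore other echoes
            return
        (sent_at,) = struct.unpack('!d', ev.msg.data)
        reply_time = time.time() - sent_at             # switch reply time (s)
        self.logger.info('switch %s reply time: %.6f s',
                         ev.msg.datapath.id, reply_time)
```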
The value of cbw_n(i, j) represents the current traffic divided by the bandwidth capacity of the link between switch nodes i and j, denoted c_{e_{ij}}, in a multicast tree T_n. Note that 0 ≤ cbw_n(i, j) ≤ 1. To obtain the port traffic statistics and port speed for each switch, the OpenFlow protocol provides port statistics and port description messages. Based on Eq. (4), the cost of each multicast tree is defined in Eq. (5).

Cost(MPT_n) = \sum_{P \in MPT_n} Cost(P)   (5)
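The following sketch combines Eqs. (4) and (5) into a small routine; the per-link fields (measured transmission time, historical maximum, current traffic, and capacity) and their values are hypothetical.

```python
# Path and tree costs per Eqs. (4) and (5), as an illustrative sketch only.
# Each link record holds the quantities defined in Section 2.5.3.
def link_cost(link):
    d = link["tx_time"] / link["max_tx_time"]        # d_n(i, j)
    cbw = link["traffic"] / link["capacity"]         # cbw_n(i, j), in [0, 1]
    return d + cbw

def path_cost(path_links):
    # Eq. (4): sum of link costs along one path of the multicast tree.
    return sum(link_cost(link) for link in path_links)

def tree_cost(multicast_paths):
    # Eq. (5): sum of path costs over all paths MPT_n of the tree.
    return sum(path_cost(p) for p in multicast_paths)

# Example: a tree with two paths sharing the first link (hypothetical values).
l1 = {"tx_time": 2.0, "max_tx_time": 4.0, "traffic": 3.0, "capacity": 5.0}
l2 = {"tx_time": 1.0, "max_tx_time": 4.0, "traffic": 1.0, "capacity": 5.0}
l3 = {"tx_time": 1.0, "max_tx_time": 4.0, "traffic": 2.0, "capacity": 5.0}
print(tree_cost([[l1, l2], [l1, l3]]))   # Cost(MPT_n)
```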

2.6. The objective function and its constraints

The objective function in an SDSN is expressed in Eq. (6). The goal of this objective function is to assess a set of multicast trees in
accordance with the corresponding number of MSs requested by the CWs, such that the sum of all multicast tree costs in the set is minimized.


Minimize \sum_{T_n \subset G} Cost(MPT_n),   n = 1, 2, …, |S|   (6)

The objective function is subject to the following constraints:


T_n(V_n, E_n; s_n, H_n) ⊂ G(V, E; S, H),   ∀ n ≤ |S|   (7)

|E_n| = |V_n| − 1,   ∀ n ≤ |S|   (8)

|s_n| = 1,   1 ≤ |H_n| ≤ |H|,   ∀ n ≤ |S|   (9)

s_i, s_j ∈ S,  s_i ≠ s_j,   ∀ i ≠ j   (10)

Constraint (7) states that each multicast tree must be a subgraph of G. Constraint (8) ensures that the edges of each multicast tree form an acyclic (tree) structure [19]. Constraint (9) indicates that to construct a multicast tree in an SDSN, there must be exactly one MS (root) in the tree, and the number of group members must be between one and the total number of CWs in graph G. Constraint (10) implies that an MS cannot be the root of multiple trees at the same time; a CW, however, can communicate with multiple MSs concurrently within the SDSN.
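As a small illustration, a candidate tree can be checked against constraints (7)-(9) with a few lines of code; the sketch below uses networkx and hypothetical node names.

```python
# Sketch checking the tree constraints (7)-(9) for a candidate multicast tree
# (illustrative only; not part of the proposed system's code).
import networkx as nx

def is_valid_multicast_tree(tree: nx.Graph, root, members, graph: nx.Graph):
    subgraph_ok = all(graph.has_edge(u, v) for u, v in tree.edges)   # (7)
    acyclic_ok = nx.is_tree(tree)     # (8): |E_n| = |V_n| - 1 and connected
    root_ok = root in tree.nodes      # (9): exactly one root (MS) in the tree
    members_ok = len(members) >= 1 and all(h in tree.nodes for h in members)
    return subgraph_ok and acyclic_ok and root_ok and members_ok
```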

3. Methods of SDSN architecture

In this section, the main functionalities of the SDSN system architecture are presented. When the RSMS starts monitoring operations, which generate a great deal of radio-monitored data traffic and real-time interactive traffic, the SDR sensor nodes forward this traffic to the data centers (control centers) over the networking devices in the data forwarding layer simultaneously. Moreover, these types of traffic are inelastic, in that they require various amounts of network resources to meet the requirements of distinct monitoring tasks [20]. In order to clarify how network resource allocation is handled for the various monitoring tasks, the methods used to implement the SDSN system architecture for an RSMS are described below.

3.1. Multicast forwarding implementation

All packets carrying monitored traffic are delivered through the paths of a multicast tree of transmission links, connected to the
network devices involved in the proposed SDSN. To construct each multicast tree, we use two versions of the Dijkstra algorithm to
calculate the route for the monitored data, originating from the SDR nodes, over the SDSN. One Dijkstra algorithm adopts the shortest
path routing metric in terms of hop count, while the other uses the maximum available bandwidth criterion (i.e. the cbw function
shown in Eq. (4)) to compute the best routing path between the source and destination devices. To implement the multicast tree, the SDSN uses the flow and group tables in switches conforming to the OpenFlow 1.3 specification to install suitable flow table entries
for a one-to-many forwarding ability. The flow table is an essential component for handling network flow processing on OpenFlow
switches, and follows the “match-action” principle [13] in which each flow entry has fields that are used to match packets and an
instruction field to define an action set for these matched packets. The group table is able to implement indirect, multipath, and
failover packet forwarding, in which the group entry has a unique identifier (group ID) in a group table. A group type field defines
both the behavior pattern of a group entry and action buckets that can be used to determine multiple actions in packet processing,
such as modifying packet headers and forwarding to ports. Fig. 7 shows an example of flow and group table entries, which instruct the switch to forward incoming packets destined for the multicast group 224.255.0.10 on UDP port 50001 to switch ports 1, 2, and 3.

Fig. 7. Example of multicast forwarding, using a flow table and group table.
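A sketch of how the entries in Fig. 7 could be installed from an SDN controller is given below, using the Ryu OpenFlow 1.3 parser; it is an illustrative example rather than the system's actual multicast manager code.

```python
# Sketch of installing the multicast forwarding entries of Fig. 7 with Ryu
# (illustrative only). A group of type ALL replicates matched packets to ports
# 1, 2 and 3, and a flow entry steers UDP packets for 224.255.0.10:50001 to
# that group.
from ryu.ofproto import ether

def install_multicast_entries(datapath, group_id=1, out_ports=(1, 2, 3)):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    # Group entry: one bucket per output port (OFPGT_ALL copies the packet
    # to every bucket).
    buckets = [parser.OFPBucket(actions=[parser.OFPActionOutput(p)])
               for p in out_ports]
    datapath.send_msg(parser.OFPGroupMod(datapath, ofproto.OFPGC_ADD,
                                         ofproto.OFPGT_ALL, group_id, buckets))

    # Flow entry: match IPv4/UDP traffic to the multicast address and
    # destination port, then hand matched packets to the group.
    match = parser.OFPMatch(eth_type=ether.ETH_TYPE_IP,
                            ipv4_dst='224.255.0.10',
                            ip_proto=17, udp_dst=50001)
    actions = [parser.OFPActionGroup(group_id)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                        match=match, instructions=inst))
```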


Fig. 8. Establishment of communication between the MS and CW.

3.2. Procedure for establishing communication

Fig. 8 depicts an example of the consecutive interactions between SDSN entities that establish a communication session. If a CW intends to make a request to an MS to initiate a monitoring task and to receive the corresponding monitored data, the procedure consists of eight steps, as follows:

• Step 1: The CW first needs to initiate the access request to an MS by sending a message.
• Step 2: The edge switch that receives the message forwards this message to the controller, using the Packet In message through the
OpenFlow channel.
• Step 3: The endpoint recognizer application accepts the request, records the basic information of the CW, and then passes the
message to the path finder application.
• Step 4: The path finder application determines the best path from the MS to the CW based on the network topology and the
associated statistics for link usage. Traffic monitoring and topology awareness applications are responsible for collecting this kind
of information.
• Step 5: The multicast manager application allocates one multicast IPv4 address to this communication for the CW.
• Step 6: The path finder application instructs all OpenFlow switches on the path to install flow entries using the FlowMod message,
and group entries using the GroupMod message.
• Step 7: The multicast manager application sends responses using the Packet Out message to inform the MS and CW that the
forwarding path has been established successfully.
• Step 8: The MS and CW activate their own SDR applications to transport the sensed radio spectrum data and execute the task.

3.3. Timing diagrams

Fig. 9 shows the detailed procedure for initiating communication between a CW and one MS for the execution of monitoring tasks
using SDSN. During this process, the CW, MS, OpenFlow switches (OFS of both the CW and MS), and SDN controller need to exchange
messages in different formats to complete the route determination; these are specified in Table 1, with reference to the ITU-R 2015
edition of Computer-Aided Techniques for Spectrum Management (CAT) [21]. Once the CW or MS has powered on and become ready
for operation, CW and MS registration (CWR and MSR) messages are sent to the endpoint recognizer application in the SDN controller, which determines their availability in the RSMS.


Fig. 9. Timing diagram for establishing communication.

Table 1
SDSN message formats.

Message type (4 bytes) | Identification (4 bytes) | IPv4 address (4 bytes) | Response (7 bytes) | Station list (27 bytes/station) | Data type (10 bytes)
MSR  | V | V | – | – | –
CWR  | V | V | – | – | –
GSL  | V | – | – | – | –
SL   | – | – | V | V | –
RSD  | V | – | – | – | V
SD   | V | V | V | – | V
LMST | – | V | – | – | V
FSD  | V | V | – | – | V
TMST | – | V | – | – | V

Before the CW launches the SDR application to execute radio spectrum monitoring tasks, it sends a 'Get Station List' (GSL) message to the endpoint recognizer application to request information about the available MSs in the system. In response, the endpoint recognizer sends the station list, embedded in a 'Station
List’ (SL) message, to the corresponding CW. When a CW chooses an MS to initiate a given monitoring task, it sends a ‘Request Station
Data’ (RSD) message to the endpoint recognizer and path finder applications to establish a route to the multicast tree of the MS. The
multicast manager application issues FlowMod and GroupMod OpenFlow messages to reflect the creation of the route. Note that if the
multicast tree of MS has not yet been created, the SDN controller sends a ‘Launch MS Transmission’ (LMST) message to the MS, as
indicated by the dashed arrow in Fig. 9, to initiate the SDR application in the MS. On the CW side, the SDN controller sends a ‘Station
Data’ (SD) message to the CW to make it available to run the corresponding SDR application. Following this, the radio monitoring
traffic communication proceeds as usual.
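To illustrate how such messages could be serialized, the sketch below packs an RSD message using the field widths listed in Table 1; the field order and encoding are assumptions made for illustration and do not reflect the exact on-wire format defined in [21].

```python
# Hypothetical serialization of an RSD message using the Table 1 field widths
# (message type 4 bytes, identification 4 bytes, data type 10 bytes). The
# field order and encoding are illustrative assumptions only.
import struct

def pack_rsd(identification: bytes, data_type: bytes) -> bytes:
    # struct pads each 's' field with null bytes up to its declared width.
    return struct.pack('!4s4s10s', b'RSD', identification, data_type)

msg = pack_rsd(b'CW01', b'SM')        # request signal-measurement data
print(len(msg), msg)                   # 18 bytes
```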


Fig. 10. Timing diagram for terminating communication.

When the CW wants to terminate the radio monitoring task with the MS, it needs to send a ‘Finish Station Data’ (FSD) message to
the SDN controller, since it needs to clear the communication information and to modify the states of the multicast tree. Fig. 10 shows
the timing diagram for terminating communication between the MS and CW. When no CW remains that accesses the MS, the SDN controller sends a 'Terminate MS Transmission' (TMST) notification message to the MS, which instructs the MS to terminate the transmission. When the CW sends an FSD message to the SDN controller, the endpoint recognizer and multicast manager applications clear the information about this communication. In addition, the path finder application instructs the OpenFlow switches to modify or delete the
flow entries and group entries.

Fig. 11. The testbed environment.


Table 2
PC hardware specifications.
Role Controller PC 1 Controller PC 2 Mininet PC

OS Ubuntu 14.04 Ubuntu 14.04 Ubuntu 14.04


Processor Intel i7-3770 Intel i5-2400 Intel Xeon E3-1231
Memory 8GB RAM 4GB RAM 16GB RAM

4. Performance evaluation

To evaluate the proposed SDSN in terms of its performance in delivering traffic for monitoring tasks, we built a testbed and
carried out a series of experiments using random test cases. Fig. 11 shows the testbed consisting of three personal computers, each of
which connects to a Cisco Layer-3 switch and has the hardware specifications given in Table 2. To avoid overloading of the SDN
controller due to receiving too many messages, two controller PCs using Ryu [22] SDN controller software were deployed in this
testbed to balance the load in terms of processing messages from the network devices. We assigned the process of handling RSMS
infrastructure communication protocols to Controller PC 1, and equipped Controller PC 2 with the ability to collect network traffic
statistics and to calculate the delay time for each switch link. To synchronize the information between the SDN controllers, we used
message broker software to implement the message exchange among the devices with high reliability. For the large-scale deployment
of devices in the testbed, we developed a network emulation environment using Mininet [23] and a Mininet PC; in this way, the scale
of the network topology, link bandwidth, and the number of terminal nodes became easily programmable in the experiments. For the
Mininet PC, a fat-tree emulation topology [17] (K = 4) (Fig. 12) and simulated radio monitoring applications were designed to execute on the terminal nodes in the Mininet network, which imitate the communication behaviors of radio monitoring stations and control workstations.
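A minimal sketch of how such an emulated topology can be brought up with the Mininet Python API is shown below; the full K = 4 fat tree is abbreviated to a single pod, and the controller address is a placeholder.

```python
# Minimal Mininet sketch (illustrative only): a small topology with
# bandwidth-limited links and a remote controller. The full K = 4 fat tree of
# the testbed is abbreviated here to one pod.
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.topo import Topo

class OnePodTopo(Topo):
    def build(self):
        agg1, agg2 = self.addSwitch('s1'), self.addSwitch('s2')
        edge1, edge2 = self.addSwitch('s3'), self.addSwitch('s4')
        ms = self.addHost('ms1')     # monitoring station
        cw = self.addHost('cw1')     # control workstation
        for edge in (edge1, edge2):
            self.addLink(agg1, edge, bw=5)   # 5 Mbps inter-switch links
            self.addLink(agg2, edge, bw=5)
        self.addLink(ms, edge1, bw=10)       # 10 Mbps MS access link
        self.addLink(cw, edge2, bw=100)      # 100 Mbps CW access link

if __name__ == '__main__':
    net = Mininet(topo=OnePodTopo(), link=TCLink, switch=OVSSwitch,
                  controller=None)
    net.addController('c0', controller=RemoteController,
                      ip='127.0.0.1', port=6653)   # placeholder address
    net.start()
    net.pingAll()
    net.stop()
```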
The main purpose of the MS application is to transfer multiple types of monitored data to the CWs. Table 3 shows the list of
monitoring tasks and their associated bandwidth requirements, based on the results found in [1,2,24,25]. SDR monitoring applications are not considered in this performance evaluation, due to the limitations on hardware quantity and the available bandwidth. To simulate the monitored data traffic, the MS uses a software-based data source to generate this traffic in the emulated network
topology. The CW application accesses at least one MS to obtain the multicast data, which acts as the server side of data transmission
and can gather the corresponding statistics such as packet loss rate, packet jitter, and packet speed.
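The software-based data source can be pictured as a simple UDP sender that pushes datagrams toward the session's multicast group at the task's target rate; the sketch below is illustrative and is not the traffic generator used in the experiments.

```python
# Illustrative UDP traffic source for one monitoring session (not the
# experiments' generator): it sends fixed-size datagrams to the session's
# multicast group at a target bit rate, e.g. 0.425 Mbps for signal measurement.
import socket
import time

def send_monitored_data(group='224.255.0.10', port=50001,
                        rate_mbps=0.425, payload_size=1024, duration_s=30):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    interval = (payload_size * 8) / (rate_mbps * 1e6)   # seconds per packet
    payload = b'\x00' * payload_size
    end = time.time() + duration_s
    while time.time() < end:
        sock.sendto(payload, (group, port))
        time.sleep(interval)

send_monitored_data(duration_s=5)
```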

4.1. Network performance tests

4.1.1. Scenario description


Table 4 gives the parameter settings for this emulation scenario. In this table, some parameters are referenced from [2], e.g. the
number of MSs and CWs. The link capacity between switches is 5 Mbps, and the link capacity between the MS and the switch is
10 Mbps, which can accommodate all the traffic for the monitoring tasks (Table 3). The coverage area of the RSMS is divided into
three regional areas, with a regional control center in the north, central, and south. The MSs are distributed in a specific way between
the three regions, while 10 CWs for each region are dynamically assigned. In addition, the probability that one CW will make a
request for access to one MS follows a Pareto distribution. Note that a CW can access a local MS with a probability of 80% to execute
the monitoring tasks, unless in some situations (with probability of 20%) it needs to access an MS in another region. For example,
suppose that the unknown transmitter is located at the border between different regions; the CW may need to access a local MS and a
cross-regional MS simultaneously for direction finding and positioning. Moreover, there is a bottleneck access scenario in which a
bottleneck MS in the central regional control center is simultaneously accessed at regular intervals by CWs in other regions, in order
to retrieve the BVS data. When a CW finishes a random 30–60 s data access (not including BVS data) to the MS(s), it has a random period of idle time of between 1 and 3 s before starting the next access task.

Fig. 12. Fat tree topology (K = 4).


Table 3
Bandwidth requirements for different monitoring tasks.
Code Tasks Bandwidth requirement (Mbps)

SM Signal measurement 0.425


DFP Direction finding and positioning 1.00
SOM Spectrum occupancy measurement 0.425
UTD Unknown transmitter detection 1.00
TCM Transmitter coverage measurement 0.31
RO Recording operation 0.12
VS Video streaming 3.0
BVS Bottleneck video streaming 1.00

Table 4
Parameter settings for network performance test.
Category Item Parameters

Topology elements K-ary Fat tree's k value 4 (20 switches, 32 links)


Total number of MSs 19
Total number of CWs 30
Link capacity between switches 5 Mbps
Link capacity between MS and switch 10 Mbps
Link capacity between CW and switch 100 Mbps
MS Access point for MS Core switch
Number of northern region MSs 6
Number of central region MSs 5
Number of southern region MSs 8
CW Access point for CW Access switch
Idle time 1–3 s
Probability of access request 80%
Probability of accessing a local (regional) MS 80%
Number of MSs for DFP or UTD 2–3
Number of MSs for non-DFP and non-UTD 1–3
Access period 30–60 s
Interval time for accessing bottleneck MS 30 s (high loading)
60 s (low loading)
Time for CW accessing bottleneck MS 5 s
Total test runs 100 (each run = 5 min)

Note that the communication between the MS and CW follows the constraints of the objective function discussed in Section 2: an MS is the root of only one multicast tree, and a CW can access multiple MSs simultaneously. The methods for finding the best path in the multicast tree were based on the Dijkstra algorithm using two different route selection criteria: the shortest path (Dijkstra_p) and least-used bandwidth (Dijkstra_b) metrics, respectively. Each run of the simulation lasted five minutes. Tables 5 and 6 show the network performance metrics, which were collected and averaged after repeating the simulation 100 times.

4.1.2. Simulation results


Table 5 shows the results for the performance metrics, including the objective function value, path cost, path length, packet loss
rate, packet jitter, link utilization, and standard deviation of the link utilization. The total number of access sessions from the CWs
was about 473 and 534 for the lower and higher traffic cases, respectively. Based on these results, we can summarize some findings as
follows:

Table 5
Average network performance values.
Low loading High loading

Metric Dijkstra_p Dijkstra_b Dijkstra_p Dijkstra_b

Access sessions 473 474 535 534


Path length (hops) 3.00 3.33 3.00 3.28
Path cost 1.55 1.62 1.70 1.79
Link utilization (%) 55.63 65.66 57.68 67.14
Link utilization S.D. (%) 31.40 26.92 31.54 26.64
Packet loss rate (%) 23.52 24.66 24.63 26.63
Packet jitter (ms) 3.12 28.00 3.20 30.89
Objective function value 731.41 769.91 911.87 954.57


Table 6
Throughput (Mbps) of monitoring sessions.
Low loading High loading

Data type code Dijkstra_p Dijkstra_b Dijkstra_p Dijkstra_b

SM 0.33 (−0.095) 0.33 (−0.095) 0.32 (−0.105) 0.31 (−0.115)


DFP 0.75 (−0.25) 0.74 (−0.26) 0.73 (−0.27) 0.72 (−0.28)
SOM 0.33 (−0.095) 0.32 (−0.105) 0.32 (−0.105) 0.31 (−0.115)
UTD 0.75 (−0.25) 0.74 (−0.26) 0.73 (−0.27) 0.71 (−0.29)
TCM 0.24 (−0.07) 0.23 (−0.08) 0.23 (−0.08) 0.23 (−0.08)
RO 0.09 (−0.03) 0.09 (−0.03) 0.09 (−0.03) 0.09 (−0.03)
VS 2.03 (−0.97) 1.96 (−1.04) 1.98 (−1.02) 1.92 (−1.08)
BVS 0.79 (−0.21) 0.79 (−0.21) 0.81 (−0.19) 0.79 (−0.21)

• Using Dijkstra_b routes, the routing path lengths for each CW are longer than those using Dijkstra_p routes. In addition, the
multicast path cost in Eq. (4) for Dijkstra_b is higher than the cost for Dijkstra_p. This indicates that path length has more influence
on path cost than the bandwidth consumed.
• The objective function values for Dijkstra_b routing are higher than those for Dijkstra_p routing in both low loading and high
loading scenarios.
• Both versions of the Dijkstra algorithm suffer significant rates of packet loss in both the low and high traffic loading scenarios. The
reason for this could be that the static on-demand routes using the Dijkstra algorithm could not prevent network overload after
triggering the bottleneck MS accesses.
• The use of Dijkstra_p routes gives better mitigation of packet jitter than Dijkstra_b routing in the two traffic loading scenarios. The reason for this could be that the path lengths using Dijkstra_b routes are longer than those for Dijkstra_p routes, since the latter method always finds the shortest forwarding path.
• The Dijkstra_b algorithm has higher average link utilization due to the use of the route metric for consumed link bandwidth.
• The standard deviation of the link utilization for all the links in the topology reflects the level of traffic distribution in the network.
Obviously, Dijkstra_b is better than Dijkstra_p in balancing the traffic across the network.
• The Dijkstra_p algorithm, with a packet jitter of 3.12 ms (<10 ms) in low traffic cases, is very suitable for interactive or real-time
monitoring tasks such as the FM sound monitoring task, while the 67.14% link utilization of the Dijkstra_b algorithm is desirable
for high traffic cases such as SDR raw traffic.

As shown in Table 6, the throughput of each type of monitoring task is lower than the required value specified in Table 3. The values in brackets are the differences obtained by subtracting the corresponding bandwidth requirement from the simulated session throughput for each type of monitoring. This implies that the traditional Dijkstra algorithm is not sufficient to route the multicast traffic for a larger-scale deployment of MSs and CWs with a single congested MS in the RSMS. More sophisticated routing algorithms are required for this situation, and this requires further investigation.

4.2. Packet loss and link utilization

Consider the normal RSMS operation schedule. The simulation is modified to execute for a period of over five hours, which more
closely imitates the actual operation time of CWs in the RSMS. The network topology is identical to that specified in Table 4. The

Table 7
Parameter settings of the emulation scenario for traffic reliability testing.
Category Item Parameter

MS Access point of MS Core switch


Number of northern region MSs 6
Number of central region MSs 5
Number of southern region MSs 8
Number of bottleneck MSs 1
CW Access point of CW Access switch
Idle time 1–3 min
Probability of access request 80%
Probability of accessing a local (region) MS 80%
Number of MSs for DFP or UTD 2–3
Number of MSs for non-DFP and non-UTD 1–3
Access period 30–60 min
Interval time for accessing bottleneck MS 30 min
Time for CW accessing bottleneck MS 5 min
Total test time 5h


Fig. 13. Link utilization and standard deviation of link utilization.

other parameters are shown in Table 7. The bottleneck MS in the central control center is removed; instead, one extra bottleneck MS
that is not located in the RSMS is added to this simulation. In this case, the bottleneck traffic will include all access requests from the
CWs located in the three regional control centers.

Simulation results
This simulation generated results related to the link utilization, standard deviation of the link utilization, and packet loss rate.
Fig. 13 shows a high value for link utilization, indicating that the network resources (e.g. link bandwidth) in the network are
completely utilized. A low value for the standard deviation (SD) of the link utilization indicates that the links in the network were
used more equally rather than relying on a few links. Based on these results, the Dijkstra_b algorithm in Fig. 13(b) is better than the
Dijkstra_p algorithm in Fig. 13(a), since Dijkstra_b could distribute network traffic in terms of available bandwidth and achieve load
balance across the communication sessions.
Fig. 14 shows a density map for the severity level of packet loss events, in which a red point indicates a packet loss event occurring
in a local communication session, and a blue point represents a packet loss for a cross-regional communication session. A high rate of
packet loss is shown during the test period, both in Fig. 14(a) for the Dijkstra_p algorithm and in Fig. 14(b) for the Dijkstra_b
algorithm. Due to this frequent packet loss, the black line that indicates link utilization changes in value frequently within a short
time, and the thickness of the line is therefore increased. Based on the experimental results for packet loss rate, we found that there is
significant packet loss in communications using both versions of the Dijkstra algorithm, with 18.59% and 20.70% average packet loss
rate using the Dijkstra_p and Dijkstra_b routes, respectively. The reason for this is the inability of the traditional Dijkstra routing
algorithm to adapt to network overload events when a large number of communication sessions are launched dynamically. Of course,
an intuitive method for avoiding packet loss events would be to reduce the number of monitoring sessions or the number of concurrent CWs and MSs in the monitoring state. Nonetheless, full utilization of the network resources would be a better choice for improving the reliability of traffic monitoring in the RSMS. Possible strategies for designing the multicast routing algorithm for the RSMS therefore include more sophisticated mechanisms, such as admission control or prioritized routing for each incoming request to join the multicast tree.
Fig. 14. Density map for packet loss events and link utilization.


In other words, new incoming monitoring sessions with higher priority could be given more reliable routes by suspending lower-priority routes in the established multicast tree before the monitoring sessions start.

5. Conclusions

We have described the radio spectrum monitoring system and identified the network transmission bottleneck that arises with modern software-defined radio technologies. We then defined a multicast network model to ease the bottleneck problem, and elaborated a centrally controlled method for maintaining the multicast routes and the related network topology information. We also implemented a simulation testbed to evaluate the proposed network model and methods. A number of important conclusions can be drawn. First, an SDR-enabled RSMS gives regulators more power to enforce radio communication regulations, but at the same time they must face the insufficiency of network capacity. Second, the proposed method shows promise in fulfilling the high traffic load requirements when SDR sensor nodes are used to monitor the radio spectrum. Third, giving higher priority to routes toward the local MS may improve the overall communication quality in the SDSN architecture, and an admission control mechanism would help improve the reliability of packets delivered over the multicast tree if an incoming multicast join request that might jam one or more links can be suspended for a while. In the future, we will study more sophisticated routing algorithms that can avoid packet loss events. For example, since the amount of traffic for each monitoring task in the RSMS is known in advance, it is possible to estimate network overload before establishing the routing path between the MS and CW.

References

[1] Handbook of spectrum monitoring. Recommend ITU Radiocommun Sect 2011:4–56. 308.
[2] Chen WT, Lin YC, Chang YT, Wang CC, Huang KC, Chung KW, Tseng WD, Hsu MC, Hsieh HC, Lin CY, Zheng ZL, Chang CC, Sun PK, Huang CC, Zhong JH, Hsu PJ,
Hsu CC, Liu CC. Study on the planning and optimization of the next generation radio monitoring system. Natl Commun Comm Case Study Res 2011:68.
[3] Chen M. The development trend of worldwide 5G. ICT J no 2016(168):5–11.
[4] Definitions of software defined radio (SDR) and cognitive radio system (CRS). Recommend ITU Radiocommun Sect, 2009. p. 1.
[5] Software-defined radio in the land mobile, amateur and amateur-satellite services. Recommend ITU Radiocommun Sect 2012;5-6:12–9.
[6] Mitola J. Software radios: survey, critical evaluation and future directions. IEEE Aerosp Electron Syst Mag Apr 1993;8:25–36.
[7] Mitola J. The software radio architecture. IEEE Commun Mag 1995;33(May):26–38.
[8] Chen W-T, Chang K-T, Ko C-P. Spectrum monitoring for wireless TV and FM broadcast using software-defined radio. Multimedia Tools Appl 2016;75(August
(16)):9819–36.
[9] RTL2832u [Online]. Available: http://www.realtek.com.tw/products/productsView.aspx?Langid=1&PFid=35&Level=4&Conn=3&ProdID=257.
[10] Linrad [Online]. Available: http://www.sm5bsz.com/linuxdsp/linrad.htm.
[11] IPSec VPN WAN Design Overview [Online]. Available: http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/IPSec_Over.html.
[12] Software-Defined Networking (SDN) Definition [Online]. Available: https://www.opennetworking.org/sdn-resources/sdn-definition.
[13] McKeown N, Anderson T, Balakrishnan H, Parulkar G, Peterson L, Rexford J, Shenker S, Turner J. OpenFlow: enabling innovation in campus networks. ACM
SIGCOMM Comput Commun Rev 2008;38:69–74.
[14] Tourrilhes J, Sharma P, Banerjee S, Pettit J. SDN and OpenFlow evolution: a standards perspective. IEEE Comput Soc 2014;47:22–9.
[15] Fielding RT. Architectural styles and the design of network-based software architectures Irvine: University of California; 2000. Doctoral Dissertation.
[16] Dijkstra algorithm [Online]. Available: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm.
[17] Al-Fares M, Loukissas A, Vahdat A. A scalable, commodity data center network architecture. SIGCOMM '08 proceedings of the ACM SIGCOMM 2008 conference
on data communication. 2008. p. 63–74.
[18] OpenFlow switch specification version 1.3.5 [Online]. Available: https://3vf60mmveq1g8vzn48q2o71a-wpengine.netdna-ssl.com/wp-content/uploads/2014/
10/openflow-switch-v1.3.5.pdf.
[19] Ziegelmann M. Preliminaries. In: Constrained shortest paths and related problems – constrained network optimization. VDM Verlag; 2007. p. 8, ch. 2, sec. 2.1.1.
[20] Stallings W, Agboma F, Jelassi S. Types of network and internet traffic. Foundations of modern networking: SDN, NFV, QoE, IoT, and cloud, Addison-Wesley
professional. 2015. p. 40–2. ch. 2, sec. 2.1.
[21] Handbook of computer-aided techniques for spectrum management (CAT). Recommend ITU Radiocommun Sect 2015:66–81.
[22] Ryu SDN Framework [Online]. Available: https://osrg.github.io/ryu/.
[23] Mininet [Online]. Available: http://mininet.org/.
[24] Ko C-P. The bandwidth requirements of SDR-based spectrum monitoring system. Master's thesis. Tainan, Taiwan: National Cheng Kung University; 2015. p. 6, 16.
[25] Zheng Z-L. Master's thesis. The Bandwidth Requirements of Radio Monitoring System. Tainan, Taiwan: National Cheng Kung University; 2012. p. 34.

Yih-Chuan Lin received the Ph.D. degrees in Electrical Engineering from National Cheng Kung University, Tainan, Taiwan, in 1997. He is currently a professor of
computer science and information engineering at National Formosa University, Taiwan. His research interests include computer networks, wireless sensor networks,
Internet technology and applications, image/video coding and processing. He is a member of IEEE.

Zhe-Sheng Shih received his B.Sc. and M.Sc. in Computer Science and Information Engineering from National Formosa University, Taiwan, in 2015 and 2017,
respectively. He is currently a project engineer of network traffic analysis at the Chunghwa Telecom Co., Ltd Data Communication Business Group, Internet Services
Dept., Taiwan. His research interests include computer networks and system integration.

