

TreeSpirit: Illegal Logging Detection and Alerting
System using Audio Identification over an IoT
Network
P. G. Kalhara∗, V. D. Jayasinghearachchi†, A. H. A. T. Dias‡, V. C. Ratnayake§, C. Jayawardena¶ and N. Kuruwitaarachchi∥
∗†‡§Department of Software Engineering, ¶∥Department of Information Systems Engineering,
Sri Lanka Institute of Information Technology, New Kandy Road, Malabe 10115, Sri Lanka

Abstract—Illegal logging has been identified as a major problem in the world, which may be minimized through effective monitoring of forest-covered areas. In this paper, we propose and describe the initial steps to build a new three-tier architecture for Forest Monitoring based on a Wireless Sensor Network and Chainsaw Noise Identification using a Neural Network. In addition to the detection of chainsaw noises, we also propose methodologies to localize the origin of the chainsaw noise.

Index Terms—illegal logging, chainsaw, forest monitoring, wireless sensor network, neural network, audio identification, deforestation.

I. INTRODUCTION

Illegal logging has been identified as a major problem in the world, with studies indicating that more than 100 million cubic metres of timber are still being cut illegally each year. Furthermore, destruction of the world's forests contributes up to 20% of global carbon dioxide gas emissions [1].

Forest Surveillance plays a key role in controlling Illegal Logging. Various attempts have been made to carry out effective surveillance of large forest-covered areas [2], [3], such as Satellite Image Processing, Local Video Surveillance and Detection of Acoustic Signals of Chainsaws.

According to [2] and [4], the optimal and only currently feasible surveillance strategy against Illegal Logging is the detection of the acoustic signals of chainsaws, considering the practicalities of the logging industry. This position is further strengthened by the fact that many available studies as well as existing commercial implementations focus on identification of Illegal Logging through detection of acoustic signals of chainsaws [2], [4]–[8].

Wireless Sensor Networks (WSN) play a key role in such systems based on detection of acoustic signals of chainsaws. Forest Monitoring WSNs consist of computing devices fixed strategically in a forested area which are capable of sensing, transmitting and, in some instances, processing the captured data [2], [4], [5], [7].

While we acknowledge the valuable contributions of previous and currently on-going work in this domain, we feel that a cost-effective and practical long-range forest monitoring architecture based on detection of acoustic signals of chainsaws has not yet been realised. Hence this paper proposes a new Forest Monitoring WSN Architecture that addresses the above concern.

II. SYSTEM DESIGN

Fig. 1. High-level System Architecture (Listening Posts on Arduino report to a Sound Processor in the forest on a Raspberry Pi; the Sound Processor reports to a Base Station computer on a Raspberry Pi, which uploads to a Cloud API serving a Web Application and an SMS Gateway.)

The proposed system consists of Listening Posts, Sound Processors and a Base Station. The Listening Post continually listens and transmits the sounds of the forest to the designated Sound Processor. Each Sound Processor is allocated three Listening Posts. The Sound Processor processes the sound to identify any signals that could potentially be chainsaw noise. If it does detect such a sound, it locates the origin of the sound, and this location information is then transmitted to the Base Station. The Base Station uploads the received data to the Cloud Application Programming Interface (API), which makes the data readily available through the associated web application and Short Message Service (SMS) gateway.

The Listening Posts are based on the Arduino Uno (Fig. 2) while the Sound Processors and the Base Station are based on the Raspberry Pi 3 Model B (Fig. 3), attached with additional hardware as listed in Table I.
All devices are powered by solar panels, with battery packs as backup power systems.

Fig. 2. Arduino Uno

Fig. 3. Raspberry Pi 3 Model B

TABLE I
HARDWARE DEVICES USED

Platform                  Additional Devices
Arduino Uno               nRF24L01+ WiFi Module, nRF24L01+ WiFi Module Base,
                          DS3231 Real Time Clock Module, ADMP401 MEMS Microphone,
                          Solar Panel, Battery Pack
Raspberry Pi 3 Model B    nRF24L01+ WiFi Module, nRF24L01+ WiFi Module Base,
                          DS3231 Real Time Clock Module, Texas Instruments CC1310 Module,
                          Solar Panel, Battery Pack

III. COMMUNICATION ARCHITECTURE

The Communication Architecture is the part of the System Architecture focused on utilizing different hardware and software components to transmit and receive data on the different IoT platforms defined in the System Architecture. Ideally, such data transmissions between devices should be reliable and have considerable range while keeping the associated legal/licensing concerns, power consumption, and hardware costs to a minimum. To address all these concerns successfully, low-cost hardware devices which transmit in the Industrial, Scientific and Medical (ISM) radio band have been selected as the communication platform.

Two types of wireless links are required in the proposed system:
1) A short range, high speed link between each Listening Post and its Sound Processor.
2) A long range, low speed link between each Sound Processor and the Base Station.

Fig. 4. nRF24L01+ 2.4 GHz RF Transceiver

In case 1, a radio link must be constantly maintained between each Listening Post and the Sound Processor to exchange the sounds recorded for analysis and sound localization. It is important that the quality of the transferred sounds remains adequate for analysis during exchange. However, since Listening Posts are low power devices, it is also important that the selected communication platform has low power consumption. Therefore, a low cost, low power, high speed communication chip based on the nRF24L01+ 2.4 GHz RF Transceiver (Fig. 4) has been selected as the preferred technology. This chip is capable of transmitting data at speeds up to 2 Mbps [9] and has a range of up to 1000 m line of sight with improvements [10].

In case 2, a radio link must be occasionally established between a Sound Processor and the Base Station to transmit data regarding any potential illegal logging activity detected by the Sound Processor. Since this link only needs to carry a limited set of characters indicating the type of illegal activity and the predicted coordinates of the origin of the sound, it is not important that the link is high speed. However, since Sound Processors are located deep within the forest while the Base Station remains outside it, there is a considerable distance between each Sound Processor and the Base Station; hence, range becomes an important factor. Again, Sound Processors are low powered devices, hence it is important that the selected communication platform has low power consumption.
To fulfill these requirements, the Texas Instruments CC1310 (Fig. 5) [11] communication module has been selected as the preferred technology.

Fig. 5. Texas Instruments CC1310 LaunchPad

IV. SOFTWARE ARCHITECTURE

The software architecture of the proposed system can be divided into four main sections: the Sound Recognition Module, the Sound Localization Module, System Reliability and the Real-time Web Application. Fig. 6 represents a high-level view of the overall software architecture of the proposed system.

Fig. 6. Overall Software Architecture (Listening Post on Arduino Uno: main module, message composer, protothreading and audio transmitter over an nRF24L01+ radio; Sound Processor on Raspberry Pi 3: main module, C++/Python interface, audio identification, TDOA and 2D multilateration modules, receiving WAV audio with timestamps; Base Station on Raspberry Pi 3: cloud pusher, Web Socket API, remote database, REST API and web application.)

A. Sound Recognition Module

1) Sound Identification Process: This operation identifies the incoming sound signals from the environment. To fulfill the requirements of the device we tried two approaches: an Audio Fingerprinting Mechanism and a Deep Neural Network (DNN) for Sound Identification.

The first approach was an Audio Fingerprinting Mechanism [12]. This mechanism repeatedly applies the FFT over small windows of time in the audio samples to create a spectrogram of the audio. Based on these spectrograms, it stores combinations of frequency-time data in a database and compares incoming sound signals against this data-set. In the worst case it compares exact values against the available data-set, so in order to increase the accuracy of the audio identification process it needs to maintain a huge data-set inside the database. Since the Sound Processor has limited storage capacity, this approach was not a suitable solution for the problem.

The second approach was to use a Deep Neural Network (DNN) with a proper data-set [13]. This module mainly focuses on identifying chainsaw sounds in the forest. Therefore, using a properly trained DNN, the device can identify chainsaw sounds with higher accuracy.

2) Data-sets: One of the main problems with training a Deep Neural Network in a supervised manner is the amount of computational effort and labeled data required for efficient learning. The authors were not able to find a proper data-set of chainsaw sounds available publicly and therefore had to collect one. Our team was able to gather more than 100 chainsaw audio clips from an actual rain-forest, recorded at various distances and directions with natural obstacles in between.

3) Experiment Setup: We used the Fourier Transform to convert our audio data to the frequency domain. This allows for a much simpler and more compact representation of the data, which is exported as a spectrogram (Fig. 7). This process gives us an image file containing the evolution of all the frequencies of the audio through time. Time is on the x-axis and frequency on the y-axis. The highest frequencies are at the top and the lowest at the bottom. The scaled amplitude of each frequency is shown in grey-scale, with white being the maximum and black the minimum.

Fig. 7. Fourier Transformation Spectrogram

The next thing we must do is deal with the length of the spectrogram. We can create fixed-length slices of the spectrogram and consider them as independent samples representing the audio. We use square slices for convenience, which means that we cut the spectrogram into 128 x 128 pixel slices; this represents 2.56 s worth of data in each slice (Fig. 8). After slicing all audio into square spectral images, we can train a Deep Neural Network to identify these samples. For this purpose, we have used TensorFlow [14].

Fig. 8. Square Slices
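As an illustration of this preprocessing step, the following sketch computes a grey-scale spectrogram with scipy and cuts it into 128 x 128 slices. It is not the authors' original code; the file name, sample rate and FFT window parameters are assumptions made for the example.

# Sketch of the spectrogram-and-slice preprocessing described above.
# Assumptions: 16 kHz mono WAV input, 512-sample FFT windows with 50% overlap;
# the paper does not state these parameters.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def audio_to_slices(wav_path, slice_px=128):
    rate, samples = wavfile.read(wav_path)          # load mono PCM audio
    freqs, times, spec = spectrogram(samples.astype(np.float32),
                                     fs=rate, nperseg=512, noverlap=256)
    spec = np.log1p(spec)                           # compress dynamic range
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-9)  # scale to 0..1

    # Keep only the lowest `slice_px` frequency bins and cut fixed-width slices,
    # each treated as an independent 128 x 128 training sample.
    spec = spec[:slice_px, :]
    n_slices = spec.shape[1] // slice_px
    return [spec[:, i * slice_px:(i + 1) * slice_px] for i in range(n_slices)]

if __name__ == "__main__":
    slices = audio_to_slices("chainsaw_clip_01.wav")   # hypothetical file name
    print(f"{len(slices)} slices of shape {slices[0].shape}")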
The CNN takes in the wave pattern as a spectrogram and makes its prediction based on "weights" and "biases" that need to have the correct values for the prediction to work well. Each "neuron" in a CNN computes a weighted sum of all of its inputs, adds a constant called the "bias" and then feeds the result through some non-linear activation function. In this CNN we used softmax as the activation function for the last layer. The training operation is performed on the CNN using 100 sound clips per batch. There were three Convolutional layers, three ReLU layers, three Pooling layers and one Fully Connected layer with 50% dropout.
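A minimal sketch of a network matching this description is shown below (three convolution/ReLU/pooling stages, one fully connected layer with 50% dropout, softmax output). The filter counts, kernel sizes and two-class output are assumptions; the paper does not publish its exact hyper-parameters.

# Sketch of the described CNN using TensorFlow's Keras API; hyper-parameters
# other than batch size (100) and the layer pattern are illustrative guesses.
import tensorflow as tf

def build_model(input_shape=(128, 128, 1), num_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),                      # 50% dropout as described
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_slices, train_labels, batch_size=100, epochs=350)  # data not shown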
Considering the available hardware resources and power consumption limitations, we decided to use the Berry Conda package and environment management system [15] to deploy this module on the Raspberry Pi. Berry Conda is a conda-based Python distribution for the Raspberry Pi. With it, we can install and manage a scientific or PyData stack on the Raspberry Pi using conda, a package and environment management system, and all of this can be done without compiling a single package.
4) Test Results: The module was evaluated with test data to check its accuracy. Fig. 9 and Fig. 10 depict the loss and accuracy of the audio identification module. The lower the loss, the better the model (unless the model has over-fitted to the training data). The loss is calculated on the training and validation sets, and its interpretation is how well the model is doing on these data. Unlike accuracy, loss is not a percentage; it is a summation of the errors made for each sample in the training or validation sets. The loss graph settles at a value of approximately 0.98, and according to the regression line the module could further reduce the loss by training for more epochs. Nevertheless, the accuracy of the trained module is very high: out of 50 test samples, the module was able to identify 48 correctly.

Fig. 9. Plot with Loss (training loss over 350 epochs)

Fig. 10. Plot with Accuracy (training accuracy over 350 epochs)
B. Sound Localization

This component performs sound source localization (SSL) in an outdoor environment, without microphone arrays, for this specific architecture. Its output is the location where the logging is happening. To arrive at a solution we tried out both new and existing methods to find out which fitted our purpose and architecture best, and finally settled on TDOA combined with Multilateration as the solution for SSL. TDOA is based on the time difference between the signals received at the given microphones. To obtain it, we use FFT-based methods to calculate the phase difference between two signals, together with low-band noise removal to suppress noise and increase accuracy. The component is placed between the audio identification module and the cloud. TDOA depends heavily on environmental factors such as noise: without optimization the selected method is accurate up to a distance of about 10 m between microphones, and since it is the only method feasible for this hardware, accuracy will decrease with distance. It is recommended for use in conditions with low noise and few obstacles.

1) TDOA: To get the time difference of arrival between two distant microphones, the phase property of the wave is used. For this, the GCC-PHAT algorithm and the Fast Fourier Transform are used together with some filtering techniques to remove unwanted noise.
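A minimal sketch of GCC-PHAT-style delay estimation between two microphone signals is shown below. It is an illustration only; the sampling rate and the synthetic test signal are assumptions, not values from the paper.

# GCC-PHAT time-difference-of-arrival estimate between two microphone signals.
# Sketch under assumed parameters; it is not the authors' implementation.
import numpy as np

def gcc_phat(sig, ref, fs=16000, max_tau=None):
    n = sig.size + ref.size                      # zero-pad to avoid circular wrap
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                       # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift    # lag (in samples) of the peak
    return shift / float(fs)                     # delay in seconds

# Example: a 5 ms delayed copy of white noise should give tau of about 0.005 s.
rng = np.random.default_rng(0)
x = rng.standard_normal(16000)
y = np.roll(x, 80)                               # 80 samples at 16 kHz = 5 ms
print(gcc_phat(y, x))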
2) Fast Fourier Transformation: To increase the performance of the algorithm, the Fast Fourier Transform is used; it simplifies the waveform. An FFT algorithm computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IFFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it reduces the complexity of computing the DFT from O(n^2), which arises if one simply applies the definition of the DFT, to O(n log n), where n is the data size.

Fig. 11. Fast Fourier Transformation

3) Noise Filtering: Noise filtering attenuates the selected frequencies by 6 dB relative to their initial intensity. The high pass filter of the pydub library [16], which is based on the scipy library [17], is used.
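As an illustration of this filtering step, the sketch below applies pydub's high-pass filter to a recorded clip. The cutoff frequency and file names are assumptions for the example, not values given in the paper.

# High-pass filtering a recorded clip with pydub to suppress low-band noise.
# Cutoff frequency and file names are illustrative assumptions.
from pydub import AudioSegment

CUTOFF_HZ = 400                                            # assumed low-band cutoff

clip = AudioSegment.from_wav("listening_post_clip.wav")    # hypothetical input file
filtered = clip.high_pass_filter(CUTOFF_HZ)                # attenuate frequencies below cutoff
filtered.export("listening_post_clip_filtered.wav", format="wav")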
4) 2D Audio Multilateration: Sound localization is either not available or only discussed briefly in the existing solutions. Most available solutions notify which zone of the forest the signal was detected from, but this can be inadequate as a zone could be a large area comprising acres of dense forest. However, the authors consider Sound Localization an important part of an illegal logging detection system for tropical rainforests: such forests are dense, and unless one knows where one is headed it is easy to get disorientated and lost. Therefore, to make the task of detection easier we introduce sound localization, which operates on a cluster of Listening Posts. Originally a technique called Triangulation was considered for the Sound Localization; it would have required an array of two directional microphones attached to each Listening Post. Due to cost, technical and practical reasons, it was decided that only a single MEMS digital omnidirectional microphone would be attached to each Listening Post. Triangulation is not possible with a single microphone, hence a technique called Multilateration [18], [19] is used instead to locate the origin of a sound.

Fig. 12 shows an explanation of the Multilateration algorithm that has been used in this system.

Fig. 12. Multilateration Algorithm (four listening posts a(0,0), b(100,0), c(0,100) and d(100,100) on a 100 m x 100 m grid, with the sound source at (x, y, z) and ranges R0-R3 to the posts)

Let a, b, c, d be four Sound Listening Posts situated on a 100 m x 100 m grid. Let each node's location be denoted by \vec{P}_m = (x_m, y_m, z_m), where m = 0, 1, 2, 3 and z_m = 0 as this is a 2-D multilateration. Let the unknown location of the Sound Source be denoted by \vec{E} = (x, y, z), where z = 0 for the same reason. By using the distance formula,

R_m = |\vec{P}_m - \vec{E}| = \sqrt{(x_m - x)^2 + (y_m - y)^2 + (z_m - z)^2}    (1)

By defining a(0, 0) as the origin, this simplifies to

R_0 = \sqrt{x^2 + y^2 + z^2}    (2)

Let \tau_m be the time difference between the time the sound was emitted by the emitter (T_e) and the time it was received by receiver m, let v be the speed of sound in air in m/s, and let T_m be the time at which receiver m received the sound. However, T_e is an unknown quantity. To overcome this, consider node a(0, 0) as the origin and calculate the Time Difference of Arrival of each node relative to node a(0, 0):

v\tau_m = vT_m - vT_0

v\tau_m = R_m - R_0    (3)
Solving equations 1, 2 and 3 will provide the (x, y) location of the unknown sound source. However, doing so would require solving several non-linear simultaneous equations, which could be time and resource intensive for an embedded system. Therefore, the problem should be simplified mathematically to reduce the complexity. From Eq. 3, R_m = v\tau_m + R_0; squaring both sides and rearranging,

R_m^2 = (v\tau_m + R_0)^2
R_m^2 = (v\tau_m)^2 + 2 v\tau_m R_0 + R_0^2
0 = (v\tau_m)^2 + 2 v\tau_m R_0 + R_0^2 - R_m^2

and dividing through by v\tau_m gives

0 = v\tau_m + 2 R_0 + \frac{R_0^2 - R_m^2}{v\tau_m}    (4)

The 2R_0 term is removed by substituting the m = 1 term,

0 = v\tau_1 + 2 R_0 + \frac{R_0^2 - R_1^2}{v\tau_1}    (5)

and subtracting Eq. 5 from Eq. 4:

0 = v\tau_m - v\tau_1 + \frac{R_0^2 - R_m^2}{v\tau_m} - \frac{R_0^2 - R_1^2}{v\tau_1}    (6)

R_0^2 - R_m^2 can be replaced by expanding Eq. 1 and using Eq. 2:

R_m^2 = x_m^2 + y_m^2 + z_m^2 - 2x x_m - 2y y_m - 2z z_m + x^2 + y^2 + z^2
R_m^2 = x_m^2 + y_m^2 + z_m^2 - 2x x_m - 2y y_m - 2z z_m + R_0^2
R_0^2 - R_m^2 = -x_m^2 - y_m^2 - z_m^2 + 2x x_m + 2y y_m + 2z z_m    (7)

Combining equations 6 and 7 results in a set of linear equations:

0 = x A_m + y B_m + z C_m + D_m

A_m = \frac{2 x_m}{v\tau_m} - \frac{2 x_1}{v\tau_1}
B_m = \frac{2 y_m}{v\tau_m} - \frac{2 y_1}{v\tau_1}
C_m = \frac{2 z_m}{v\tau_m} - \frac{2 z_1}{v\tau_1}
D_m = v\tau_m - v\tau_1 - \frac{x_m^2 + y_m^2 + z_m^2}{v\tau_m} + \frac{x_1^2 + y_1^2 + z_1^2}{v\tau_1}    (8)

Using Eq. 8 on nodes m = 2, 3 results in two simultaneous equations which can be solved easily, giving the (x, y) position of the unknown source relative to the origin a(0, 0):

x = \frac{\frac{B_2}{B_3} D_3 - D_2}{A_2 - \frac{B_2}{B_3} A_3}

y = \frac{-(A_3 x + D_3)}{B_3}

Given that the position of the origin a(0, 0) is known in latitude and longitude, the position of the sound source can be estimated.
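A small numeric sketch of this final step is given below. It is illustrative only: the node coordinates follow Fig. 12, while the source position and the resulting TDOA values are invented for the demonstration.

# Solve the 2-D multilateration equations (Eq. 8 for m = 2, 3) for (x, y).
# Node layout follows Fig. 12; the demo source position and TDOAs are made up.
import numpy as np

V = 343.0                                        # speed of sound in air, m/s
nodes = {0: (0.0, 0.0), 1: (100.0, 0.0),         # a, b
         2: (0.0, 100.0), 3: (100.0, 100.0)}     # c, d

def coeffs(m, tau):
    """A_m, B_m, D_m of the linear equation 0 = x*A_m + y*B_m + D_m (z = 0)."""
    xm, ym = nodes[m]
    x1, y1 = nodes[1]
    am = 2 * xm / (V * tau[m]) - 2 * x1 / (V * tau[1])
    bm = 2 * ym / (V * tau[m]) - 2 * y1 / (V * tau[1])
    dm = (V * tau[m] - V * tau[1]
          - (xm**2 + ym**2) / (V * tau[m])
          + (x1**2 + y1**2) / (V * tau[1]))
    return am, bm, dm

def locate(tau):
    a2, b2, d2 = coeffs(2, tau)
    a3, b3, d3 = coeffs(3, tau)
    # Two equations in two unknowns: solve the linear system directly.
    return np.linalg.solve(np.array([[a2, b2], [a3, b3]]),
                           np.array([-d2, -d3]))   # (x, y) relative to a(0, 0)

# Demo: synthesize tau_m for a source at (30, 40) and recover it.
src = np.array([30.0, 40.0])
r = {m: np.hypot(*(np.array(p) - src)) for m, p in nodes.items()}
tau = {m: (r[m] - r[0]) / V for m in nodes}
print(locate(tau))   # approximately [30. 40.]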
C. System Reliability

A forest monitoring system can be considered a high availability system, which should be reliable and stable. To achieve high system reliability, a protocol suite has been introduced.

D. Data Integrity Protocol

• Automatic hibernation of devices during very low power, to prevent data corruption in the memory card.
• A Message Queuing Mechanism to store and resend messages that failed to deliver due to connection failures.

1) Keep-alive Protocol: A keep-alive protocol relying on exchanging periodic keep-alive or heartbeat messages is used to detect radio link failures; a minimal sketch of this mechanism, combined with the message queue above, follows at the end of this section.

2) Time Synchronization Protocol: This protocol ensures that a correct clock time is maintained between all connected devices. If the actual local time cannot be determined, all devices of a single node are set to a single relative time irrespective of the actual local time, which eliminates the error introduced by an incorrect clock.

3) System Stability Protocol: This protocol minimizes the impact of device malfunctions, which are identified by the keep-alive protocol.

• When a Sound Processor is malfunctioning, all the connected Listening Posts are switched to connection establishment mode upon informing the agents about the incident. If a Listening Post can detect any nearby Sound Processor, a new connection will be established automatically until the malfunctioning device is replaced.
• When a Listening Post is malfunctioning, the sound detection and localization algorithms are adjusted accordingly to minimize the impact.
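The sketch below illustrates the spirit of the keep-alive and store-and-resend mechanisms described above. It is not the paper's protocol implementation: the heartbeat interval, failure threshold, message format and the send() transport are all assumptions.

# Illustrative keep-alive loop with a retry queue for undelivered messages.
# Timings, message format and the send() transport are assumptions.
import time
from collections import deque

HEARTBEAT_PERIOD_S = 30          # assumed heartbeat interval
MAX_MISSED = 3                   # declare the link down after 3 missed heartbeats

pending = deque()                # messages waiting to be (re)sent

def send(message) -> bool:
    """Placeholder for the radio transmit call; returns True on success."""
    return False                 # stub: always 'fails' in this sketch

def queue_or_send(message):
    # Store-and-resend: failed messages are kept until the link recovers.
    if not send(message):
        pending.append(message)

def heartbeat_loop():
    missed = 0
    while True:
        if send({"type": "keepalive", "ts": time.time()}):
            missed = 0
            while pending and send(pending[0]):   # flush the retry queue
                pending.popleft()
        else:
            missed += 1
            if missed >= MAX_MISSED:
                print("radio link failure detected")  # hand over to recovery logic
                missed = 0
        time.sleep(HEARTBEAT_PERIOD_S)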
E. Real-time Web Application

Numerous functions are carried out by the real-time web application, including, but not limited to, managing the devices connected to the network, assigning Listening Posts to Sound Processors, alerting users about significant incidents and deploying software updates. The front-end of the web application is developed using Angular [20], which is one of the most widely used JavaScript-based frameworks. All the system data are stored in a document-oriented database and exposed to the front-end through a JSON-based Representational State Transfer (REST) API, developed using ExpressJS [21], a rich JavaScript framework for NodeJS [22] for building applications and services.

V. FUTURE WORK

Sound Localization refers to a listener's ability to identify the location or origin of a detected sound in direction and distance. Due to the unavailability of a matching Sound Localization mechanism [23]–[26] suitable for the proposed architecture, a new method has been developed based on the principles of Acoustic Multilateration, which approximates the origin of a sound. Further practical tests should be carried out on this module to test and improve its accuracy.

VI. CONCLUSION

A new architecture for monitoring forests to detect illegal logging using chainsaw noise identification on a Wireless Sensor Network (WSN) has been proposed in this paper, as well as a methodology based on a Neural Network to accurately identify chainsaw acoustic signals. The noise recognition aspect along with the localization aspect of the system require continued research for a successful implementation in a real environment.

ACKNOWLEDGMENT

This work was initiated as a component of the final year research project at the Sri Lanka Institute of Information Technology (SLIIT), Malabe, Sri Lanka. The authors would like to express their sincere gratitude to the institute for providing excellent guidance throughout the project.
REFERENCES

[1] S. Lawson and L. MacFaul, Illegal Logging and Related Trade: Indicators of the Global Response. Chatham House London, 2010.
[2] L. Czúni and P. Z. Varga, "Lightweight acoustic detection of logging in wireless sensor networks," in The International Conference on Digital Information, Networking, and Wireless Communications (DINWC). Society of Digital Information and Wireless Communication, 2014, p. 120.
[3] L. Hema, D. Murugan, and R. M. Priya, "Wireless sensor network based conservation of illegal logging of forest trees," in Emerging Trends In New & Renewable Energy Sources And Energy Management (NCET NRES EM), 2014 IEEE National Conference On. IEEE, 2014, pp. 130–134.
[4] J. Papán, M. Jurečka, and J. Púchyová, "WSN for forest monitoring to prevent illegal logging," in Computer Science and Information Systems (FedCSIS), 2012 Federated Conference on. IEEE, 2012, pp. 809–812.
[5] L. Petrica and G. Stefan, "Energy-efficient WSN architecture for illegal deforestation detection," Int J Sensors Sensor Netw, vol. 3, no. 3, pp. 24–30, 2015.
[6] L. Czúni and P. Z. Varga, "Time domain audio features for chainsaw noise detection using WSNs," IEEE Sensors Journal, vol. 17, no. 9, pp. 2917–2924, 2017.
[7] "Rainforest Connection (RFCx): Phones Turned into Forest Guardians," 2017. [Online]. Available: https://rfcx.org/
[8] M. Babis, M. Duricek, V. Harvanova, and M. Vojtko, "Forest Guardian—Monitoring system for detecting logging activities based on sound recognition," Researching Solutions in Artificial Intelligence, Computer Graphics and Multimedia, IIT.SRC, vol. 2011, p. 1, 2011.
[9] nRF24L01+ Single Chip 2.4 GHz Transceiver Preliminary Product Specification v1.0. Nordic Semiconductor, 2008. [Online]. Available: https://www.sparkfun.com/datasheets/Components/SMD/nRF24L01Pluss_Preliminary_Product_Specification_v1_0.pdf
[10] Oitzu, "Fixing your cheap NRF24L01+ PA/LNA module." [Online]. Available: http://blog.blackoise.de/2016/02/fixing-your-cheap-nrf24l01-palna-module
[11] CC1310 SimpleLink Ultra-Low Power Sub-1 GHz Wireless MCU. Texas Instruments. [Online]. Available: http://www.ti.com/lit/ds/symlink/cc1310.pdf
[12] P. Cano, E. Batle, T. Kalker, and J. Haitsma, "A review of algorithms for audio fingerprinting," in Multimedia Signal Processing, 2002 IEEE Workshop on. IEEE, 2002, pp. 169–173.
[13] K. J. Piczak, "Environmental sound classification with convolutional neural networks," in Machine Learning for Signal Processing (MLSP), 2015 IEEE 25th International Workshop on. IEEE, 2015, pp. 1–6.
[14] Getting Started — TensorFlow. Google, Inc., Aug. 2017. [Online]. Available: https://www.tensorflow.org/get_started
[15] Berry Conda. [Online]. Available: https://github.com/jjhelmus/berryconda
[16] pydub — Manipulate audio with a simple and easy high level interface. [Online]. Available: https://github.com/jiaaro/pydub
[17] The SciPy library. [Online]. Available: https://www.scipy.org/scipylib/index.html
[18] Multilateration. [Online]. Available: http://students.cec.wustl.edu/~andrewcorrubia/multilateration.html
[19] Multilateration — Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Multilateration
[20] Angular — What is Angular? Google, Inc., Aug. 2017. [Online]. Available: https://angular.io/docs
[21] Express — Fast, unopinionated, minimalist web framework for Node.js, 2017. [Online]. Available: https://expressjs.com
[22] Node.js, Aug. 2017. [Online]. Available: https://hapijs.com
[23] A. Pourmohammad and S. M. Ahadi, "N-dimensional N-microphone sound source localization," EURASIP Journal on Audio, Speech, and Music Processing, vol. 2013, no. 1, p. 27, 2013.
[24] C. F. Scola and M. D. B. Ortega, "Direction of arrival estimation: A two microphones approach," 2010.
[25] M. Imran, A. Hussain, N. M. Qazi, and M. Sadiq, "A methodology for sound source localization and tracking: Development of 3D microphone array for near-field and far-field applications," in Applied Sciences and Technology (IBCAST), 2016 13th International Bhurban Conference on. IEEE, 2016, pp. 586–591.
[26] Y. Guo, J. Wu, and S. Zhu, "SRP-PHAT source location algorithm based on chaos artificial bee colony algorithm," 2015.
