Integration of Neural Networks into Smart Sensor Networks
Abstract—Within the scope of sensor networks, there exists a number of obstacles: lower processing power, constrained power consumption, and maintenance and deployment issues when operating at a larger scale. Bandwidth availability and communication latency are also important factors to consider. This article discusses a distributed sensor network model that combines a low-power approach with neural networks to enable richer local processing, for faster responses and improved overall system efficiency. The physical deployment model used Intel Movidius VPU modules and Raspberry Pi boards to investigate and model the neural network integration. As part of the architecture, the Caffe and TensorFlow frameworks were used to process graphical and numerical data. Certain abstraction layers had to be developed to facilitate smoother horizontal communication between the units. The system was tested with both graphical inputs, such as images or video feeds, and numerical information, such as analog sensor data.

Keywords—Neural Networks, Sensor Networks, Machine Learning, Deep Learning, Smart Sensors, Distributed Systems, Low-Power Solutions, Scalable Architecture

I. INTRODUCTION

In today's environment neural networks are finding new applications in many branches of industry. However, most major solutions apply the cloud computing approach, where the data is funnelled to central servers for processing; this introduces issues of bandwidth limitations, network access and latency in the context of larger-scale data acquisition and transducer networks. For example, a smart building may wish to uphold the privacy of the collected data, while tasks such as image and video processing would require significant amounts of computational power. These sorts of tasks can greatly benefit from a compact and smart data processing method for large data sets or big-data structures, as well as from dynamic resource allocation and system expansion. Examples include a situation where the currently allocated system resources are insufficient for the presented task, a task that requires new classes of supplied information, graphical or otherwise, or simply a task that this particular system has not dealt with before. To solve this problem, the software layer, based on a neural network, would be implemented in such a way that if a new device or computing resource is connected to the system, it would "inherit" the necessary knowledge and information from the other modules present. This approach would make the system more easily scalable and, as a result, better able to deal with unexpected situations and more complex problems.

II. OVERVIEW OF RELATED WORK

Neural network workloads can be accelerated in a number of ways, either by means of specialized hardware or via software optimization. Nowadays there are a number of processors and devices developed specifically to work with neural networks; below is a quick overview of currently available solutions. Mobileye EyeQ is a specialized processor that accelerates machine vision processing, with its intended use lying in autonomous vehicles. The Google Tensor Processing Unit (Google TPU) is a tensor processor of the neural processing class, a specialized integrated system developed by Google and intended to be used in tandem with the machine learning library TensorFlow, as part of a new approach to big data processing. The Intel Nervana Neural Network Processor (NNP) is the first commercially available tensor processor intended for training neural networks using deep learning. A complementary example of such an approach is IBM TrueNorth, a neuromorphic processor implemented around neuron interaction rather than arithmetic modeling. Lastly, in terms of hardware, there is the Intel Movidius Myriad 2, a multicore AI accelerator based on the VLIW (Very Long Instruction Word) architecture. With additional tie-ins for video processing, it serves as the basis for the Movidius Neural Compute Stick, a USB device that employs deep neural network technology [1]. Similarly, in terms of software, there are a few prominent frameworks at the moment that were also employed in this work. One is the TensorFlow framework, an open-source library for machine learning developed by Google and aimed at building and training neural networks, with the goal of automatic localization and classification of figures at perception levels close to those of a human. The other framework used is Caffe (Convolutional Architecture for Fast Feature Embedding), a deep learning environment developed by Yangqing Jia [2].
III. IMAGE RECOGNITION AND GRAPHIC PROCESSING CAPABILITY INTEGRATION

Computer vision allows a system to discover, analyze, track and classify objects. However, in many applications the main task is offloaded to a remote server, or may incur high performance and monetary costs if done locally. Application examples include medicine, industrial settings and the military [3] [4] [5]. For example, some of the most advanced systems are capable of sending a missile to a given area instead of a specific target; upon arrival, the missile locates the targets locally, after navigating to the correct location based on received video data. Another new area of application is the autonomous vehicle industry, including submersible vehicles, aircraft, and land-based vehicles such as cars, trains and robots. In these applications the level of autonomy can vary from fully autonomous, for example UAVs, to assistive, where the machine vision data is used to support the driver or pilot and is limited by the application and the available computing and power resources.

To test the graphic processing capabilities being integrated into the structure, the above applications were considered and used as a basis for the testing example. Given that the compact size of the RPi and Movidius NCS positively impacts their mobility, nodes with the added functionality could also be used in self-driving cars or traffic-based applications. As mentioned previously, the low power consumption of the overall unit also positively reflects on the viability of such applications. In order to validate the core functionality being integrated, the following stages are used:

Search for and creation of the dataset to facilitate the learning
Training of the deep neural network
Converting the input graphical flow (either static images or video) into the required format
Graphical processing of the data using the neural network on the Movidius NCS

To perform image recognition, during the initial stages the classical Convolutional Neural Network (CNN) model can be used, for example AlexNet, shown in Figure 2 [6] [7].

Figure 2. Network architecture illustration

As a starting point for testing, the dataset from the German Traffic Sign Recognition Benchmark [8] was used; in the future, other types of datasets, as well as developments to the current one, are planned. To minimize the time needed to train a neural network, the method of adapting and improving already established network models using the Caffe framework is employed. The framework describes the network in a specialized "prototxt" file format, where the layers are described in a style similar to JSON, as seen in Figure 3. The trained network is stored under a "caffemodel" file extension.

Figure 3. An example of a layer within the neural network
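As an illustration of the "prototxt" layer notation, a convolution layer similar to the first layer of AlexNet could be declared as follows (a generic Caffe example for orientation, not the paper's exact Figure 3 contents):

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"   # input blob
  top: "conv1"     # output blob
  convolution_param {
    num_output: 96   # number of filters
    kernel_size: 11
    stride: 4
  }
}
```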
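The format-conversion stage listed earlier (turning static images or video frames into the network's required input format) can be sketched in Python. The 227×227 input size and the per-channel means below are assumptions typical of an AlexNet-style Caffe deployment, not values taken from the paper:

```python
import numpy as np

def to_ncs_tensor(image, size=(227, 227), mean=(104.0, 117.0, 123.0)):
    """Convert an RGB uint8 image of shape (H, W, 3) into a
    half-precision tensor such as the Neural Compute Stick consumes.

    `size` and `mean` are illustrative AlexNet-style defaults."""
    h, w, _ = image.shape
    # Nearest-neighbour resize via index arithmetic; this keeps the
    # sketch dependency-free (a real pipeline would use OpenCV or PIL).
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = image[rows][:, cols].astype(np.float32)
    resized -= np.array(mean, dtype=np.float32)  # per-channel mean subtraction
    return resized.astype(np.float16)
```

For a 640×480 camera frame this yields a (227, 227, 3) float16 array ready to be handed to the inference interface.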
The obtained prototxt and caffemodel files are required to create the "graph" file, which is in turn the input file for the neural chip interface. The file can be generated using the specialized utility "mvNCCompile". In order to utilize the established infrastructure for video feed processing, a few slight modifications need to take place. After the video feed is routed to the network, the "categories" and graph files need to be modified to reflect the names of the classes within the trained network, in the same order as during its learning procedure. This process can be automated in the same way as the dataset generation. By running the modified software in tandem with the Movidius NCS, we are able to recognize a road sign presented to the unit via a video feed, as can be seen in Figure 4.

To implement the aforementioned system with dynamic architecture, the following set-up structure was established: a microcontroller board consisting of a Raspberry Pi, the Movidius NCS modules and the Emotiv Epoc+; the graphical representation of the setup can be seen in Figure 5 below.
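The categories-file automation mentioned in Section III can be sketched as below. The assumed (not stated in the paper) dataset layout is one sub-directory per class, named so that lexicographic order matches the label order used in training, as with the zero-padded GTSRB class folders:

```python
import os

def write_categories(dataset_dir, out_path):
    # One sub-directory per class; sorting the names is assumed to
    # reproduce the label order used during training (GTSRB folders
    # are zero-padded numbers, so lexicographic order suffices).
    classes = sorted(
        d for d in os.listdir(dataset_dir)
        if os.path.isdir(os.path.join(dataset_dir, d))
    )
    with open(out_path, "w") as f:
        f.write("\n".join(classes) + "\n")
    return classes
```

Running this against the training dataset after each retraining keeps the categories file consistent with the class order baked into the compiled graph.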
Figure 7. The graph of the recorded signals
utilization and placement based on specific applications and use models; however, they will not be discussed in detail within the scope of this work. The above described structure will also further benefit from leveraging the network aspect of the overall architecture, as described in Section VI.

VI. FUTURE WORK

Future development of the functionality introduced in this work currently encompasses the following:

Incorporating load distribution across multiple NCS units
Integrating the video processing component into a more complex system
Improving the dynamic resource allocation
Improving the "shared networks" architecture implementation

The last point of the list refers to the ability of our system to "share" the networks it has already trained with other nodes or units, if a unit determines that it is undertaking a similar task. The obtained network may then be improved based on newly received data, and an updated version of the network is shared back via the "common library" function, which is implemented by relying on the network structure of the system, where the nodes are able to communicate internally without the need for external network connections.

VII. CONCLUSION

In conclusion, the nodes of the transducer network were able to assimilate the ability to use neural network approaches to process bigger volumes of more complex data. Furthermore, by incorporating the Visual Processing Units (VPUs) into the new architecture, the nodes have gained the ability to perform more sophisticated analysis and decision making based on graphical data such as static imaging or dynamic video feeds. The improved processing capability allows the system to become more adaptable to different environments and more efficient in terms of the number of units necessary to cover a specific area or obtain a desired volume of data. Furthermore, the increased processing power aided by the neural network integration adds less than 1 W of power budget increase per NCS [13], which means that the overall power efficiency of the system as a whole, taking into consideration the reduction in the number of nodes necessary, has also improved. The system has performed block-level testing of the ability to perform graphical data analysis using the architecture outlined in Section III, as well as the complex numerical data processing discussed in Section IV, which also included the incorporation of a software layer to allow more versatile data inputs.

REFERENCES

[1] M. H. Ionica and D. Gregg, "The Movidius Myriad architecture's potential for scientific computing," IEEE Micro, vol. 35, no. 1, pp. 6–14, 2015.
[2] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," in Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 2014, pp. 675–678.
[3] A. Makaenko and V. Kalaida, "Technique of localization of face image for the systems of videocontrol on the bases of neuronet," Thompson Polytechnic University Journal, vol. 309, no. 8, 2006.
[4] G. J. Beach, G. Moody, J. Burkowski, and C. J. Jacobus, "Portable composable machine vision system for identifying projectiles," Jan. 11, 2018, US Patent App. 15/675,763.
[5] M. Nielsen, "Using neural nets to recognize handwritten digits," Neural Networks and Deep Learning, 2015.
[6] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, 2016.
[7] R. Liao, A. Schwing, R. Zemel, and R. Urtasun, "Learning deep parsimonious representations," in Advances in Neural Information Processing Systems, 2016, pp. 5076–5084.
[8] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition," Neural Networks, 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0893608012000457
[9] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He, "Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks," Scientific Reports, vol. 6, p. 38565, 2016.
[10] L. Yuan and J. Cao, "Patients' EEG data analysis via spectrogram image with a convolution neural network," in International Conference on Intelligent Decision Technologies. Springer, 2017, pp. 13–21.
[11] F. Shir, "Mind-reading system - a cutting-edge technology," Mind, vol. 6, no. 7, 2015.
[12] I. Lobachev and E. Cretu, "Smart sensor network for smart buildings," in Information Technology, Electronics and Mobile Communication Conference (IEMCON), 2016 IEEE 7th Annual. IEEE, 2016, pp. 1–7.
[13] F. Conti and L. Benini, "A ultra-low-energy convolution engine for fast brain-inspired vision in multicore clusters," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015. IEEE, 2015, pp. 683–688.