
Lecture Notes in Networks and Systems 22

Michael E. Auer
Danilo G. Zutin Editors

Online Engineering & Internet of Things
Proceedings of the 14th International
Conference on Remote Engineering and
Virtual Instrumentation REV 2017, held
15–17 March 2017, Columbia University,
New York, USA
Lecture Notes in Networks and Systems

Volume 22

Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Lecture Notes in Networks and Systems” publishes the latest developments
in Networks and Systems—quickly, informally and with high quality. Original
research reported in proceedings and post-proceedings represents the core of LNNS.
Volumes published in LNNS embrace all aspects and subfields of, as well as new
challenges in, Networks and Systems.
The series contains proceedings and edited volumes in systems and networks,
spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor
Networks, Control Systems, Energy Systems, Automotive Systems, Biological
Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems,
Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems,
Robotics, Social Systems, Economic Systems, and others. Of particular value to both
the contributors and the readership are the short publication timeframe and the
world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.
The series covers the theory, applications, and perspectives on the state of the art
and future developments relevant to systems and networks, decision making,
control, complex processes and related areas, as embedded in the fields of
interdisciplinary and applied sciences, engineering, computer science, physics,
economics, social, and life sciences, as well as the paradigms and methodologies
behind them.

Advisory Board
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School
of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP,
São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University,
Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois
at Chicago, Chicago, USA and Institute of Automation, Chinese Academy of Sciences,
Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta,
Alberta, Canada and Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, KIOS Research Center for Intelligent Systems and Networks,
Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

More information about this series at
Michael E. Auer
Danilo G. Zutin
Editors

Online Engineering & Internet of Things

Proceedings of the 14th International
Conference on Remote Engineering
and Virtual Instrumentation REV 2017, held
15–17 March 2017, Columbia University,
New York, USA

Michael E. Auer
Carinthia University of Applied Sciences
Villach, Austria

Danilo G. Zutin
Carinthia University of Applied Sciences
Villach, Austria

ISSN 2367-3370 ISSN 2367-3389 (electronic)

Lecture Notes in Networks and Systems
ISBN 978-3-319-64351-9 ISBN 978-3-319-64352-6 (eBook)
DOI 10.1007/978-3-319-64352-6
Library of Congress Control Number: 2017953532

© Springer International Publishing AG 2018

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature

The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The REV conference is the annual conference of the International Association of
Online Engineering (IAOE) and the Global Online Laboratory Consortium (GOLC).
REV2017 was the 14th in a series of annual events concerning the area of
Remote Engineering and Virtual Instrumentation. The general objective of this
conference is to contribute to and discuss fundamentals, applications, and experiences
in the field of Remote Engineering and Virtual Instrumentation and related new
technologies like Internet of Things, Industry 4.0, Cyber-security, M2M, and Smart
Objects. Another objective of the conference is to discuss guidelines and new
concepts for education at different levels on the above-mentioned topics, including
emerging technologies in learning, MOOCs & MOOLs, Open Resources, and
STEM pre-university education.
REV2017 was organized in cooperation with Columbia University, New
York, and the International E-Learning Association (IELA) from March 15 to 17,
2017, in New York.
REV2017 offered again an exciting technical program as well as networking
opportunities. Outstanding scientists accepted the invitation to give keynote speeches:
• George Giakos
IEEE Fellow, Professor and Chair, Director of Graduate Program, Manhattan
College, ECE Department, Riverdale NY, USA
• Greg Dixson
Director of Industrial Electronics, Phoenix Contact USA
• Helmut Krcmar
Chair of Information Systems, Academic Director of SAP University
Competence Center Munich, Technical University of Munich, Germany
• Robert J. Rencher
Sr. Systems Engineer and Co-Leader of the Boeing Enterprise Internet of Things/
Digital Business Strategy team, The Boeing Company, Chicago IL, USA

• Tarek M. Sobh
Senior Vice President for Graduate Studies and Research and Dean of the
School of Engineering, University of Bridgeport, USA
It was in 2004 when we started this conference series in Villach, Austria,
together with some visionary colleagues and friends from around the world. When
we started our REV endeavor, the Internet was just 10 years old! Since then, the
situation regarding Online Engineering and Virtual Instrumentation has radically
changed. Both are today typical working areas for most engineers, and are
inseparably connected with
• Internet of Things
• Cyber-physical Systems
• Collaborative Networks and Grids
• Cyber-cloud Technologies
• Service Architectures
to name only a few.
With our conference in 2004 (thirteen years ago), we tried to focus on the
upcoming use of the Internet for engineering tasks and the problems around it –
with great success, as we can see today.
The following main themes have been discussed in detail:
• Online Engineering
• Cyber-physical Systems
• Internet of Things
• Industry 4.0
• Cyber-security
• M2M Concepts
• Virtual and Remote Laboratories
• Remote Process Visualization and Virtual Instrumentation
• Remote Control and Measurement Technologies
• Networking, Grid and Cloud Technologies
• Mixed-reality Environments
• Telerobotics and Telepresence, Cobots
• Collaborative Work in Virtual Environments
• Smart City, Smart Energy, Smart Buildings, Smart Homes
• Innovative Organizational and Educational Concepts
• Standards and Standardization Proposals
• Applications and Experiences
The following submission types were accepted:
• Full Paper, Short Paper
• Work in Progress, Poster
• Special Sessions
• Round Table Discussions, Workshops, Tutorials

All contributions were subject to a double-blind review. The review process was
very competitive. We had to review nearly 300 submissions. A team of 129
reviewers did this terrific job. My special thanks go to all of them.
Due to the time and conference schedule restrictions, we could finally accept
only the best 116 submissions for presentation. The conference had again more than
140 participants from 31 countries across all continents.
REV2018 will be held in Düsseldorf, Germany, and REV2019 in Bangalore, India.

Michael E. Auer
REV General Chair
Online Engineering & Internet of Things –
Proceedings of the 14th International
Conference on Remote Engineering
and Virtual Instrumentation (REV 2017)


General Chair

Michael E. Auer

International Advisory Board

Kimberly DeLong MIT, Cambridge MA, USA

David Guralnick President of the International E-Learning
Association (IELA) and Kaleidoscope
Learning, USA
Ingvar Gustavsson Blekinge Institute of Technology, Sweden
Bert Hesselink Stanford University, USA
Zorica Nedic University of South Australia
Neil Albert Salonen President, University of Bridgeport, USA
Cornel Samoila University of Brasov, Romania

Conference Co-chairs

Doru Ursutiu IAOE President, Romania

Abul Azad GOLC President, USA
Tarek Sobh University of Bridgeport, USA


Program Committee Chairs

Danilo Garbi Zutin, Austria

Elif Kongar, USA

ASEE Liaison

Navarun Gupta, USA

IEEE Liaison

Russ Meier, USA

Workshop and Tutorial Chair

Andreas Pester, Austria

Special Session Chair

Igor Titov, Russia

Demonstration and Poster Chair

Teresa Restivo, Portugal

Publication Chair and Web Master

Sebastian Schreiter, France

International Program Committee

Akram Abu-Aisheh Hartford University, USA

Laiali Almazaydeh Al-Hussein Bin Talal University, Jordan
Yacob Astatke Morgan State University, USA
Gustavo Alves ISEP Porto, Portugal
Chris Bach University of Bridgeport, USA

Nael Barakat Grand Valley State University, USA

David Boehringer University of Stuttgart, Germany
Michael Callaghan University of Ulster, Northern Ireland
Manuel Castro UNED Madrid, Spain
Arthur Edwards University of Colima, Mexico
Torsten Fransson KTH Stockholm, Sweden
Javier Garcia-Zubia University of Deusto, Spain
Denis Gillet EPFL Lausanne, Switzerland
Olaf Graven Buskerud University College, Norway
Ian Grout University of Limerick, Ireland
Christian Guetl Graz University of Technology, Austria
Alexander Kist University of Southern Queensland, Australia
Petros Lameras Serious Games Lab, School of Computing,
Electronics and Mathematics, Coventry
Reinhard Langmann CUAS Dusseldorf, Germany
Ananda Maiti University of Southern Queensland, Australia
Zorica Nedic University of South Australia, Australia
Ingmar Riedel-Kruse Stanford University, USA
Hamadou Saliah-Hassane TÉLUQ, Montréal, Canada
Franz Schauer Tomas Bata University, Czech Republic
Juarez Silva University of Santa Catarina, Brazil
James Uhomoibhi University of Ulster, UK
Vladimir Uskov Bradley University, USA
Matthias Christoph Utesch Technical University of Munich, Germany
Igor Verner Technion Haifa, Israel
Katarina Zakova Slovak University of Technology, Slovakia

Contents

Internet of Things
Cloud-Based Industrial Control Services . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Reinhard Langmann and Michael Stiller
Wireless Development Boards to Connect the World . . . . . . . . . . . . . . . . 19
Pedro Plaza, Elio Sancristobal, German Carro, Manuel Castro,
and Elena Ruiz
CHS-GA: An Approach for Cluster Head Selection
Using Genetic Algorithm for WBANs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Roopali Punj and Rakesh Kumar
Proposal IoT Architecture for Macro and Microscale Applied
in Assistive Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Carlos Solon S. Guimarães, Jr., Renato Ventura B. Henriques,
Carlos Eduardo Pereira, and Wagner da Silva Silveira
Using Industrial Internet of Things to Support Energy Efficiency
and Management: Case of PID Controller . . . . . . . . . . . . . . . . . . . . . . . . 44
Tom Wanyama
MODULARITY Applied to SMART HOME . . . . . . . . . . . . . . . . . . . . . . 56
Doru Ursuţiu, Andrei Neagu, Cornel Samoilă, and Vlad Jinga
Development of M.Eng. Programs with a Focus on Industry 4.0
and Smart Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Michael D. Justason, Dan Centea, and Lotfi Belkhir
Remote Acoustic Monitoring System for Noise Sensing . . . . . . . . . . . . . . 77
Unai Hernandez-Jayo, Rosa Ma Alsina-Pagès, Ignacio Angulo,
and Francesc Alías


Testing Security of Embedded Software Through Virtual
Processor Instrumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Andreas Lauber and Eric Sax

Virtual and Remote Laboratories

LABCONM: A Remote Lab for Metal Forming Area . . . . . . . . . . . . . . . 97
Lucas B. Michels, Luan C. Casagrande, Vilson Gruber, Lirio Schaeffer,
and Roderval Marcelino
A Virtual Proctor with Biometric Authentication for Facilitating
Distance Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Zhou Zhang, El-Sayed Aziz, Sven Esche, and Constantin Chassapis
From a Hands-on Chemistry Lab to a Remote Chemistry Lab:
Challenges and Constrains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
San Cristobal Elio, J.P. Herranz, German Carro, Alfonso Contreras,
Eugenio Muñoz Camacho, Felix Garcia-Loro, and Manuel Castro Gil
Advanced Intrusion Prevention for Geographically Dispersed
Higher Education Cloud Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
C. DeCusatis, P. Liengtiraphan, and A. Sager
Remote Laboratory for Learning Basics of Pneumatic Control . . . . . . . 144
Brajan Bajči, Jovan Šulc, Vule Reljić, Dragan Šešlija, Slobodan Dudić,
and Ivana Milenković
The Augmented Functionality of the Physical Models of Objects
of Study for Remote Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Mykhailo Poliakov, Karsten Henke, and Heinz-Dietrich Wuttke
More Than “Did You Read the Script?” . . . . . . . . . . . . . . . . . . . . . . . . . 160
Daniel Kruse, Robert Kuska, Sulamith Frerich, Dominik May,
Tobias R. Ortelt, and A. Erman Tekkaya
Collecting Experience Data from Remotely Hosted
Learning Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Félix J. García Clemente, Luis de la Torre, Sebastián Dormido,
Christophe Salzmann, and Denis Gillet
“Remote Wave Laboratory” with Embedded
Simulation – Real Environment for Waves Mastering . . . . . . . . . . . . . . . 182
Franz Schauer, Michal Gerza, Michal Krbecek, and Miroslava Ozvoldova
Remote Laboratories: For Real Time Access to Experiment Setups
with Online Session Booking, Utilizing a Database and Online
Interface with Live Streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
B. Kalyan Ram, S. Arun Kumar, S. Prathap, B. Mahesh,
and B. Mallikarjuna Sarma

Web Experimentation on Virtual and Remote Laboratories . . . . . . . . . . 205
Daniel Galan, Ruben Heradio, Luis de la Torre, Sebastián Dormido,
and Francisco Esquembre
How to Leverage Reflection in Case of Inquiry Learning?
The Study of Awareness Tools in the Context of Virtual
and Remote Laboratory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Rémi Venant, Philippe Vidal, and Julien Broisin
Role of Wi-Fi Data Loggers in Remote Labs Ecosystem . . . . . . . . . . . . . 235
Venkata Vivek Gowripeddi, B. Kalyan Ram, J. Pavan,
C.R. Yamuna Devi, and B. Sivakumar
Flipping the Remote Lab with Low Cost Rapid
Prototyping Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
J. Chacón, J. Saenz, L. de la Torre, and J. Sánchez
Remote Experimentation with Massively Scalable
Online Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Lars Thorben Neustock, George K. Herring, and Lambertus Hesselink
Object Detection Resource Usage Within a Remote Real-Time
Video Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Mark Smith, Ananda Maiti, Andrew D. Maxwell, and Alexander A. Kist
Integrating a Wireless Power Transfer System into Online
Laboratory: Example with NCSLab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Zhongcheng Lei, Wenshan Hu, Hong Zhou, and Weilong Zhang
Spreading the VISIR Remote Lab Along Argentina.
The Experience in Patagonia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Unai Hernandez-Jayo, Javier Garcia-zubia, Alejandro Francisco Colombo,
Susana Marchisio, Sonia Beatriz Concari, Federico Lerro,
María Isabel Pozzo, Elsa Dobboletta, and Gustavo R. Alves
Educational Scenarios Using Remote Laboratory VISIR
for Electrical/Electronic Experimentation . . . . . . . . . . . . . . . . . . . . . . . . . 298
Felix Garcia-Loro, Ruben Fernandez, Mario Gomez, Hector Paz,
Fernando Soria, María Isabel Pozzo, Elsa Dobboletta, André Fidalgo,
Gustavo Alves, Elio Sancristobal, Gabriel Diaz, and Manuel Castro

Use and Application of Remote and Virtual Labs in Education

Robot Online Learning Through Digital Twin Experiments:
A Weightlifting Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Igor Verner, Dan Cuperman, Amy Fang, Michael Reitman, Tal Romm,
and Gali Balikin

Interactive Platform for Embedded Software Development Study . . . . . 315
Galyna Tabunshchyk, Dirk Van Merode, Peter Arras, Karsten Henke,
and Vyacheslav Okhmak
Integrated Complex for IoT Technologies Study . . . . . . . . . . . . . . . . . . . 322
Anzhelika Parkhomenko, Artem Tulenkov, Aleksandr Sokolyanskii,
Yaroslav Zalyubovskiy, and Andriy Parkhomenko
Incorporating a Commercial Biology Cloud Lab into
Online Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Ingmar H. Riedel-Kruse
Learning to Program in K12 Using a Remote Controlled Robot:
RoboBlock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Javier García-Zubía, Ignacio Angulo, Gabriel Martínez-Pieper,
Pablo Orduña, Luis Rodríguez-Gil, and Unai Hernandez-Jayo
Spatial Learning of Novice Engineering Students Through Practice
of Interaction with Robot-Manipulators . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Igor Verner and Sergei Gamer
Concurrent Remote Group Experiments in the Cyber Laboratory . . . . . 367
Nobuhiko Koike
The VISIR+ Project – Preliminary Results of the Training Actions . . . . 375
M.C. Viegas, G. Alves, A. Marques, N. Lima, C. Felgueiras, R. Costa,
A. Fidalgo, I. Pozzo, E. Dobboletta, J. Garcia-Zubia, U. Hernandez,
M. Castro, F. Loro, Danilo Garbi Zutin, and C. Kreiter
Laboratory Model of Coupled Electrical Drives for Supervision
and Control via Internet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Milan Matijević, Željko V. Despotović, Miloš Milanović, Nikola Jović,
and Slobodan Vukosavić
Online Course on Cyberphysical Systems with Remote Access
to Robotic Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Janusz Zalewski and Fernando Gonzalez
Models and Smart Adaptive Interfaces for the Improvement
of the Remote Laboratories User Experience in Education . . . . . . . . . . . 416
Luis Felipe Zapata Rivera and Maria M. Larrondo Petrie
Empowerment of University Education Through Internet
Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Abdallah Al-Zoubi
Expert Competence in Remote Diagnostics - Industrial Interests,
Educational Goals, Flipped Classroom & Laboratory Settings . . . . . . . . 438
Lena Claesson, Jenny Lundberg, Johan Zackrisson, Sven Johansson,
and Lars Håkansson

Parallel Use of Remote Labs and Pocket Labs in Engineering
Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Thomas Klinger, Danilo Garbi Zutin, and Christian Madritsch
The Effectiveness of Online-Laboratories for Understanding Physics . . . . 459
David Boehringer and Jan Vanvinkenroye

Remote Control and Measurement Technologies

On the Fully Automation of the Vibrating String Experiment . . . . . . . . 469
Javier Tajuelo, Jacobo Sáenz, Jaime Arturo de la Torre, Luis de la Torre,
Ignacio Zúñiga, and José Sánchez
Identifying Partial Subroutines for Instrument Control Based
on Regular Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Ananda Maiti, Alexander A. Kist, and Andrew D. Maxwell
Internet of Things Applied to Precision Agriculture . . . . . . . . . . . . . . . . 499
Roderval Marcelino, Luan C. Casagrande, Renan Cunha, Yuri Crotti,
and Vilson Gruber
Computer Vision Application for Environmentally Conscious
Smart Painting Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Ahmed ElSayed, Gazi Murat Duman, Ozden Tozanli, and Elif Kongar
Remote Monitoring and Detection of Rail Track Obstructions . . . . . . . . 517
Mohammed Misbah Uddin, Abul K.M. Azad, and Veysel Demir
Improving Communication Between Unmanned Aerial Vehicles
and Ground Control Station Using Antenna Tracking Systems . . . . . . . 532
Sebastian Pop, Marius Cristian Luculescu, Luciana Cristea,
Constantin Sorin Zamfira, and Attila Laszlo Boer
Remote RF Testing Using Software Defined Radio . . . . . . . . . . . . . . . . . 540
Stephen Miller and Brent Horine
Remote Control of Large Manufacturing Plants Using Core
Elements of Industry 4.0. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Hasan Smajic and Niels Wessel

Games Engineering
Dinner Talk: A Language Learning Game Designed
for the Interactive Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Jacqueline Schuldt, Stefan Sachse, and Lilianne Buckens
The Experimento Game: Enhancing a Players’ Learning Experience
by Embedding Moral Dilemmas in Serious Gaming Modules . . . . . . . . . 561
Jacqueline Schuldt, Stefan Sachse, Verena Hetsch, and Kevin John Moss

The Finite State Trading Game: Developing a Serious Game
to Teach the Application of Finite State Machines
in a Stock Trading Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Matthias Utesch, Andreas Hauer, Robert Heininger, and Helmut Krcmar
A Serious Game for Learning Portuguese Sign
Language - “iLearnPSL” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
Marcus Torres, Vítor Carvalho, and Filomena Soares
The Implementation of MDA Framework in a Game-Based
Learning in Security Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
Jurike V. Moniaga, Maria Seraphina Astriani, Sharon Hambali,
Yangky Wijaya, and Yohanes Chandra
Industrial Virtual Environments and Learning Process . . . . . . . . . . . . . . 609
Jean Grieu, Florence Lecroq, Hadhoum Boukachour, and Thierry Galinho
How Game Design Can Enhance Engineering Higher Education:
Focused IT Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
Olga Dziabenko, Valentyna Yakubiv, and Lyubov Zinyuk
Physioland - A Serious Game for Rehabilitation of Patients
with Neurological Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Tiago Martins, Vítor Carvalho, and Filomena Soares

Human Computer Interfaces, Usability, Reusability, Accessibility

The Development of ICT Tools for E-inclusion Qualities. . . . . . . . . . . . . 645
Dena Hussain
Insights Gained from Tracking Users’ Movements Through
a Cyberlearning System’s Mediation Interface . . . . . . . . . . . . . . . . . . . . . 652
Daniel Stuart Brogan, Debarati Basu, and Vinod K. Lohani
Practical Use of Virtual Assistants and Voice User Interfaces
in Engineering Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
Michael James Callaghan, Victor Bogdan Putinelu, Jeremy Ball,
Jorge Caballero Salillas, Thibault Vannier, Augusto Gomez Eguíluz,
and Niall McShane
Approaching Emerging Technologies: Exploring Significant
Human-Computer Interaction in the Budget-Limited Classroom . . . . . . 672
James Wolfer
Touching Is Believing - Adding Real Objects to Virtual Reality . . . . . . . 681
Paulo Menezes, Nuno Gouveia, and Bruno Patrão
The Importance of Eye-Tracking Analysis in Immersive
Learning - A Low Cost Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
Paulo Menezes, José Francisco, and Bruno Patrão

Augmented Reality-Based Interactive Simulation Application
in Double-Slit Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
Tao Wang, Han Zhang, Xiaoru Xue, and Su Cai
Developing Metacognitive Skills for Training
on Information Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
Jesus Cano, Roberto Hernandez, Rafael Pastor, Salvador Ros,
Llanos Tobarra, and Antonio Robles-Gomez
Optimization of the Power Flow in a Smart Home . . . . . . . . . . . . . . . . . 721
Linfeng Zhang and Xingguo Xiong
A Virtualized Computer Network for Salahaddin University
New Campus of HTTP Services Using OPNET Simulator . . . . . . . . . . . 731
Tarik A. Rashid and Ammar O. Barznji

Online Engineering
GIFT - An Integrated Development and Training System
for Finite State Machine Based Approaches . . . . . . . . . . . . . . . . . . . . . . . 743
Karsten Henke, Tobias Fäth, René Hutschenreuter,
and Heinz-Dietrich Wuttke
A Web-Based Tool for Biomedical Signal Management . . . . . . . . . . . . . . 758
S.D. Cano-Ortiz, R. Langmann, Y. Martinez-Cañete, L. Lombardia-Legra,
F. Herrero-Betancourt, and H. Jacques
Optimization of Practical Work for Programming Courses
in the Context of Distance Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
Amadou Dahirou Gueye, Pape Mamadou Djidiack Faye,
and Claude Lishou
Enabling the Automatic Generation of User Interfaces
for Remote Laboratories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
Wissam Halimi, Christophe Salzmann, Hagop Jamkojian,
and Denis Gillet
A Practical Approach to Teaching Industry 4.0 Technologies . . . . . . . . . 794
Tom Wanyama, Ishwar Singh, and Dan Centea
Design of WEB Laboratory for Programming and Use
of an FPGA Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
Nikola Jović and Milan Matijević
Remote Triggered Software Defined Radio Using GNU Radio . . . . . . . . 822
Jasveer Singh T. Jethra, Pavneet Singh, and Kunal Bidkar

Open Educational Resources

MOOC in a School Environment: ODL Project . . . . . . . . . . . . . . . . . . . . 833
Olga Dziabenko and Eleftheria Tsourlidaki
Survey and Analysis of the Application of Massive Open Online
Courses (MOOCs) in the Engineering Education in China . . . . . . . . . . . 840
Yu Long, Man Zhang, and Weifeng Qiao
Conversion of a Software Engineering Technology Program
to an Online Format: A Work in Progress and Lessons Learned . . . . . . 851
Jeff Fortuna, Michael D. Justason, and Ishwar Singh
Increasing the Value of Remote Laboratory Federations
Through an Open Sharing Platform: LabsLand . . . . . . . . . . . . . . . . . . . 859
Pablo Orduña, Luis Rodriguez-Gil, Javier Garcia-Zubia, Ignacio Angulo,
Unai Hernandez, and Esteban Azcuenaga
Standardization Layers for Remote Laboratories as Services
and Open Educational Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
Wissam Halimi, Christophe Salzmann, Denis Gillet,
and Hamadou Saliah-Hassane

Present and Future Trends Including Social and Educational Aspects

Innovative Didactic Laboratories and School Dropouts:
A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
Carole Salis, Marie Florence Wilson, Fabrizio Murgia,
and Stefano Leone Monni
Intellectual Flexible Platform for Smart Beacons . . . . . . . . . . . . . . . . . . . 895
Galyna Tabunshchyk and Dirk Van Merode
An Approach for Implementation of Artificial Intelligence
in Automatic Network Management and Analysis . . . . . . . . . . . . . . . . . . 901
Avishek Datta, Aashi Rastogi, Oindrila Ray Barman, Reynold D’Mello,
and Omar Abuzaghleh
Investigation of Music and Colours Influences on the Levels
of Emotion and Concentration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
Doru Ursuţiu, Cornel Samoilă, Stela Drăgulin, and Fulvia Anca Constantin
Framework for the Development of a Cyber-Physical Systems
Learning Centre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
Dan Centea, Ishwar Singh, and Mo Elbestawi

Applications and Experiences

The Use of eLearning in Medical Education and Healthcare
Practice – A Review Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
Blanka Klimova
Efficiency and Prospects of Webinars as a Method of Interactive
Communication in the Humanities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
Natalya Nikolaevna Petrova, Lyudmila Pavlovna Sidorenko,
Svetlana Germanovna Absalyamova, and Rustem Lukmanovich Sakhapov
Port Logistics: Improvement of Import Process Using RFID . . . . . . . . . 949
Ignacio Angulo, Unai Hernandez-Jayo, and Javier García-Zubia
Integration of an LMS, an IR and a Remote Lab . . . . . . . . . . . . . . . . . . 957
Ana Maria Beltran Pavani, William de Souza Barbosa, Felipe Calliari,
Daniel B. de C Pereira, Vanessa A. Palomo Lima,
and Giselen Pestana Cardoso
Artificial Intelligence and Collaborative Robot to Improve
Airport Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
Frédéric Donadio, Jérémy Frejaville, Stanislas Larnier,
and Stéphane Vetault
Methodological Proposal for Use of Virtual Reality VR
and Augmented Reality AR in the Formation of Professional Skills
in Industrial Maintenance and Industrial Safety . . . . . . . . . . . . . . . . . . . 987
Jose Divitt Velosa, Luis Cobo, Fernando Castillo, and Camilo Castillo
Sketching 3D Immersed Experiences Rapidly by Hand
Through 2D Cross Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
Frode Eika Sandnes
Analyzing Modular Robotic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
Reem Alattas
An Educational Physics Laboratory in Mobile Versus Room
Scale Virtual Reality - A Comparative Study . . . . . . . . . . . . . . . . . . . . . . 1029
Johanna Pirker, Isabel Lesjak, Mathias Parger, and Christian Gütl
Human Interaction Lab: All-Encompassing Computing Applied
to Emotions in Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
Hector Fernand Gomez Alvarado, Judith Nunez-R, Luis Alberto Soria,
Roberto Jacome-G, Elena Malo-M, and Claudia Cartuche
Distance Learning System Application for Maritime Specialists
Preparing and Corresponding Challenges Analyzing . . . . . . . . . . . . . . . . 1050
Vladlen Shapo
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
Internet of Things
Cloud-Based Industrial Control Services
The Next Generation PLC

Reinhard Langmann (1) and Michael Stiller (2)

(1) Hochschule Duesseldorf University of Applied Sciences, Duesseldorf, Germany
(2) Fraunhofer Institute for Embedded Systems and Communication Technologies ESK, Munich, Germany

Abstract. The paper presents the concept and implementation of Cloud-based
Industrial Control Services (CICS) as a next-generation PLC. As a distributed
service-oriented control system in the cloud, a CICS controller can replace the
traditional PLC for applications with non-critical timing in terms of Industry 4.0.
The CICS services are programmed to industry standards, pursuant to standard
IEC 61131-3, and executed in a CICS runtime in the cloud. This paper gives an
overview of the concept and implementation, discusses the results of application
examples, and evaluates the operability of a CICS controller.

Keywords: Control service · Cloud-based controller · Web-based control system ·
Automation system

1 Introduction

Industrial controls and, in particular, PLC controllers currently form an important technological
basis for the automation of industrial processes. Even in the age of Industry 4.0
(I40) and the Industrial Internet, it can be assumed that these controllers will continue to be
required to a considerable extent for the production of tomorrow. However, the controllers
must fulfil a range of additional requirements resulting from the new production conditions.
When Industry 4.0 principles [1] are applied, the result is highly networked production
systems based on Cyber-Physical Systems (CPS), also referred to as Cyber-Physical
Production Systems (CPPS). A series of I40 requirements are placed on the future controllers used
in these systems. Current PLC controllers cannot yet fulfil the majority of these requirements,
or can do so only on a rudimentary basis or at extremely high expense.
Basic requirements of future, I40-capable PLC controllers are efficient networking
in an, at least partially, global network and the ability to provide control functions as
control services in this network. Here the IP network (IP – Internet Protocol) functions as
the global network, in the form of an Intranet or the Internet, with all associated standardised
Information and Communication (IC) technologies. Only in this way can the required
integration into a future I40 production landscape be achieved.

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_1

The paper describes the concept and a prototype implementation of this new type of
PLC controller, in which the controller functions (programs) are implemented as control
services in a cloud. The programming of this new PLC takes place, as is usual in industry,
pursuant to the standard IEC 61131-3. The R&D results described in the paper have been
developed since 2014 as part of the R&D project “Potential, structure and interfaces of
cloud-based industrial control services (CICS)”.

2 State-of-the-Art

Resulting from the historical development of PLC controllers, these have evolved as
proprietary device systems that are operated locally under real-time conditions.
If networking of these controllers is necessary from a user viewpoint, proprietary TCP/IP
protocols or those standardised in the automation sector (Modbus TCP, Profinet etc.)
are used. The standard technologies widespread on the Internet and Web have
so far hardly played any role for PLC controllers.
For a number of years, however, a transformation has been under way, with PLC
manufacturers increasingly integrating IC technologies from the web world into their
systems, such as web servers and HTML pages for diagnosis and configuration, in order
to adapt the controllers incrementally to the new requirements.
Essentially, four different approaches to making PLC controllers I40-compatible can
be identified in state-of-the-art technologies. Generally, all work assumes that the
creation of the control programs must take place in accordance with the standard
IEC 61131-3, i.e. controllers from the cloud are currently only accepted in industry
if the engineering also follows the industry standard.

2.1 Introduction of Basic Web Technologies

Most of the newer PLC controllers already contain a web server as well as special HTML
pages built into the device, enabling browser-based configuration and diagnosis of the
controller. Process data or program variables from the control program can also be
read, and sometimes also written, with restrictions. Access via a web browser occurs
through the HTTP protocol, which is request-based and therefore relatively slow. Examples
of this can be found in [2]. The solutions are proprietary and adapted to the relevant
controller.

2.2 Global Networking of Process Data

For the integration of PLC controllers into supervisory, management and coordination
systems (e.g. SCADA or MES systems), which are partly based on web technologies,
additional modules are integrated into the PLC controllers, enabling bidirectional
and event-based process data transmission between the controller and the supervisory
and management system. This includes solutions such as, for example, the use of Java applets
on websites for access to Siemens controllers [3], the web connector with MQTT broker
in Bosch-Rexroth controllers [4], or browser-based access to controllers that already
contain an OPC UA server [5].
These solutions also involve proprietary and closed control-integrated modules.
Although the modules utilise web technologies, they cannot be transferred to other
controllers. The I40 requirements can only partly be fulfilled, with sometimes high
adaptation expense for integration into a CPPS.

2.3 Introduction of Service Principles

Based on the I40 requirement for the service capability of an I40 controller, some
projects [6, 7] deal with the integration of service functions into PLC controllers.
Thus, the Devices Profile for Web Services (DPWS) enables, as a standardised protocol,
service-based access to PLC controllers [8], including the reading and writing of process data.
The internal functional system of a PLC must be equipped correspondingly for this
purpose, which requires the support of the controller manufacturer.
An option for implementing DPWS independently of the manufacturer of
a PLC is shown in [9]. A service server is implemented there as a functional module
based on the standard IEC 61131-3 programming languages. This can then be used in
the control programming.
However, the DPWS solutions have one basic disadvantage: instead of reducing or
removing the information encapsulation (an I40 requirement), additional functionalities
(service functions) are encapsulated in the controller. Moreover, DPWS uses the very
heavy-weight and complex Microsoft web service protocols. The attainable transmission
times of process data via a global network therefore tend to be in the upper range.

2.4 Virtualisation of PLCs

Current R&D work deals with the virtualisation of complete PLC controllers and their
outsourcing into a cloud. A scalable control platform for cyber-physical systems in
industrial production is researched and realised in [10]. Such a control platform is
intended to provide scalable computing power that is automatically made available
depending on the complexity of the algorithms. The strict requirements of production
technology, such as real-time capability, availability and security, should be met.
In [11], a cloud-based controller is presented, which also uses a virtual control system
in an IaaS cloud. The work of [12] likewise uses virtualised PLC controllers in the cloud and
connects these to OPC UA-based automation devices using web technologies. Problems
with the virtualisation of PLCs result especially from the fact that already available,
manufacturer-specific PLCs are virtualised. These controllers, however, are closed
systems, which were originally not developed with web technologies in mind.
Adjustments, modifications or extensions of these controllers by third parties are
hardly possible. Functionality cannot be broken out as services. The flexibility of
virtualisation is very limited.
In summary, it can be stated that there are different solutions and efforts to equip
PLC controllers with additional functions in order to be able to use the controllers in an
Industry 4.0-type IP network. To this end, the known work already uses web technologies
in part, in a manufacturer-specific and/or limited way, and increasingly also uses the service
principle as well as cloud structures as a new paradigm for the realisation of control
functions. However, several deficits remain, which accordingly require additional work.

3 Concept

3.1 Control Classification

To assess the I40 capabilities of a PLC controller, a classification is introduced which
characterises an industrial controller according to two properties: Service Ability (SA) and
Control Locality (CL). A controller's class is denoted as Class C
(Controller) = <SA><CL>. With the proposed methodology, I40 control classes can
be defined and structural configurations for Cloud-based Industrial Control Services
(CICS) can be indicated (Table 1).

Table 1. I40 capability of a controller

Class 0 – Service ability: no services. Control locality: all control programs are encapsulated locally in the physical (hardware) system.
Class 1 – Service ability: services only for non-critical and overarching functionalities. Control locality: some control programs that include non-critical and overarching functionalities are not located on the local hardware but are instead distributed to other systems (for example, in the network).
Class 2 – Service ability: services for most functions available. Control locality: most control programs are distributed in the network; control programs which are critical in terms of time and safety remain in the local hardware.
Class 3 – Service ability: all control programs as services. Control locality: all control programs are distributed in the network; third instances can access all the control algorithms in real time and change them.

Looking at a PLC as a CPS component, the traditional IEC 61131 control program
(CP) can be divided into three parts:
• a basic functional program part (CP basic – CPb),
• a program part which performs superior, administrative and/or user interface functions (CP supervisory – CPs),
• a program part which is critical regarding real-time and security (CP critical – CPc).
In order to evaluate the I40 capabilities of a CICS controller, this three-part structuring
of the control program is used, among other things. Figure 1 shows the structure of such
a PLC.

Fig. 1. Structure of a PLC as a CPS component

If the control system, as shown in Fig. 1, is used as the basis and modified as a result
of the increasing displacement of the control programs into a cloud as services, it leads
to the evolution of a PLC as CPS component “industrial control”, as shown in Fig. 2.

Fig. 2. Evolution of an industrial controller (PLC) as a CPS component

Three types of CPS components (Fig. 2) result from the aforementioned
decomposition of the PLC program into three parts:
(a) The controller only implements the program components CPb and CPc. The traditional
runtime environment of a PLC is still required.
(b) For safety reasons, only the CPc program parts are implemented in the CPS component.
The classic and manufacturer-specific PLC runtime machine is no longer
required. The implementation of the CPc could also be carried out with specific
embedded program parts (e.g. in C).
(c) The CPS component no longer contains a control part, but only sensors and actuators.
All control programs are distributed in the network.
Service ability considers the ability of a controller to utilise control functionalities (control
program parts) as services in the sense of cloud computing. According to Table 1, the
program parts CPb, CPs and CPc can be distributed differently. In class C11, for example,
the non-critical and overarching functionalities (CPs) are not located on the local control
platform, but distributed on other systems in the network (this corresponds to a traditional,
distributed control system). However, part of the CPs could also be used as a service from
a cloud.
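The class notation can be illustrated with a small data sketch in JavaScript (the language used for the CICS prototypes, see Sect. 4). The object and function names below are purely illustrative assumptions, not part of the CICS specification:

```javascript
// Illustrative sketch of the C = <SA><CL> classification from Table 1.
// Service Ability (SA) and Control Locality (CL) each range over 0..3.
const SERVICE_ABILITY = {
  0: "no services",
  1: "services only for non-critical and overarching functionalities",
  2: "services for most functions available",
  3: "all control programs as services",
};

// Build the class label, e.g. classOf(1, 1) gives the class C11
// discussed in the text.
function classOf(sa, cl) {
  if (![0, 1, 2, 3].includes(sa) || ![0, 1, 2, 3].includes(cl)) {
    throw new RangeError("SA and CL must be in 0..3");
  }
  return `C${sa}${cl}`;
}
```

A class label such as C11 then reads as: SA level 1 combined with CL level 1.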

3.2 CICS Basis Model

A CICS base model must take into account both the aspects of control engineering and
the features of web technology.
From a control engineering point of view, a CICS controller based on a traditional PLC
consists of the following components (Fig. 3):
• CICS program (CICS-P): IEC 61131-3 control program in the PLCopen XML notation.
It includes only the program and the variables, but no controller configuration.
• CICS runtime (CICS-RT): Execution environment for the CICS program. It can be
cycle-controlled or event-based.
• CICS router (CICS-R): Device configuration for a CICS controller, i.e. it determines
which CPS components (which automation devices) are connected to the controller.
Fig. 3. General structure of a CICS controller

By separating CICS-P, CICS-RT and CICS-R in a CICS controller, and through the
in-principle arbitrary distribution of the individual components in an IP network utilising cloud
technologies, both a change in the control program code (control algorithm) and
a change in the device configuration (for example, the replacement of modules and
Plug & Work in real time) are possible. CICS-R and CICS-P can be exchanged on-the-fly
during a program cycle.
For the identification of a viable CICS basic model, it is also necessary to show
possible solution variants for CICS, starting with the basic principles on the web, and
then to reflect these in the available web technologies.
In principle, two types of network computers are available in the Internet (Web) as
a world-wide computer network:
• Server computers that can provide and run IT entities (objects, services, programs).
• Client computers that can only run IT entities.
As a working principle on this server/client computer network, the client-server
principle applies, i.e. a client must first submit a request to run an IT entity on the server. This
means that application-level IT entities on the server cannot act on their own (self-acting).
The client is usually a web browser on the client computer.

If one considers any application-specific functional system implemented
with web technology tools, the models shown in Fig. 4 result for the execution (RUN)
of this system:
(a) The functional system is only stored on the server and is only executed there. The
execution of the system in the server is started by the client (server mode).
(b) The functional system is stored on the server and is loaded into the client via a
request. The system is only executed in the client (client mode).
(c) The functional system is stored on the server and components of the functional
system are also executed on the server. Other components are loaded into the client
via a request and these components are also executed there. The execution of the
system components in the server is started by the client (mixed mode).

Fig. 4. Models for executing a web technology based functional system (block with black
font = only saved, block with white font = is being executed)

In all three cases, the functional system can be distributed over several servers (cloud)
or even several clients. If the control-technology-based CICS structure shown in Fig. 3
is mapped onto the web-based functional systems seen in Fig. 4, four server- and client-based
CICS base models are obtained:
1. Server Mode (SM): The CICS runtime is statically linked to a fixed CPS component
in a configuration process. After the CICS control has been started via the client, the
CICS controller automatically connects to the associated CPS component via the IP
network and executes the control program.
2. Server-based Mixed Mode (SMM): Before starting the CICS runtime, a CICS router
is loaded from the server to the client. After the CICS runtime is started, this router
dynamically connects the CPS component with the CICS runtime on the server. The
process data from the automation device are now routed to the server via the client.
3. Client Mode (CM): CICS runtime and CICS routers are executed as an instance on
the client (Web browser). The client is an inherent part of the CICS control system
and is necessarily required for executing the control program. The server is no longer
required at the runtime.

4. Client-based Mixed Mode (CMM): The control program runs in the CICS runtime
on the client, but the communication to the CPS component runs over a dynamically
reconfigurable CICS router in the server.
Figures 5 and 6 illustrate the four basic models of a CICS control.

Fig. 5. Component structure and communication paths for server-based CICS solutions (1) –
Server Mode (SM); (2) – Server-based Mixed Mode (SMM)

Fig. 6. Component structure and communication paths for client-based CICS solutions (1) –
Client-based Mixed Mode (CMM); (2) – Client Mode (CM)

3.3 Control Services

The controller features of a CICS controller should no longer be available as classic
control functions, but rather as control services according to the service paradigm. A
CICS (literally: Cloud-based Industrial Control Service) can thereby use all the features
of cloud computing and thus enable new business models, such as the rental of control
functions.
In terms of information technology, CICS have to be produced, parametrised, distributed,
stored and recalled as objects by means of methods. Since the components of a
CICS controller are no longer available as hardware, but only as software objects in the IT

or Web world, and are generally also stored there in databases, it makes sense to use data
models for the modelling of the CICS service architecture. Figure 7 shows the CICS
structure from Fig. 3 as an Entity Relationship Diagram (ERD).

Fig. 7. CICS services of a CICS controller according to Fig. 3, presented as an ERD

According to Fig. 7, a CICS controller is realised with two services: a runtime service
and a router service. Both CICS services are built according to the principle of web-oriented
automation services [13].

CICS-RT: Runtime Service

Corresponding to the state machine in a traditional PLC, the CICS-RT, as the most
important component, also has a defined sequence behaviour in order to execute a control
program.
A CICS-RT can be operated in cyclic mode or event-based mode. In cyclic mode,
the I/O image is updated, equivalent to a traditional PLC. In event-based operation, the
control program is executed only when the value of an input variable changes or an
internal event occurs (for example, the expiry of a timer).
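The cyclic mode corresponds to the classic PLC scan cycle and can be sketched as follows. This is an illustrative JavaScript sketch only; the object and method names (CyclicRuntime, readInputs, writeOutputs) are assumptions, not the actual CICS-RT API:

```javascript
// Minimal sketch of a cyclic runtime: read inputs into the process
// image, execute the control program, write the outputs back.
class CyclicRuntime {
  constructor(program, io) {
    this.program = program; // function(image): mutates output variables
    this.io = io;           // { readInputs(), writeOutputs(image) }
    this.image = {};        // process image (copy of the I/O state)
  }
  scan() {
    Object.assign(this.image, this.io.readInputs()); // update I/O image
    this.program(this.image);                        // run control logic
    this.io.writeOutputs(this.image);                // write back results
  }
}
```

An event-based variant would instead invoke program() from a change listener on the input variables rather than in a fixed cycle.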

CICS-R: Router Service

The I/O configuration service of a CICS control system (CICS-R) is separated from the
CICS runtime for the following reasons:
• Securing a dynamic reconfiguration, i.e. in the case of an identical control program,
the I/O configuration can be changed within a program cycle.
• Identical machines/systems can be operated with the same control program despite
different I/O modules.
• A distributed, separate configuration service forms the basis for a future automatic
IIoT-based device configuration.
CICS routing works according to the following two principles:
• A CICS program (PLCopen XML program) works with absolute I/O addresses.
• The CICS router connects the absolute I/O addresses to the real I/O addresses of the
devices (CPS components).
Figure 8 illustrates the functionality of a CICS router.

IIoT – Industrial Internet of Things.

Fig. 8. CICS router

The digital and analogue inputs and outputs of a device, connected via the channel
interface, are routed to absolute I/O program addresses and transferred to the CICS-RT
via a CICS block channel. The routing rules (interconnection matrix) are defined via a
CICS-R XML file. The I/O process data is transmitted via the bidirectional CICS block
channel as a string between the CICS router and the CICS runtime.
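The two routing principles above can be sketched as a simple lookup applied per address. The interconnection matrix shown here is a made-up example of what might be parsed from a CICS-R XML file; the actual file format is not specified in this paper:

```javascript
// Hypothetical interconnection matrix, as it might result from parsing
// a CICS-R XML file: absolute program I/O address -> real device channel.
const routingMatrix = {
  "%IX0.0": { device: "station1", channel: "DI3" }, // digital input
  "%QX0.0": { device: "station1", channel: "DO1" }, // digital output
};

// Route one absolute program address to its device channel.
function route(programAddress) {
  const target = routingMatrix[programAddress];
  if (!target) throw new Error(`unrouted address: ${programAddress}`);
  return target;
}
```

Exchanging the matrix at the start of a program cycle is what makes the dynamic reconfiguration described above possible without touching the control program.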

4 Implementation

Within the context of the CICS project, two prototype implementations of a CICS
controller were created:
• CICS controller in the Client Mode (CM – Fig. 6-2)
• CICS controller in the Server-based Mixed Mode (SMM – Fig. 5-2)

4.1 CICS Control in the Client Mode

In the case of the CM solution, the CICS controller is implemented entirely as an instance
on the client (web browser). Process data communication with the automation devices
takes place only between the client and the devices. Therefore, the CICS controller can also
be operated in a separate, local (private) network.
From the perspective of technical implementation, the CICS CM controller is a very
compact solution. In this case, CICS-RT and CICS-R can be implemented as a common
service and run in the client (web browser). However, the operation of the CICS runtime
depends on the type and performance of the client computer, and its limits have to be
taken into account. Figure 9 illustrates a CICS controller in the client mode.

Fig. 9. CICS control in the Client Mode (CM)

The CICS controller is implemented as a JavaScript object, which is loaded onto the
client from a cloud (CICS Cloud). Communication between the CICS controller instance
and the automation devices takes place via a universal gateway as a web connector. The
gateway is implemented in an embedded system ( of Wiesemann & Theis) as
a Device Gateway for Modbus TCP and for a proprietary TCP protocol. WebSocket is
used as a web protocol [14].
The control programs for the CM prototype were created with the industry-standard
programming system PC WORX (Phoenix Contact) in the language IL and exported as
PLCopen XML programs for execution in the CICS runtime. The execution is performed
via a JavaScript-based IL interpreter.
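The interpreter approach can be hinted at with a toy fragment supporting only three IL instructions (LD, AND, ST) on boolean variables. The real interpreter executes complete PLCopen XML exports, so the function and its scope here are purely illustrative:

```javascript
// Toy IL interpreter: each line is "OPCODE operand"; CR (current
// result) is the IL accumulator register.
function runIL(source, vars) {
  let cr = false; // current result
  for (const line of source.trim().split("\n")) {
    const [op, operand] = line.trim().split(/\s+/);
    switch (op) {
      case "LD":  cr = Boolean(vars[operand]); break;       // load operand into CR
      case "AND": cr = cr && Boolean(vars[operand]); break; // AND operand with CR
      case "ST":  vars[operand] = cr; break;                // store CR to operand
      default: throw new Error(`unsupported opcode: ${op}`);
    }
  }
  return vars;
}
```

For example, runIL("LD start\nAND sensor\nST motor", vars) sets motor when both start and sensor are set.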

4.2 CICS Control in the Server-Based Mixed Mode

In the case of the SMM solution, the CICS runtime is executed in the server (cloud) as
an instance and the CICS router in the client (browser) as an instance (Fig. 10).

Fig. 10. CICS control in the Server-based Mixed Mode (SMM)

Here, as well, a direct process data communication takes place only between the
client and devices. Between CICS-R and CICS-RT, there is a special bidirectional CICS

block channel for the transmission of I/O images. The process data are transmitted by
this channel as Strings over WebSocket. The CICS runtime is operated via an HMI proxy
on the client. In terms of technical implementation, the CICS-SMM controller is a
distributed, more elaborate solution. However, the process data connection to the devices
can still be performed locally, and the CICS-RT can nevertheless use the full performance
of the server. Dynamic re-configuration is easily possible.
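As an illustration of the string transmission of I/O images over the block channel, the following sketch serialises an image to a compact key=value string and back. The format shown is an assumption for illustration, not the actual CICS wire format:

```javascript
// Hypothetical serialisation of an I/O image for the CICS block
// channel: "name=value;name=value" in both directions over WebSocket.
function encodeImage(image) {
  return Object.entries(image)
    .map(([k, v]) => `${k}=${Number(v)}`) // booleans become 0/1
    .join(";");
}

function decodeImage(str) {
  const image = {};
  for (const pair of str.split(";")) {
    const [k, v] = pair.split("=");
    image[k] = Number(v);
  }
  return image;
}
```

Such a string can be sent as a single WebSocket text frame per transfer between CICS-R and CICS-RT.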

5 Application

The CICS components (CICS-RT and CICS-R) are constructed as services according
to the WOAS principle [15] and are instantiated as JavaScript objects using uniform and
consistent methods. They can therefore be used in an available IoT platform or as stand-alone
web pages. The freely available IIoT platform WOAS was used for the following
application examples ( The applications therefore did not have to
be programmed, but instead were configured in the IIoT platform in a browser-based
EDIT mode with little effort.
An application example for a CICS-CM and one for a CICS-SMM are described hereafter.

5.1 Application of a CICS-CM Controller

The CICS-CM was successfully tested at a processing and testing station and presented
at the SPS/IPC/Drives automation fair in Nuremberg (Germany) in 2015 (Fig. 11).

Fig. 11. Application example of a CICS controller in client mode (CM); (a) processing and test
station; (b) I/O modules, IoT gateway and switch; (c) HMI in the web browser for operating/
visualisation of the CICS-CM

The PLC program works like a cyclical state machine that waits for the presence of
a workpiece in position 1 of the table, performs some actions in the other positions, then
returns to position 1 and repeats the process. The dedicated PLC program has around
250 lines and uses about 60 variables. Some of the advantages of the CM solution are:
• The server is no longer required at runtime. The server/cloud only serves the
purpose of storing the CICS services, including the control programs. Faults or
failures of the server have no effect on the CICS control system at runtime.
• The process data communication between the devices and the CICS controller can
be limited to the local network when the client is also located on this network.
• Depending on the performance class of the client, several CICS control instances can
run on a client and thus control different CPS components (devices).
• The quality and reliability of communication between the CICS controller and
connected devices can be monitored by the client.
• Generally, any client (PC, tablet, smartphone) can function as a CICS controller.

5.2 Application Examples of a CICS-SMM Controller

The CICS-SMM controller has been extensively tested in various application examples.
In one application example, a CICS-SMM controller controls a working cell consisting
of two processing and test stations (Fig. 11a) and a loading robot. Figure 12 shows the
technological structure for this example.

Fig. 12. Technological structure of a CICS-SMM application example

Two CICS controller instances control the two processing and test stations. Another
CICS instance is responsible for coordinating the robot with the two stations. As
in the test example for the CICS-CM, the connection of the respective CICS router
instances to the devices of the two stations takes place via a universal gateway acting as a web
connector. The Modbus TCP interface of the robot is connected to the Internet via

WebSocket using a device gateway, realised by means of Node-RED [16]. The

application was successfully presented at SPS/IPC/Drives 2016.

6 Evaluation

6.1 Realtime Characteristics

A CICS control system uses IP networks for data transmission, regardless of the solution
variant. From the perspective of an automation technician, these networks are a priori
neither reliable nor deterministic and are not under the control of the respective
technical automation solution. Extensive time measurements for different communication
structures were therefore performed for both CICS prototypes.
A practice-oriented method was chosen for the time measurements, which allows
direct statements about the reaction time of a CICS controller [17]. With respect to the
real-time capability, the following general statements can thus be made:
• With a CICS CM solution, response times of about 80–120 ms with a probability of
95% can be achieved over a standard Internet connection.
• With a CICS SMM solution, the reaction times, likewise with a probability of 95%,
are about 100 ms.
If the CICS controller is operated only in the Intranet, response times of under 40 ms
can certainly be achieved. Altogether, the statement can be made that technical processes
with process times of >150 ms (simple assembly process, temperature and mixing
processes, climate and energy processes, etc.) can already be performed successfully
from the cloud by means of a CICS.
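The 95% statements correspond to a percentile over many measured reaction times. A simple nearest-rank percentile helper illustrates how such an evaluation can be computed; this is illustrative only and is not the measurement method of [17]:

```javascript
// Compute the p-th percentile (0 < p <= 100) of measured reaction
// times in ms, using the nearest-rank method on the sorted sample.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest rank, 1-based
  return sorted[Math.max(rank - 1, 0)];
}
```

A statement such as "100 ms at 95% probability" then means percentile(measuredTimes, 95) is about 100 ms.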

6.2 Operability

The operability of a CICS control system is understood to mean the characteristics which
are important for a conventional PLC: reliability, data security and machine safety. In
this regard, new challenges arise that need to be studied and solved for the practical use
of CICS control systems.
To investigate reliability, the two realised prototypes were examined in more detail
by way of example. Here, monitoring the network quality, integrating a ping-pong channel,
and a local operation mode for the process data transfer to the device played a role.
Although reliability problems can be recognised with the described methods, they
cannot be eliminated. However, the CICS controller can at least be brought into a safe
state in problem situations.

7 Summary

Using the CICS concept, a new type of industrial control was developed and tested that
allows the complete detachment of the control function from the associated equipment into
globally distributed, cloud-based software control services. A CICS control is operated
with a classic IEC 61131-3 control program, thus ensuring the interoperability and
industrial compatibility of the control system. The application of the service paradigm to
industrial control functions significantly increases flexibility, meets Industry 4.0 requirements
such as changeability, reconfigurability and autonomy, and enables new business
models such as the leasing of automation functions.
For testing and evaluation, prototypical implementations were deployed for a purely
client-based CICS controller and for a mixed client/server-based CICS controller. Both
CICS controller types were successfully tested in the context of application scenarios
from production automation. An evaluation of the CICS applications showed that simple
technical processes with process times of greater than 150 ms can already be controlled
reliably over the standard Internet.
With CICS, previous hardware-oriented and centralised procedures for the control
of automated devices, machines and systems (e.g. PLC controllers) can, for non-critical
real-time conditions (e.g. environmental processes, logistics processes, energy processes,
simple assembly processes), be replaced by transparently usable, IP-network-distributed
software functions.

Acknowledgments. The IGF project CICS (18354N) of the Forschungsvereinigung
Elektrotechnik beim ZVEI e.V. – FE, Lyoner Str. 9, D-60528 Frankfurt am Main is funded via
the AiF within the framework of the programme for funding industrial community research and
development (IGF) of the Federal Ministry of Economics and Technology, based on a resolution
of the German parliament.


References

1. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic
initiative INDUSTRIE 4.0. Research Union: Business – Science, April 2013
2. Evans, Z.: Web Server Technology in Automation. Siemens Industry (2011)
3. Klindt, C.J., Baker, R.B.: Interface to a programmable logic controller. US 6853867 B1,
8 February 2005
4. Bosch-Rexroth: WebConnector: Connect simply automation and web environment (in
German). Product News, p. 50 (2015)
5. Beckhoff Automation GmbH: From sensor to IT Enterprise - Big Data & analytics in the
cloud (2015).
6. EU project: SOCRADES 2006–2009.
7. Colombo, A.W., et al.: Industrial Cloud-Based Cyber-Physical Systems. Springer,
Switzerland (2014)
8. Microsoft: Introducing DPWS (2015).
9. Mathes, M., et al.: SOAP4PLC: web services for programmable logic controllers. In: 17th
Euromicro International Conference on Parallel, Distributed and Network-Based Processing,
Proceedings, pp. 210–219 (2009)
10. ISW of the University Stuttgart: Industrial cloud-based control platform for the production
with Cyber-Physical Systems (piCASSO - in German). BMBF project, Stuttgart (2013).

11. Grischan, E.: Cloud-Based Automation (in German). atp edn., 3/2015, pp. 28–32
12. Schmitt, J.: Cloud-Enabled Automation Systems using OPC UA. atp edn., 7–8/2014,
pp. 34–40
13. Competence Center Automation Düsseldorf (CCAD): WOAD – Interface description (in
German). – R&D document, Düsseldorf, 09 April 2014
14. Braß, M.: Development and test of an Industry 4.0 gateway for the Internet access to Modbus
TCP and proprietary TCP protocols. (in German). – Documentation of project work, CCAD,
25 January 2016
15. Langmann, R., Meyer, L.: Architecture of a web-oriented automation system. In: 18th IEEE
International Conference on Emerging Technologies & Factory Automation, ETFA 2013,
10–13 September 2013, Cagliari, Italine, Proceedings
16. Node-RED (2016).
17. Langmann, R.: An interface for CPS-based automation devices (in German). In: Proceedings
of AALE 2014, pp. 133–142, DIV-Verlag, Munich (2014)
Wireless Development Boards to Connect
the World

Pedro Plaza1, Elio Sancristobal2, German Carro2, Manuel Castro2,
and Elena Ruiz2

1 Plaza Robótica, Torrejón de Ardoz, Spain
2 UNED, Madrid, Spain

Abstract. Nowadays, wireless applications are widely extended in the scientific,
education and hobbyist communities. The aim of this paper is to provide a review of
some of the most popular boards which offer an easy way to develop a wide range of
applications related to STEM (Science, Technology, Engineering and Mathematics) in
an educational manner. Moreover, the scope is focused on those development boards
which allow wireless communications in order to build Things which can be integrated
into an Internet of Things environment. The Arduino WiFi Shield, Arduino Yún Shield,
Arduino MKR1000, NodeMCU ESP8266 and Onion Omega have been analyzed, compared and
discussed. The analysis has been carried out considering the built-in hardware, the
programming interface, the connection possibilities and the developer community
behind each board.

Keywords: IoT · Wireless · Robotics · Education · STEM

1 Introduction

The aim of this paper is to present some of the current development boards which can
be used to deploy IoT (Internet of Things) applications within an STEM (Science,
Technology, Engineering and Mathematics) educational environment.
Nowadays, there is a wide range of development boards which can be classified in
several ways. According to [1], the development platforms can be categorized into
four groups:
• based on microcontrollers,
• based on microprocessors,
• based on FPGAs (Field Programmable Gate Arrays), and
• hybrid development platforms.
The IoT (Internet of Things) movement has had an impact on traditional development
boards. There are lots of IoT-based applications being developed in many different
fields [2]. Some examples are [3] in emergency medical services, [4] in cloud
computing and [5] in remote educational laboratories.

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_2

The Arduino WiFi Shield provides the Arduino board with a wireless Internet
connection [6]. It cannot work in stand-alone mode. Hence, this board requires a
microcontroller to interact with other elements.
The Yún Shield easily brings the Yún features to Arduino and Genuino boards. It is
a good choice for IoT projects which use a wireless connection to access the
Internet [6]. The Genuino MKR1000 is a powerful board that combines the functionality
of the Zero and the WiFi Shield. It is the ideal solution for makers wanting to
design IoT projects with minimal previous experience in networking [6].
The NodeMCU ESP8266 is an open source firmware and development kit that helps to
prototype IoT products within a few Lua script lines [7].
The Onion Omega is a Hardware development platform with built-in WiFi and a
full Linux Operating System [8].
There is a vast variety of development platforms which can be used as the core of
IoT applications. Some of them are cost-effective platforms such as the ones
mentioned in this paper. Furthermore, these platforms can easily be included in
robotic educational activities where proactive learning is empowered through
experiments in the real world.
This paper is divided into four sections. Section 2 presents the analyzed IoT
development boards. Section 3 compares all of them. The last section summarizes the
conclusions reached after the performed analysis.

2 Wireless Development Boards

2.1 Arduino and Genuino IoT Boards
Arduino is an Open Source Hardware and Software project. Additionally, Arduino is
supported by a user community that designs and manufactures devices and interactive
objects, and it is extended worldwide and growing day by day [6]. There are three
development boards provided by Arduino for IoT purposes: the Arduino WiFi Shield,
the Arduino Yún Shield and the Arduino MKR1000. Throughout this section, all of them
are analyzed. All of them are programmed using the Arduino Software IDE (Integrated
Development Environment). The Arduino language is based on C/C++. It links against
AVR Libc and allows the use of any of its functions. The Arduino IDE is produced for
the following operating systems: Windows, Mac OS and Linux.
Additionally, a portable IDE can be used for Windows and Linux. Every element of the
mentioned platforms in this section – Hardware, Software and documentation – is
freely available and open-source.
Arduino is the brand used for boards sold in the United States, while Genuino is the
sister brand for products sold outside the United States.
The Arduino WiFi Shield provides a wireless connection to Arduino boards. The
connection is established by following a few simple instructions in order to connect
Things through the Internet. This board presents the following characteristics:
• It is based on the HDG204 Wireless LAN (Local Area Network) 802.11b/g system.
• There is an onboard micro-SD card slot, which can be used to store files for serving
over the network.
• The board mechanical data are: Length: 63.2 mm and width: 53.5 mm.
• The Arduino WiFi Shield board cost is 69.00 € [6].
In the same way as the Arduino WiFi Shield, the Yún Shield extends the Arduino
board with the power of a Linux-based system which enables advanced network
connections and applications. This board presents the following characteristics:
• Yún Web Panel and the ‘‘YunFirstConfig’’ sketch can be used to connect through
WiFi or wired network (Ethernet) in a simple way.
• The Shield preferences and sketch uploading can be performed directly from the
attached Arduino/Genuino board.
• The board mechanical data are: Length: 68.6 mm and width: 53.3 mm.
• The Genuino Yún Shield board cost is 39.90 € [6].
The Genuino MKR1000 has been designed to offer a practical and cost-effective
solution for projects which require WiFi connectivity. This board presents the
following characteristics:
• It is based on the Atmel ATSAMW25 SoC (System on Chip).
• This processor is part of the SmartConnect family of Atmel Wireless devices.
SmartConnect family is specifically designed for IoT projects.
• The ATSAMW25 also includes a single 1×1 stream PCB (Printed Circuit Board) antenna.
• The board includes a Li-Po charging circuit which allows the use of a Li-Po battery
as external power. Additionally, a 5 V external power supply is allowed. Internally,
the MKR1000 switches automatically between both supply sources.
• The board mechanical data are: Length: 65.0 mm and width: 25.0 mm.
• The Genuino MKR1000 board cost is 31.99 € [6].
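The automatic switch-over between supply sources described above can be sketched as simple selection logic. This is only an illustration: the function name, the 3.0 V cut-off and the return labels are hypothetical, not taken from the MKR1000 documentation.

```python
def select_supply(vin_5v_present, lipo_voltage):
    """Illustrative supply selection: the external 5 V input wins when
    present, otherwise the Li-Po cell powers the board."""
    if vin_5v_present:
        return "external-5V"
    # A single Li-Po cell is nominally 3.7 V; treat anything above a
    # hypothetical 3.0 V cut-off as usable.
    if lipo_voltage is not None and lipo_voltage > 3.0:
        return "li-po"
    return "unpowered"
```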

2.2 NodeMCU ESP8266

NodeMCU is an open source IoT platform. The term NodeMCU refers to the firmware.
This board presents the following characteristics:
• The Lua scripting language is used for programming the board. It is based on the
eLua project.
• The Development Kit is based on ESP8266, integrates GPIO (General Purpose
Input Output), PWM, IIC, 1-Wire and ADC all in the same board.
• The board mechanical data are: Length: 38.0 mm and width: 25.0 mm.
• The NodeMCU ESP8266 board cost is 7.95 € [9].

2.3 Onion Omega

This board includes built-in WiFi, is Arduino-compatible and runs Linux inside.
This board presents the following characteristics:
• It allows hardware prototyping using familiar tools such as Git, pip and npm.
• High-level programming languages such as Python, Javascript and PHP can be used.
• The Onion Omega is fully integrated with the Onion Cloud with the aim of creating
Internet of Things applications.
• It is Open Source. The processor is the Qualcomm Atheros AR9331 SoC.
• The board mechanical data are: Length: 42.7 mm and width: 26.4 mm.
• The Onion Omega board cost is 19.99 $ [10]. Using [11] for currency conversion
from United States dollars to euros, the board cost is 17.94 €.
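The euro price quoted above implies an exchange rate of roughly 0.8975 €/$ at the access date. A minimal sketch of the conversion; the rate value is inferred from the paper's own figures, not an official quote:

```python
def usd_to_eur(usd, rate=0.8975):
    # Rate inferred from the paper's figures (17.94 EUR / 19.99 USD);
    # round to cents, as prices usually are.
    return round(usd * rate, 2)
```

For example, `usd_to_eur(19.99)` returns 17.94, matching the cost stated above.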

3 Discussion

Throughout the previous sections, five IoT development platforms have been analyzed
with the aim of knowing about the built-in hardware, the programming interface, the
connection possibilities and the developer community behind the corresponding board.
There are two types of Arduino/Genuino IoT boards: Shield boards and fully-integrated
boards. The Shield boards require an additional microcontroller in order to interact
with other elements such as sensors or actuators, which are widely used in robotic
education. On the other hand, NodeMCU and Onion Omega can be used in stand-alone
mode. Table 1 summarizes the microcontroller and processor for each board.

Table 1. IoT development board processing device.

IoT development board | Microcontroller | Processor
Arduino WiFi Shield | Atmel AT32UC3 | None
Genuino Yún Shield | None | Atheros AR9331
Genuino MKR1000 | SAMD21 Cortex-M0+ | None
NodeMCU ESP8266 | None | Tensilica Xtensa LX106
Onion Omega | None | Atheros AR9331 (Big-Endian MIPS)
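The shield-versus-stand-alone distinction, together with Table 1, can be captured in a small data model. The boolean field name is our own shorthand, not taken from the paper:

```python
# Which boards need an attached Arduino/Genuino host microcontroller
# (the shields) and which can run on their own.
boards = {
    "Arduino WiFi Shield": {"host_required": True},
    "Genuino Yún Shield": {"host_required": True},
    "Genuino MKR1000": {"host_required": False},
    "NodeMCU ESP8266": {"host_required": False},
    "Onion Omega": {"host_required": False},
}

standalone = sorted(name for name, b in boards.items() if not b["host_required"])
```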

With the aim of powering the boards, it is important to know what the voltage level
is for each one in order to adapt levels between the board and other connected
devices. Table 2 lists the IoT development boards and the voltages for the power
supply and the input and output port interfaces.
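Adapting a 5 V output to a 3.3 V-tolerant input is often done with a simple two-resistor divider. A sketch of the calculation; the resistor values below are illustrative:

```python
def divider_vout(vin, r1, r2):
    """Output of a resistive divider: Vout = Vin * R2 / (R1 + R2),
    with R1 between the source and the tap, and R2 from the tap to ground."""
    return vin * r2 / (r1 + r2)
```

With `vin=5.0`, `r1=1000` and `r2=2000` the output is about 3.33 V, close enough for a 3.3 V input.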
Another important characteristic for development is the available memory. The
presented boards include different kinds of memory resources. Table 3 compiles which
types of memory – volatile and non-volatile – are available and how much memory can
be used.
In common applications, IoT boards have to interface with other devices using a
wired connection. These communications are usually performed through a serial
interface. Furthermore, digital and analog ports are used in order to read sensor
values or interact with some kinds of actuators. Table 4 summarizes the serial and
port interfaces available for each IoT board.
Table 2. IoT development board power supply and port interface voltage level.

IoT development board | Power supply | Input/Output voltage
Arduino WiFi Shield | 5 V externally | No input/output port interface
Genuino Yún Shield | 3.3 V | No input/output port interface
Genuino MKR1000 | 5 V or Li-Po single cell, 3.7 V, 700 mAh minimum | 3.3 V
NodeMCU ESP8266 | 5 V from USB or 3.3 V from VIN | 3.3 V
Onion Omega | 5 V from USB or 3.3 V from VIN | 3.3 V

Table 3. IoT development board: volatile and non-volatile memory resources.

IoT development board | Volatile memory | Non-volatile memory
Arduino WiFi Shield | Internal SRAM: 64 KB | Internal flash: 512 KB; on-board micro-SD slot
Genuino Yún Shield | RAM: 64 MB DDR2 | Flash: 16 MB; on-board micro-SD slot
Genuino MKR1000 | Internal SRAM: 32 KB | Internal flash: 256 KB
NodeMCU ESP8266 | Internal RAM: 64 KB for instructions, 96 KB for data | QSPI flash: 512 KB to 4 MB
Onion Omega | RAM: 64 MB DDR2 | Flash: 16 MB

Table 4. IoT development board: serial and port interfaces.

IoT development board | Serial interfaces | Discrete ports
Arduino WiFi Shield | SPI, USB, ICSP | None
Genuino Yún Shield | SPI, USB, ICSP | None
Genuino MKR1000 | SPI, I2C and UART | Digital I/O pins: 8; PWM outputs: 12; Analog input pins: 7 (ADC 8/10/12 bit); Analog output pin: 1 (DAC 10 bit)
NodeMCU ESP8266 | SPI, I2C, I2S and UART | Digital I/O pins: 10 (usable for PWM, I2C, 1-wire); Analog input pin: 1 (ADC 10 bit)
Onion Omega | SPI, I2S and UART | Digital I/O pins: 18
In hardware and software development, the software required to program the board
and the language used are very important elements. Table 5 states the programming
tool for each IoT development board and the language used to program it.

Table 5. IoT development board: programming tool and language.

IoT development board | Programming tool | Programming language
Arduino WiFi Shield | Arduino IDE | Arduino language (based on C/C++)
Genuino Yún Shield | Arduino IDE, web interface | Arduino language (based on C/C++), Python
Genuino MKR1000 | Arduino IDE | Arduino language (based on C/C++)
NodeMCU ESP8266 | NodeMCU firmware, Arduino IDE | Lua-based scripting language, Arduino language (based on C/C++)
Onion Omega | Serial terminal, SSH terminal | Python, Javascript, PHP

Another important specification for any application is the size restriction. None of
the analyzed boards is very large: the largest one has a length of 68.6 mm and a
width of 53.3 mm – the Genuino Yún Shield – and the smallest one measures 38.0 mm in
length and 25.0 mm in width – the NodeMCU ESP8266. Table 6 compares the IoT
development board dimensions.

Table 6. IoT development board dimensions.

IoT development board | Length | Width
Arduino WiFi Shield | 63.2 mm | 53.5 mm
Genuino Yún Shield | 68.6 mm | 53.3 mm
Genuino MKR1000 | 65.0 mm | 25.0 mm
NodeMCU ESP8266 | 38.0 mm | 25.0 mm
Onion Omega | 42.7 mm | 26.4 mm

Moreover, the board cost is compared too. Depending on the project funding, cost is
an important aspect which should be borne in mind. As can be seen, none of the
boards is especially expensive: the most expensive board costs 69.00 € – the Arduino
WiFi Shield – and the cheapest one costs 7.95 € – the NodeMCU ESP8266. All of them
are affordable for the majority of projects. Table 7 specifies the cost of each IoT
development board.
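The cost comparison made above can be reproduced directly from the table data; a minimal sketch:

```python
# Board costs in euros, as listed in Table 7.
cost_eur = {
    "Arduino WiFi Shield": 69.00,
    "Genuino Yún Shield": 39.90,
    "Genuino MKR1000": 31.99,
    "NodeMCU ESP8266": 7.95,
    "Onion Omega": 17.94,
}

cheapest = min(cost_eur, key=cost_eur.get)
most_expensive = max(cost_eur, key=cost_eur.get)
```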
Furthermore, another important aspect is getting support during the development, and
communities are very useful for this. Traditionally, manufacturers provided a
telephone number or an e-mail address for this purpose. Nowadays, most manufacturers
include a forum with the aim of providing support to their customers. These forums
are built by the company's experts and the customers, and all of them form a
community around the corresponding board.
The Arduino WiFi Shield, Arduino Yún Shield and Arduino MKR1000 are supported by
the Arduino community [12]. The NodeMCU ESP8266 is supported by two communities: the
NodeMCU community [13] and the ESP8266 community [14]. The Onion Omega is supported
by the Onion community [15].

Table 7. IoT development board cost.

IoT development board | Cost
Arduino WiFi Shield | 69.00 €
Genuino Yún Shield | 39.90 €
Genuino MKR1000 | 31.99 €
NodeMCU ESP8266 | 7.95 €
Onion Omega | 17.94 €
Finally, when a development is started, it is important to get references about what
other people have made and how they are using the development boards. Arduino
projects are widely extended among the scientific community, professionals and
hobbyists. [16] presents a body area network for acquiring data related to body
position and some simple movements, based on a WiFi Shield stacked on an Arduino
ChipKIT MAX32. [17] describes a low-cost WiFi sensor network based on the ESP8266.
Monitoring heart pulse sensor data in the cloud with the ESP8266 is detailed in [18].
Using one of the described IoT development boards, mobile robots can be used as a
remote laboratory in order to teach computer science in a similar way to that
described in [19, 20], which can be used as a guide for using IoT as a TEL
(Technology Enhanced Learning) tool.

4 Conclusions

This work presents the results of the analysis of IoT development boards that can be
introduced easily in classrooms within a STEM context. Educational activities
performed using the mentioned platforms are also considered. Hence, recommendations
are included with the aim of easing the inclusion of one of them in a classroom.
Any of these IoT development boards can be used in classrooms or remotely in order
to provide an easy way of including robotics within a STEM context. Additionally,
they allow making homemade applications framed in a DIY (Do It Yourself) context.
The presented analysis is part of the state of the art of a doctoral thesis in which
a novel approach to a collaborative robotic educational tool is being developed: an
Open Hardware platform that can be used in classrooms with the aim of developing
educational programs related to robotics in a collaborative environment which
promotes innovation and motivation for students during the learning process [1]. The
platform which is being developed offers wireless connections such as Bluetooth and
WiFi as enhancements [2]. The wireless connection is provided by a WiFi development
board which is integrated as part of the collaborative robotic educational tool. The
doctoral thesis is being carried
out in the Engineering Industrial School of UNED (Spanish University for Distance
Education) and the Electrical and Computer Engineering Department (DIEEC).

Acknowledgment. The authors acknowledge the support provided by the Engineering Indus-
trial School of UNED, the Doctorate School of UNED, and the “Techno-Museum: Discovering
the ICTs for Humanity” (IEEE Foundation Grant #2011-118LMF).
And the partial support of the eMadrid project (Investigación y Desarrollo de Tecnologías
Educativas en la Comunidad de Madrid) - S2013/ICE-2715, IoT4SMEs project (Internet of
Things for European Small and Medium Enterprises), Erasmus+ Strategic Partnership nº
2016-1-IT01-KA202-005561), and PILAR project (Platform Integration of Laboratories based on
the Architecture of visiR), Erasmus+ Strategic Partnership nº 2016-1-ES01-KA203-025327.
And to the Education Innovation Project (PIE) of UNED, GID2016-17-1, “Prácticas remotas
de electrónica en la UNED, Europa y Latinoamérica con Visir - PR-VISIR”, from the Academic
and Quality Vicerectorate and the IUED (Instituto Universitario de Educación a Distancia) of the
UNED.

References
1. Plaza, P., Sancristobal, E., Fernandez, G., Castro, M., Pérez, C.: Collaborative robotic
educational tool based on programmable logic and Arduino. In: 2016 Technologies Applied
to Electronics Teaching (TAEE), Seville, pp. 1–8 (2016)
2. Merino, P.P., Ruiz, E.S., Fernandez, G.C., Gil, M.C.: A wireless robotic educational
platform approach. In: 2016 13th International Conference on Remote Engineering and
Virtual Instrumentation (REV), Madrid, pp. 145–152 (2016)
3. Xu, B., Xu, L.D., Cai, H., Xie, C., Hu, J., Bu, F.: Ubiquitous data accessing method in
IoT-Based information system for emergency medical services. IEEE Trans. Ind. Inform.
10(2), 1578–1586 (2014)
4. Nastic, S., Sehic, S., Vogler, M., Truong, H.-L., Dustdar, S.: PatRICIA – a novel
programming model for IoT applications on cloud platforms. In: Service-Oriented
Computing and Applications (SOCA)
5. Fernandez, G.C., Ruiz, E.S., Gil, M.C., Perez, F.M.: From RGB led laboratory to servomotor
control with websockets and IoT as educational tool. In: 2015 12th International Conference
on Remote Engineering and Virtual Instrumentation (REV), pp. 32–36, 25–27 February 2015
6. Arduino. Accessed 21 Nov 2016
7. NodeMcu ESP8266. Accessed 21 Nov 2016
8. Onion Omega. Accessed 21 Nov 2016
9. NodeMCU v2 - Lua based ESP8266. Accessed 21 Nov 2016
10. Onion Omega. Accessed 21 Nov 2016
11. Currency conversion. Accessed
21 Nov 2016
12. Arduino forum. Accessed 21 Nov 2016
13. NodeMCU forum. Accessed 21 Nov 2016
14. ESP8266 forum. Accessed 21 Nov 2016
15. Onion community. Accessed 21 Nov 2016
16. Orha, I., Oniga, S.: Study regarding the optimal sensors placement on the body for human
activity recognition. In: 2014 IEEE 20th International Symposium for Design and
Technology in Electronic Packaging (SIITME), Bucharest, pp. 203–206 (2014)
17. Thaker, T.: ESP8266 based implementation of wireless sensor network with Linux based
web-server. In: 2016 Symposium on Colossal Data Analysis and Networking (CDAN),
Indore, pp. 1–5 (2016)
18. Škraba, A., Koložvari, A., Kofjač, D., Stojanović, R., Stanovov, V., Semenkin, E.:
Streaming pulse data to the cloud with bluetooth LE or NODEMCU ESP8266. In: 2016 5th
Mediterranean Conference on Embedded Computing (MECO), Bar, pp. 428–431 (2016)
19. Lopes, M., Gomes, I., Trindade, R., Silva, A., Lima, A.C.: Web environment for
programming and control of mobile robot in a remote laboratory. IEEE Trans. Learn.
Technol. PP(99), 1–1
20. Charlton, P., Avramides, K.: Knowledge construction in computer science and engineering
when learning through making. IEEE Trans. Learn. Technol. PP(99), 1–1
CHS-GA: An Approach for Cluster Head
Selection Using Genetic Algorithm for WBANs

Roopali Punj and Rakesh Kumar

Department of Computer Science and Engineering, NITTTR, Chandigarh, India

Abstract. Wireless Body Area Networks (WBANs), an advancing technology in the
field of pervasive healthcare, monitor patients ubiquitously and provide real-time
feedback. Data communication consumes more energy than data processing in WBANs. As
it is nearly impractical to replace or recharge dead sensor nodes, it has become a
major concern to overcome issues related to data communication in WBANs that affect
network lifetime and energy consumption. In this paper, we propose an efficient
algorithm for cluster head selection using genetic heuristics for enhancing network
lifetime and harnessing the energy consumption of the sensor nodes. It uses genetic
heuristics and divides the network into clusters. A cluster head is chosen for
inter- and intra-cluster communication. Clustering is a feasible solution as it
reduces the number of direct transmissions from source to sink. It enhances network
lifetime and reduces energy consumption, as there is an inverse relationship between
the two, i.e., the less the energy consumption, the longer the network lifetime. The
proposed algorithm is also analyzed mathematically in terms of time complexity,
overhead and fault tolerance, which reveals that our algorithm outperforms existing
techniques such as AnyBody and HIT in terms of energy efficiency and network
lifetime.

Keywords: Cluster head · Energy optimization · Genetic algorithm · Load balancing ·
WBANs

1 Introduction
The recent technological advancements have witnessed vast expansion of WSN
applications in many fields such as Power system applications, Disaster emer-
gency response, Healthcare applications, Air pollution monitoring, Structural
monitoring, Urban temperature monitoring, Precipitation monitoring, Water
pipeline monitoring, Ubiquitous geo-sensing, Commercial asset tracking, Urban
Internet and many more [1]. Sensor nodes are capable of sensing, processing
and transmitting physical, biological and environmental factors such as sound,
temperature and motion. WBANs, an extension of WSNs, consist of low-power,
intelligent, minute, lightweight sensor nodes to monitor the human body
functionalities, physiological parameters, physical activities and environmental
conditions of the patients. WBANs are useful in many health-related application

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_3
spheres such as battlefield, disaster healthcare, biomedical applications,
monitoring military troop movements and large scale behavioral studies [2]. Sensor nodes
can either be implanted on human body (in-body) or are wearable (on-body)
forming a network known as WBAN [3]. WBANs are capable of monitoring
physiological parameters such as ECG, EEG, EMG, body temperature, blood
pressure, heart rate, blood oxygen level, etc. Sensor nodes can process the sensed
data as well as transmit it to a remote server, but they have limited battery
lifetime, memory and storage. As every operation consumes the energy of the sensor
nodes, with communication consuming the most, they shed their energy soon. Harsh or
remote environmental conditions make it nearly impractical to revive or swap the
battery of sensor nodes over and again. Therefore, efficient energy consumption of
sensor nodes is a major concern in WBANs to augment network lifetime.
Clustering is a feasible solution for efficient communication and energy
consumption as it reduces the number of direct transmissions from source to sink.
Clustering is the process of categorizing similar data into disjoint classes, called
clusters, i.e., the data within a cluster are highly similar, while the objects in
different clusters are more dissimilar. It is an example of unsupervised
classification [4]. Clustering in WBANs can be represented mathematically by
considering a set of input sensor nodes N = {n1, n2, n3, ..., nt}. The aim of
clustering is to partition the input set of sensor nodes into disjoint subsets
C = {c1, c2, c3, ..., cm}, such that cj ≠ φ and cj ∩ ck = φ for j ≠ k. Clustering
helps in reducing collisions amongst cluster members, balancing load, finding a
feasible number of cluster heads, etc. [5]. Thus, the problem of clustering can be
considered an NP-Hard optimization problem [4]. Researchers have made certain
progress, but there is still space for optimization. The details are discussed in
the related work section.
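The partition conditions stated above (non-empty, pairwise disjoint clusters covering the whole node set) can be checked mechanically; a minimal sketch:

```python
def is_partition(nodes, clusters):
    """True when every cluster is non-empty, clusters are pairwise
    disjoint, and together they cover the whole node set."""
    seen = set()
    for c in clusters:
        if not c or seen & c:  # empty cluster, or overlap with an earlier one
            return False
        seen |= c
    return seen == set(nodes)
```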
In this paper, we have proposed an efficient cluster head selection algorithm
based on genetic heuristics. The objective of the proposed algorithm is to find
an optimal set of cluster heads in WBANs for optimizing energy consumption,
load balancing and enhancing network lifetime.
This paper is organized as follows. Section 2 describes the related work.
Section 3 explicates the proposed algorithm. Section 4 analyzes complexity of
the proposed algorithm and compares it with existing techniques. Finally, Sect. 5
concludes the paper.

2 Related Work

Many clustering protocols have been developed for WSNs in the recent past [6–8].
Low Energy Adaptive Clustering Hierarchy (LEACH) [6] is a single-hop clustering
algorithm. It reduces energy consumption, but the number of dead nodes increases
with the increasing number of nodes, which ultimately affects the network lifetime.
In [7], a cluster-based routing protocol, the Energy Aware Clustering Algorithm
(EADC), has been developed to solve the problem of imbalanced energy usage at the CH
by constructing equal-sized clusters, enhancing network lifetime. But an overhead
occurs in sending a large number of control messages.
In [8], the authors have proposed an Unequal Multi-Hop Balanced Immune Clustering
Protocol (UMBIC). It partitions the network into clusters and constructs optimum
cluster heads and a routing tree amongst them. But it works only for static sensor
nodes.
The algorithms proposed for WSNs are not suitable for WBANs as the latter have some
specific properties [3,9]. Therefore, some algorithms have been proposed for WBANs
[10–12]. In [11], the routing protocol AnyBody for body area networks has been
proposed. It is a self-organized multi-hop routing protocol and is better than the
previously proposed algorithm LEACH [6] in terms of a constant number of clusters
with an increasing number of nodes, but it does not take into account the residual
energy. In [10], the authors proposed a Hybrid Indirect Transmission (HIT) algorithm
for data gathering that makes use of two or more clusters and multiple multi-hop
indirect transmissions. The authors focused on energy consumption and network delay.
But the residual energy of the sensor nodes is not taken into consideration, which
increases the number of dead nodes per round. In [12], cluster-based epidemic
control through smartphone-based body area networks has been proposed. It is
efficient in densely populated areas and close social interaction zones. It is more
efficient than traditional epidemic control methods in dynamic data collection and
numerical assumptions about social interaction. But missing data poses a serious
threat to any data collection scheme.
The already existing algorithms do not consider the residual energy of the sensor
nodes, which ultimately affects the network lifetime and energy consumption, so
there is a need for an efficient cluster head selection algorithm that enhances the
network lifetime and minimizes energy consumption.

3 Cluster Head Selection Using Genetic Algorithm


In this section, we explain the prerequisites of our problem and describe the
proposed algorithm. In this paper, we propose an efficient cluster head selection
algorithm in WBANs for optimizing energy consumption, balancing load and enhancing
network lifetime. The process of cluster head selection is represented using the
properties of a Genetic Algorithm (GA). A GA follows the process of natural
evolution and evaluates the fitness of an individual. The fitness value depends upon
parameters specific to the application. It is a multi-objective optimization
criterion for (i) load balancing at the CH, (ii) reducing energy consumption and
(iii) enhancing network lifetime.
As it has been proved that cluster head selection is an NP-Hard problem [13], it
requires random and optimization techniques to select the CH. Thus, we choose GA for
cluster head selection because of its properties such as evolutionary behavior,
convergence and a global optimum solution. GA is based on a search procedure that
uses random choice to guide the search through a parameter space. GA mainly requires
the value of the objective function associated with the particular problem at hand.
GA is basically used for optimizing parameters to approach some global
optimal point. The basic GA is explained in Algorithm 1 [14]. In this paper, the
goal is to select the cluster head amongst the cluster members while guaranteeing
the optimization of its objective function based on sense radius and residual
energy. The genetic operators are application specific and can be modified
accordingly. The various steps of GA specific to cluster head selection in WBANs are
explained below.

Algorithm 1. Genetic Algorithm

1: Select a set of initial population P from population pool N
2: while (Stopping criteria not true) do
3:   Evaluate each individual i in P on the basis of the Fitness Function f
4: end while
5: for (Next Generation G) do
6:   Select the Parent Chromosomes p from P
7:   Apply Genetic Operators on p
8:   Evaluate Fitness of each individual j in G
9: end for
10: The best candidates of G form the New Generation P′
11: return P′
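Algorithm 1 can be sketched generically as follows. The concrete selection and mutation operators are application specific (the WBAN-specific ones are described later in this section), so this sketch keeps the fittest half of each generation and refills the rest with fresh samples as a stand-in mutation; all names and parameter values are illustrative.

```python
import random

def genetic_search(pool, fitness, n_pop=10, generations=20, seed=1):
    """Generic GA skeleton: evaluate, keep the fittest half (elitism),
    refill with new candidates, repeat; return the best individual."""
    rng = random.Random(seed)
    pop = rng.sample(pool, n_pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: n_pop // 2]
        children = [rng.choice(pool) for _ in parents]  # stand-in "mutation"
        pop = parents + children
    return max(pop, key=fitness)
```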

The block diagram of the proposed algorithm CHS-GA is shown in Fig. 1. The various
modules of the proposed algorithm are explained below. Also, a pseudo code for the
proposed algorithm CHS-GA is given in Algorithm 2.

a. Initialization: In the initialization block, CHS-GA randomly selects ten nodes in
the range 2 to √n from the initial population to explore the genetic diversity in
the search space. Each cluster head serves an equal number of cluster members to
achieve fairness and load balancing. Each sensor node has associated with it a sense
radius, Rs, and a residual energy, Re. A newly selected node must be within the
sensing range of the previous node so that the cluster is not widely dispersed, thus
saving transmission time. The residual energy of a sensor node must satisfy a
particular threshold, ETh, so that it can be a candidate for cluster head. The nodes
that satisfy the fitness function are selected as cluster members. Thus, as an
output, a cluster is obtained with 10 cluster members.
b. Fitness Function: The fitness function block is used to determine the quality
of the individuals obtained as output from the initialization block. Each node is
evaluated on the basis of the fitness function f = Σi Rsi · Rei,
subject to Rsi ≤ Rsi+1 and Rei ≥ ETh, where Re is the residual energy, Rs is
the sense radius and ETh is the threshold on residual energy. The best node
is elected as the cluster head. The selected cluster head advertises a message
to the cluster members and maintains a routing table for communication
purposes. As an output, a cluster head is obtained which communicates with
the cluster members using the routing table.
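The fitness evaluation described above can be sketched as follows, assuming each node is represented by a (sense radius, residual energy) pair; the node values and threshold are invented for illustration.

```python
def cluster_fitness(nodes, e_th):
    """Evaluate the paper's fitness function f = sum(Rs_i * Re_i),
    subject to Rs_i <= Rs_{i+1} (non-decreasing sense radii) and
    Re_i >= E_Th (residual energy above threshold). Each node is a
    (sense_radius, residual_energy) pair; returns None when a
    constraint is violated."""
    radii = [rs for rs, _ in nodes]
    if radii != sorted(radii):                 # Rs_i <= Rs_{i+1}
        return None
    if any(re < e_th for _, re in nodes):      # Re_i >= E_Th
        return None
    return sum(rs * re for rs, re in nodes)

nodes = [(1.0, 0.9), (1.5, 0.8), (2.0, 0.7)]   # illustrative values
score = cluster_fitness(nodes, e_th=0.5)       # 0.9 + 1.2 + 1.4 = 3.5
```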
c. Genetic Operators: Genetic operators are used to select the next generation
on the basis of the previous generation. Selection, a genetic operator,
is used to improve the quality of the initial population by selecting the best
chromosomes, here referred to as nodes, for the new generation. The proposed
algorithm uses the Roulette Wheel Scheme (RWS) to select the multi-hop nodes of
the selected CH. The nodes are then sorted in non-decreasing order of their
Euclidean distances from the cluster head. Finally, the top ten nodes are
selected using RWS for the new generation. The mutation genetic operator
improves the quality of new generations as it alters the genes of a chromosome
from a single parent; it is used to preserve genetic diversity. The proposed
algorithm applies mutation by selecting the nodes corresponding to the previous
CH as the new generation, which maintains diversity when the population becomes
similar near a local minimum. The proposed algorithm uses boundary mutation to
check the sensing radius of the nodes.

32 R. Punj and R. Kumar

Fig. 1. Block diagram of CHS-GA
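The selection step can be sketched as follows; weighting candidates by inverse distance to the CH is an assumption for illustration, since the text only states that nodes are sorted by distance and then drawn by RWS.

```python
import math
import random

def roulette_wheel(candidates, weight, k):
    """Roulette Wheel Selection: draw k distinct candidates, each
    chosen with probability proportional to its weight on every spin
    (selection without replacement)."""
    chosen, pool = [], list(candidates)
    for _ in range(k):
        total = sum(weight(c) for c in pool)
        spin, acc = random.uniform(0.0, total), 0.0
        for c in pool:
            acc += weight(c)
            if acc >= spin:
                chosen.append(c)
                pool.remove(c)
                break
    return chosen

ch = (0.0, 0.0)                                  # cluster head position
nodes = [(1, 1), (2, 2), (5, 5), (0.5, 0.2), (3, 1)]
nodes.sort(key=lambda n: math.dist(n, ch))       # non-decreasing distance
picked = roulette_wheel(
    nodes, weight=lambda n: 1.0 / (1.0 + math.dist(n, ch)), k=3)
```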

4 Complexity Analysis
This section presents the complexity analysis of the proposed algorithm CHS-GA.
In this paper, we analyze the computational complexity of the proposed
algorithm with respect to time complexity, overhead and fault tolerance.

Lemma 1. The total time to find cluster heads required to cover the whole net-
work is inversely proportional to n and the overhead is O(n).
CHS-GA: An Approach for Cluster Head Selection 33

Algorithm 2. CHS-GA
1: repeat ∀ n
2:   Total Population, Nt = {n1, n2, n3, . . . , nt}
3:   while i = 1 to 10 do
4:     ni ← Random Selection(Ni)
5:     if (Rsi ≤ Rsi+1 && Rei ≥ ETh) then
6:       Nc ← Nc ∪ {ni}
7:       i = i + 1
8:     end if
9:     Each node maintains a Routing Table (RT) consisting of Node Id, Residual
       Energy and Next Hop
10:  end while
11:  NCH = max(Nc)
12:  NCH floods a message to the rest of the nodes in the cluster
13:  a = Multi-hop nodes for NCH
14:  for (a = 1; a ≤ k; a++) do
15:    calc D(ai, ai+1)
16:  end for
17:  Nk = nodes selected using RWS with min(D) for max(k) = 10
18:  NextCluster = Nk
19:  for (Nk = 1; Nk ≤ 10; Nk++) do
20:    Goto step 2
21:  end for
22: until all the sensor nodes are included in one of the clusters

Proof. The time depends upon the total number of sensor nodes (N) and cluster
members (n) in the network. These values are pre-defined or user-specific; therefore,
the optimal number of cluster heads required to cover the whole network is known
beforehand. The constraints for including any node in a cluster are checked glob-
ally, which reduces the overhead of checking conditions at each level. The nodes
that do not satisfy the constraints, i.e., outliers, are declared as dead nodes. The
worst case occurs when n is small: it leads to fewer cluster members
per cluster and hence more cluster heads in the network. If n is large,
then the physical size of a cluster increases, which defeats the purpose of clus-
tering. The best case is when n has an optimal value that balances the load at
the cluster head as well as manages the physical size of the cluster. Also, load
balancing helps in reducing energy consumption at the CH.
After CH selection, the CH advertises control messages to all cluster members
to initiate the communication process. The CH and all cluster members maintain a
routing table with 3 entries, i.e., node id, residual energy and next hop.
Since the control messages are transmitted only once, we neglect this overhead.
The worst case occurs when the selected cluster head dies and control messages
are transmitted again to re-elect the cluster head.

Lemma 2. The proposed algorithm CHS-GA is fault-tolerant.

Proof. Cluster members send data to the cluster head as soon as the cluster head
is elected. The routing table is updated dynamically. The CH is a powerful node,
so there is a low probability of its failing and being declared a dead node.
CHS-GA is fault tolerant because when the cluster head fails, all the nodes are
prevented from sending their data to the dead cluster head. The cluster head
keeps track of the node which is second to it in terms of residual energy with
the help of the routing table. Before it completely sheds its energy, it sends a
control message containing its residual energy to the second-best node and
informs that node to act as cluster head. The newly elected cluster head
advertises a control message to all the cluster members for further data
communication. Thus, the network lifetime is enhanced.

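The handover described in the proof can be sketched as follows; the routing-table layout, the message fields and the node names are illustrative assumptions, not the paper's specification.

```python
def ch_failover(routing_table, dying_ch):
    """Hand the CH role to the second-best node by residual energy,
    then list the members the new CH must notify. The routing table
    maps node id -> residual energy (a simplified view of the paper's
    Node Id / Residual Energy / Next Hop table)."""
    members = {nid: e for nid, e in routing_table.items() if nid != dying_ch}
    new_ch = max(members, key=members.get)          # second-best node
    control_msg = {"type": "CH_HANDOVER", "new_ch": new_ch,
                   "residual_energy": routing_table[dying_ch]}
    notified = [nid for nid in members if nid != new_ch]
    return new_ch, control_msg, notified

table = {"n1": 0.05, "n2": 0.7, "n3": 0.4}   # hypothetical node energies
new_ch, msg, notified = ch_failover(table, dying_ch="n1")
```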
Since AnyBody [11] has a constant number of cluster heads and they serve unequal
numbers of cluster members, a cluster head sheds its energy quickly. This leads
to an increase in energy consumption and a decrease in network lifetime, whereas
in the proposed algorithm the cluster head serves an equal number of cluster
members, which balances load at the cluster head and thus enhances network
lifetime and reduces energy consumption. As HIT [10] uses chaining of cluster
heads for transmitting data from cluster head to sink, network delay increases,
which ultimately affects the total transmission time. Also, it does not take
into account the residual energy of nodes, which increases the number of dead
nodes, affecting network lifetime. HIT has no mechanism for when a cluster head
dies, whereas the proposed algorithm considers the residual energy of sensor
nodes. Also, in the worst case, if the cluster head dies, it notifies the
cluster members for future communication. Thus, the proposed algorithm CHS-GA
outperforms the existing techniques in terms of energy consumption, network
lifetime and fault tolerance.

5 Conclusion and Future Scope

An efficient cluster head selection algorithm has been proposed to improve load
balancing at the cluster head and network lifetime in WBANs. It uses genetic
heuristics for cluster head selection and creates equal-sized clusters to
balance load at the cluster head. The proposed algorithm checks for nodes in
the vicinity of the randomly selected nodes so that the nodes lie within each
other's sensing radius. The proposed algorithm guarantees selecting the node
with the highest residual energy as cluster head and finding an optimal set of
cluster heads for complete network coverage in a time inversely proportional to
the total number of sensor nodes. This balances load at the CH, which leads to
reduced energy consumption and enhanced network lifetime. Following this line
of research, in future this algorithm can be tested on moving sensor nodes,
taking into consideration the fluctuating distance between the sensor nodes and
the base station. A security mechanism can also be added at the cluster level
for secure data transmission between the source and sink.

References

1. Rashid, B., Rehmani, M.H.: Applications of wireless sensor networks for urban
areas: a survey. J. Netw. Comput. Appl. 60, 192–219 (2016)
2. Misra, S., Chatterjee, S.: Social choice considerations in cloud-assisted WBAN
architecture for post-disaster healthcare: data aggregation and channelization. Inf.
Sci. 284, 95–117 (2014)
3. Movassaghi, S., Abolhasan, M., Lipman, J.: A review of routing protocols in wire-
less body area networks. J. Netw. 8(3), 559–575 (2013)
4. Hruschka, E.R., Campello, R.J., Freitas, A.A., de Carvalho, A.C.P.L.F.: A survey
of evolutionary algorithms for clustering. IEEE Trans. Syst. Man Cybern. Part C
(Appl. Rev.) 39(2), 133–155 (2009)
5. Gajjar, S., Sarkar, M., Dasgupta, K.: FAMACRO: fuzzy and ant colony optimiza-
tion based MAC/routing cross-layer protocol for wireless sensor networks. Procedia
Comput. Sci. 46, 1014–1021 (2015)
6. Heinzelman, W.B., Chandrakasan, A.P., Balakrishnan, H.: An application-specific
protocol architecture for wireless microsensor networks. IEEE Trans. Wireless
Commun. 1(4), 660–670 (2002)
7. Yu, J., Qi, Y., Wang, G., Gu, X.: A cluster-based routing protocol for wireless sen-
sor networks with nonuniform node distribution. AEU Int. J. Electron. Commun.
66(1), 54–61 (2012)
8. Sabor, N., Abo Zahhad, M., Sasaki, S., Ahmed, S.M.: An unequal multi-hop bal-
anced immune clustering protocol for wireless sensor networks. Appl. Soft Comput.
43, 372–389 (2016)
9. Movassaghi, S., Abolhasan, M., Lipman, J., Smith, D., Jamalipour, A.: Wireless
body area networks: a survey. IEEE Commun. Surv. Tutorials 16(3), 1658–1686
10. Culpepper, B.J., Dung, L., Moh, M.: Design and analysis of hybrid indirect trans-
missions (HIT) for data gathering in wireless micro sensor networks. ACM SIG-
MOBILE Mob. Comput. Commun. Rev. 8(1), 61–83 (2004)
11. Watteyne, T., Augé-Blum, I., Dohler, M., Barthel, D.: Anybody: a self-organization
protocol for body area networks. In: Proceedings of the ICST 2nd International
Conference on Body Area Networks, pp. 1–6, Florence, Italy (2007)
12. Zhang, Z., Wang, H., Wang, C., Fang, H.: Cluster-based epidemic control through
smartphone-based body area networks. IEEE Trans. Parallel Distrib. Syst. 26(3),
681–690 (2015)
13. Chatterjee, M., Das, S.K., Turgut, D.: WCA: a weighted clustering algorithm for
mobile ad hoc networks. Cluster Comput. 5(2), 193–204 (2002)
14. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learn-
ing, 8th edn. Pearson Education, London (1989)
Proposal IoT Architecture for Macro
and Microscale Applied in Assistive Technology

Carlos Solon S. Guimarães Jr.1(B) , Renato Ventura B. Henriques1 ,

Carlos Eduardo Pereira1 , and Wagner da Silva Silveira2
Federal University of Rio Grande do Sul, Avenue Osvaldo Aranha,
103, Porto Alegre, RS, Brazil
Affecty Systems, Street Universidade das Missões, 464, Santo Ângelo, RS, Brazil

Abstract. Technology is present in different sectors of society. A world
mediated by information and communication technologies can offer
people with special needs the possibility of overcoming the limitations
imposed by their physiological condition. Today the Internet of Things
(IoT) is the emerging technology that can provide people with special
needs the support to achieve a better quality of life. It is in this con-
text that this paper proposes an IoT architecture with indoor and outdoor
scenarios connected to Assistive Technologies (AT).

Keywords: Application server · Assistive technology · Internet of

Things · Smart city · Smart home · Smart stick · Smart wheelchair

1 Introduction
Objects around us have been connected for decades. Devices like TV remote
controls and garage door openers have been part of our domestic landscape for
generations. Industrial applications of these technologies, for example through
remote monitoring and control of production, are also nothing new. In fact,
even the phrase "Internet of Things" or the abbreviation IoT is not a recent
invention [1].
However, recent developments in both networks and devices are enabling
much greater range of connected devices and IoT functionalities. Today, the
phrase “Internet of Things” refers to the world of smart connected objects and
devices. All of this is made possible by the miniaturization of electronic devices,
accompanied by a huge increase in the availability of internet connectivity. The
potential applications of this new IoT are virtually unlimited, and they have the
ability to greatly improve the quality of life of people. Devices allow a user to
change his or her thermostat remotely, dim or increase the intensity of lights,
control door locks, activate alarm systems, etc. While these applications certainly
add a level of fun and convenience for all users, the applications take on a whole
new level of importance when used by persons with disabilities and older adults.

c Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_4

Some researchers are proposing the integration of potential applications
between IoT and AT [1]. We highlight Domingo [2], which provides an overview
of IoT for people with disabilities; relevant application scenarios and the
key benefits are described together with the research challenges, which remain
open for further research. Gubbi et al. [3] present architectures for services
and different case studies, where healthcare is highlighted as a future direction
of IoT. They emphasize that, although IoT technologies can be very useful for
the care and support of people with disabilities, it is important to remember
that joining different areas represents both technological and social challenges,
involving interdisciplinary work among science, engineering, sociology and the
social sciences.
This article explores the proposal of an IoT architecture with indoor,
Smart Home (microscale), and outdoor, Smart City (macroscale), application
scenarios, integrated with the AT Smart Stick and Smart Wheelchair. The paper
is structured as follows. Section 2 describes the proposal of the IoT architecture
with the application scenarios. Section 3 defines AT. Section 4 presents the
prototype of the Front-End and Back-End proposal. Finally, Sect. 5 concludes
the article.

2 Proposal IoT Architecture

The implementation of IoT (intelligent network, smart home, smart city, per-
sonalized wearables) can be conceptualized as an ecosystem or set of scenarios,
both from a technical point of view (focusing on norms, protocols or skills) and
from a social perspective (analysis of social relationships or use cases); case
studies should be considered as user-oriented IoT implementations. Notably,
however, the smart home is a microscale ecosystem, while a smart city is a
macroscale one. In both cases, questions about architecture models for
implementing the Internet of Things infrastructure should be analyzed for each
application scenario [4].
The proposed IoT architecture is a Service-Oriented Architecture (SOA) for the
construction of software solutions whose main units of development are services:
self-described, platform-agnostic elements that perform functions ranging from
simple requests to complex processes [5]. The tiered model of the
service-oriented architecture provides services consumed by people or other
organizations to execute their activities, enabling the composition of new
services and processes. Figure 1 shows the conceptual model of the proposed
architecture.
The interactions between these components are search, publish, and invoke
operations. The service provider represents the layer that hosts the service,
allowing clients to access it. The service provider supplies the service
and is responsible for publishing the description of the service it provides.
The service requestor is the application that looks for and invokes an
interaction with the service, that is, requests the execution of a service.
Consumers search for services on the registration server and retrieve
information related to the communication interface for services during the
development phase or during client execution [6]. Middleware tools and
frameworks are being researched; at first we will use the Robot Operating
System (ROS) as the main framework for the application server. It provides the
services one would expect from an operating system, including hardware
abstraction, low-level device control, implementation of commonly used
functionality, message passing between processes, and package management [4].
Secondary frameworks and middlewares will also be used to develop scenarios
and case studies [8,9]. Multi-agent layers will be added to the architecture
in the future.
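The publish/search/bind triangle described above can be sketched as follows; the registry API and the service names are hypothetical, intended only to illustrate the SOA roles, not the project's actual implementation.

```python
class ServiceRegistry:
    """Minimal sketch of the SOA triangle: a provider publishes a
    service description, a requestor searches the registry and then
    binds to (invokes) the service it found."""
    def __init__(self):
        self._services = {}

    def publish(self, name, description, endpoint):
        """Provider side: register a self-described service."""
        self._services[name] = {"description": description,
                                "endpoint": endpoint}

    def find(self, keyword):
        """Requestor side: search service descriptions by keyword."""
        return [n for n, s in self._services.items()
                if keyword in s["description"]]

    def bind(self, name):
        """Requestor side: retrieve the endpoint to invoke."""
        return self._services[name]["endpoint"]

registry = ServiceRegistry()
registry.publish("wheelchair-telemetry",
                 "position and battery telemetry for smart wheelchair",
                 endpoint=lambda: {"battery": 0.82})
match = registry.find("telemetry")[0]
result = registry.bind(match)()          # requestor invokes the service
```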

2.1 Smart Home: Microscale

Smart homes refer to the integration of technology and information in the
home network for a better quality of life. Smart homes are equipped
with an automatic environmental control system and with various devices such
as automatic sensor control devices, actuator devices and safety devices [4].
Initially, as a starting point, we are using the openHAB framework for some
smart home services [7]. OpenHAB is software for integrating different systems
and technologies of residential automation into a single solution, and it can
act as a central system. This makes it interoperable, since openHAB
can communicate with devices that use protocols such as Z-Wave, KNX, xPL,
EnOcean, MQTT, etc. Being free software written in Java, it works on top of
any device that can run a JVM. OpenHAB has a web server integrated into its
user interface. Figure 2 shows the openHAB architecture.

Fig. 2. OpenHAB architecture: indoor environment. Source: openHAB [8].

The integration of sensors in the smart home environment with openHAB is
essential to have a control interface. OpenHAB is being tested by integrating
different hardware technologies and protocols. Its subsystems can be deployed
and configured independently. There are different web-based user interfaces
(Classic UI, GreenT and CometVisu) and there are also native clients for iOS
(openHAB) and Android (HABDroid).
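As a sketch of how a client could drive openHAB, the helper below builds requests against openHAB's REST API (reading an item state, sending a command); the base URL and the item name `Light_Kitchen` are hypothetical examples, and no network I/O is performed here.

```python
def openhab_request(base_url, item, command=None):
    """Build a request against openHAB's REST API: GET
    /rest/items/<item>/state reads an item's state, while POST
    /rest/items/<item> with a plain-text body sends a command.
    Returns (method, url, body) without performing network I/O."""
    if command is None:
        return ("GET", f"{base_url}/rest/items/{item}/state", None)
    return ("POST", f"{base_url}/rest/items/{item}", command)

# hypothetical item on a local openHAB instance
read = openhab_request("http://localhost:8080", "Light_Kitchen")
switch_on = openhab_request("http://localhost:8080", "Light_Kitchen", "ON")
```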

2.2 Smart City: Macroscale

A smart city can be defined as the use of information and communication
technologies to sense, analyze and integrate the key information of the core
systems that run cities. At the same time, a smart city can make a smart
response to different types of needs, including daily subsistence, environmental
protection, public safety, city services, accessibility, and industrial and
commercial activities. In short, the “smart city” is the “smart planet” approach
applied to a specific region, achieving the informational and integrated
management of cities. It can also be said to be an effective integration of
intelligent planning ideas, intelligent building modes, intelligent management
methods, and intelligent development approaches [4].
To test the Smart City scenario, the Google Maps API and OpenGTS are used [8].
The GPS module sends the global positioning information using a communication
protocol such as the NMEA 0183 protocol. This protocol is based on the
American Standard Code for Information Interchange (ASCII) and is output
serially to the controller, which transfers the data over a GSM connection to
the mobile operator's GGSN (Gateway GPRS Support Node), delivering the data to
a remote server over TCP (Transmission Control Protocol), as shown in Fig. 3.
A digital city uses remote sensing, the global positioning system (GPS),
geographic information systems (GIS) and other spatial information technologies
as the main means of building the digital city's geographic information
structure and a platform of urban geographic information for public service.

Fig. 3. Outdoor environment. Source: Author.

3 Visually Impaired and Handicapped

In this section the case studies will be partially presented with an approach to
the development process based on the scenarios seen in the previous sections.

3.1 Visual Impairment: Smart Stick

An appropriate product design requires interaction with the practice of
ergonomics; for this reason, ergonomics must be present in all stages of project
development. The design of the handle of the embedded system of the
Smart Stick project has been developed in conjunction with the Department of
Design and Graphic Expression (DGE) of the Federal University of Rio Grande do
Sul (UFRGS). Figure 4 presents some models in development for integration with
the embedded system [9].

Fig. 4. Prototypes of handles for the embedded system. Source: Design and Graphic
Expression (DGE) - UFRGS.

The case study has been developed based on Silva [9]; it is an electronic
system to support mobility that replaces sight with sound and vibration [10].
There are many configurations that can be defined for the design of an
electronic walking stick. Figure 5 presents the conceptual model together with
the deployment diagram of the Smart Stick.

Fig. 5. Conceptual model with deployment diagram smart stick. Source: Author.

The case study partially describes the design of a Smart Stick for telemetry
and telecontrol of an embedded system applied to the macro and micro navigation
of the visually impaired. The conceptual model shows that the embedded
micro navigation system (surrounding environment) is integrated into the stick,
while the macro navigation (telemetry and remote control) is adapted to the
visually impaired user (wearable board computer or mobile phone). This
separation takes weight off the electronic cane and also divides the processing.

3.2 Handicapped: Smart Wheelchair

For this case study, a motorized wheelchair is used that is assembled with
modules, sensors and controllers [10]. In this way, the chair can connect with
the scenarios and interact with the environments. The information is made
available to an application server, through which the system uses the services
appropriate for the needs of the users and the working conditions [11].
Figure 6 presents the deployment diagram of the Smart Wheelchair.

Fig. 6. Deployment diagram for smart wheelchair. Source: Author.

Wheelchair users can enjoy intelligent control that allows the avoidance of
obstacles and irregular terrain, executed automatically, providing greater
safety and comfort for macro and micro navigation. The goal of the
instrumentation is to develop a smart, low-cost wheelchair that, guided by a
sensor network, can avoid obstacles and prevent mistaken actions regardless of
user actions.

4 Front-End and Back-End System

In addition to the application server, frameworks, middleware and embedded
systems, the front end and back end of the project scenarios need to be
developed. Initially, some front-end prototypes are being built with
"identification" fields; the information entered in the front end of the
application is used to query the database on the application server. Each user
of the system will carry an identifier. Through this identifier, the
services [5] will be selected for the assistive technology category of the
registered user. The system must provide services for the assistive
technologies (Smart Stick and Smart Wheelchair) and the application scenarios
(Smart Home and Smart City) and make personal information available to the
user. Figure 7 partially presents screen prototypes for the Smart Home and
Smart City scenarios [8,13].

Fig. 7. Partial screen prototypes for the smart home and smart city scenarios. Source:
Author and Wagner Silveira.
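The identifier-based service selection described above can be sketched as follows; the identifiers, categories and service names are hypothetical, not the project's actual catalogue.

```python
# Hypothetical mapping from a user's AT category to the services the
# front end should request from the application server.
SERVICES_BY_CATEGORY = {
    "smart_stick": ["obstacle_alerts", "macro_navigation", "telemetry"],
    "smart_wheelchair": ["obstacle_avoidance", "terrain_warning", "telemetry"],
}

USERS = {"user-042": "smart_stick"}          # identifier -> registered AT

def services_for(identifier):
    """Select the services matching the AT category of the user
    registered under the given identifier."""
    category = USERS.get(identifier)
    return SERVICES_BY_CATEGORY.get(category, [])

available = services_for("user-042")
```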

The system back end allows administrator access to all content and
functionality. A central panel is developed with shortcuts to the most common
options and is divided into menus that allow content and functionality to be
managed. To access the system with an administrator profile, an identifier
referring to the administration must be entered on the authentication screen;
the system will then provide the features related to this profile. The system
must allow access to all configurations referring to the smart stick and smart
wheelchair; in the reporting area, statistics on the quantity of devices
available per city can be generated, along with detailed reports [6].

5 Conclusions
The project is in an exploratory phase and has been gaining ground in the
development of an IoT architecture with scenarios of applications connected to
AT to assist in the orientation and mobility of people with disabilities. The
project is in the research, modeling and definition phase of hardware and
software devices, with the first tests of a partial system planned for the
second half of 2017. Future work includes the improvement of the project as a
whole, development of embedded hardware, new services for assistive
technologies and multi-agent systems for adaptive scenarios.

Acknowledgement. The authors would like to thank to CAPES, this work has been
funded by the research project PROCAD Assistive Technologies.

References

1. G3ict: Internet of Things: New Promises for Persons with Disabilities (2015). center/publications and reports/p/productCategory books/subCat 2/id 335. Accessed 7 Sept 2016
2. Domingo, M.C.: An overview of the Internet of Things for people with disabilities.
J. Netw. Comput. Appl. 35, 584–596 (2015)
3. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of Things (IoT): a
vision, architectural elements, and future directions. J. Fut. Gener. Comput. Syst.
29, 1646–1658 (2013)
4. Robot Operating System: Open-source collection of software frameworks for robot
development (2016). Accessed 11 Nov 2016
5. Vermesan, O., Friess, P.: Internet of Things. Converging Technologies for Smart
Environments and Integrated Ecosystems, p. 17363. European Commission,
Belgium (2013)
6. Erl, T.: Service-oriented architecture. In: Concepts, Technology, and Design, pp.
83–280. Indianapolis, Indiana (2005)
7. Botta, A., Donato, W., et al.: Integration of cloud computing and
Internet of Things: a survey. J. Fut. Gener. Comput. Syst. 56, 684–700 (2016)
8. openHAB: A vendor and technology agnostic open source automation software for
your home (2016). Accessed
16 Sept 2016
9. Guimarães, C.S.S., Pereira, C.E., Henriques, R.V.B.: Telemetry and remote control
of an embedded system applied in macro and micro blind navigation. Paper presented
at the 11th international conference on remote engineering and virtual instrumen-
tation, Porto, Portugal, pp. 424–433, February 2014
10. Silva, R.F.L.: Integrated product design to urban design: electronic long cane.
Dissertation, Federal University of Santa Catarina, Brazil (2009)
11. Lee, E.A., Seshia, S.A.: Introduction to Embedded Systems. A Cyber Physical
Systems Approach, pp. 93–370. University of California, Berkeley (2011)
12. Marques, P.J.: Proposal of a wheelchair position determination system in an intel-
ligent environment. Dissertation, Federal University of Rio Grande do Sul, Brazil
13. Open GPS Tracking System: Open-Source GPS Tracking System - OpenGTS
(2016). Accessed 11 Aug 2016
Using Industrial Internet of Things to Support Energy
Efficiency and Management: Case of PID Controller

Tom Wanyama ✉

Faculty of Engineering, W Booth School of Engineering Practice and Technology,

McMaster University, Hamilton, ON, Canada

Abstract. It is generally agreed in the literature that manufacturers are starting to
monitor energy consumption in some capacity, whether at site level or down to
monitor energy consumption in some capacity, whether at site level or down to
specific processes and production lines. This monitoring is a prerequisite for
energy saving since it enables companies to make operational changes to reduce
energy consumption and costs. The main challenge to energy monitoring is the
need to integrate manufacturing, and energy monitoring and control devices that
support different communication protocols and are usually distributed over a wide
area. This paper describes how the new networking paradigm of Industrial
Internet of Things is used to show the effects of PID tuning on energy efficiency.
Moreover, the paper describes how process and energy system data is transferred
from devices using Open Platform Communication (OPC) technology over
Ethernet to business applications such as Microsoft Excel. Finally, the paper
describes how Microsoft Excel can be used to integrate process energy data with
utilities’ electricity pricing information in real-time to help plant managers to
make decisions on when and how to run manufacturing processes so as to optimize
energy use.

Keywords: Energy efficiency · PID · Industrial Internet of Things

1 Introduction

The Proportional Integral and Derivative (PID) controller is the most widely
used automatic controller for industrial processes today. This controller
requires its parameters to be adjusted according to the nature of the process;
this adapting of the controller to the process is called controller tuning. The
focus of tuning is usually minimizing the error between the process variable
(PV) and the setpoint (SP). However, it is generally agreed in the literature
that most PID controllers are not properly tuned, which affects the performance
as well as the energy consumption of the controlled system. In
traditional manufacturing systems, PID controllers as well as their associated data are
usually separated from the energy monitoring and control systems, making it difficult
to relate the controller performance parameters and the process energy consumption.
But modern industrial network technologies through the paradigm of Industrial Internet
of Things (IIoT) make all of the information throughout manufacturing facilities acces‐
sible to those who need it, whenever they need it, wherever they are [1]. This makes it

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_5

possible to integrate process control and energy monitoring information into a
single business automation application such as Microsoft Excel, enabling
real-time association of PID performance and process energy consumption.
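As background for the tuning discussion, a textbook discrete PID loop can be sketched as follows; the gains, sampling period and toy first-order plant are invented for illustration and are not the pilot heat exchanger's model.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    Poorly chosen gains show up as overshoot and oscillation, which
    is the behaviour that also wastes energy in the real process."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# drive a toy first-order process toward a setpoint of 50
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
pv = 20.0
for _ in range(400):
    u = pid.update(50.0, pv)
    pv += (u - 0.5 * (pv - 20.0)) * 0.1   # illustrative plant dynamics
```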
In this paper we present an IIoT that integrates industrial process control, energy
monitoring data, and utility electricity pricing information using industrial networking
technologies. The industrial process component of the IIoT is based on PID control of
a pilot scale heat exchanger using a Micrologix PLC that has Ethernet IP communication
capability. The energy consumption of the industrial process is monitored by the energy
component of the IIoT using an IEC 61850 SEL-751A relay. The data from the PLC
and the relay is sent to the DataHub OPC client, from where it is accessed by
the business application, in this case Microsoft Excel. In addition, this paper
describes how control and energy data is processed in a single Microsoft Excel
file, in real time, showing the effect of PID settings on the energy
consumption of the controlled system. Providing such information to machine
operators and plant managers ensures that they know the impact of the way they
operate their PID controllers on energy consumption.
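The real-time association of process energy data with utility pricing can be sketched as follows; the power samples and the two-tier tariff are invented for illustration, standing in for the live feeds the Excel workbook consumes.

```python
def energy_cost(power_kw_samples, interval_h, price_per_kwh):
    """Combine process power samples with a time-of-use tariff: each
    sample is a mean power draw (kW) over one interval, matched with
    the tariff ($/kWh) in effect during that interval. Returns total
    energy (kWh) and total cost."""
    kwh = [p * interval_h for p in power_kw_samples]
    cost = sum(e * r for e, r in zip(kwh, price_per_kwh))
    return sum(kwh), cost

# two hours at 12 kW off-peak (0.08 $/kWh), one hour on-peak (0.18 $/kWh)
total_kwh, total_cost = energy_cost(
    [12.0, 12.0, 12.0], interval_h=1.0,
    price_per_kwh=[0.08, 0.08, 0.18])
```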
The rest of this paper is arranged as follows: Sect. 2 covers the background of
data access for the PID-controlled pilot heat exchanger. In Sect. 3 we present
a case study, and Sect. 4 deals with the testing and results of the case study.
Section 5 covers the conclusions.

2 Background

2.1 Industrial Networks

Industrial networks are the backbone of Industrial Internet of Things (IIoT). In fact, IIoT
is the integration of sensors, industrial controllers and computers, cloud computing
systems, big data technologies, and advanced data analytics systems using network
(industrial networks) and web technologies. The development of IIoT is based on the
philosophy that smart machines are better than humans at accurately and consistently
capturing and communicating data, and at analyzing that data to generate actionable
information. This information should enable companies to pick up on inefficiencies and
problems sooner, saving time and money and supporting business intelligence efforts.
In manufacturing specifically, IIoT holds great potential for quality control, sustainable
and green practices, supply chain traceability and overall supply chain efficiency.
Although IIoT has the potential to improve the overall industrial supply chain efficiency,
the focus of this paper is the use of IIoT to show the effects of PID tuning and process
control on energy efficiency of industrial systems.

2.2 Open Platform Communication

Industrial networks are used in many industrial domains including but not limited to
manufacturing, electricity generation, transmission and distribution, food processing,
transportation, water distribution and waste management, oil and gas production [5].
Each industrial domain has its own slightly different networking requirements, leading
to differences in the associated network protocols. This creates a problem of integrating
46 T. Wanyama

data from different domains, since devices with different network protocols cannot communicate with each other. The Open Platform Communication (OPC) standard is the solution
to this problem. Figure 1 shows that OPC defines the standard for the interface between
industrial data servers and clients. The clients can be Human Machine Interface applications, MESs or ERPs. In addition, the figure shows that servers have specific network protocol drivers that enable them to communicate with industry-specific controllers [3].

Fig. 1. OPC standard

2.3 Energy Efficiency

As the manufacturing sector changes its business paradigm from “maximum gain for
minimum capital” to “maximum value from minimum resources”, energy efficiency is
becoming one of the most important forms of “alternative energy”. Therefore there is a need to focus on energy saving and optimization from the design to the management of manufacturing processes [4]. Energy saving and optimization through operational and
management techniques is like a marathon, rather than a sprint, with savings measured
in hour-to-hour and day-to-day increments. What enables energy optimization is the
continuous seeking of answers to the questions such as:
• When and why did a machine exceed typical energy draw?
• Why did equipment changeover cause startup surges?
• Why did component change extend the production cycle into a peak-draw period?
Therefore, energy as well as its quality has to be continuously monitored down to
the manufacturing and process lines.
Using Industrial Internet of Things to Support Energy Efficiency and Management 47

The ability to integrate energy and manufacturing process data facilitated by IIoT
brings about a search for answers to a new set of questions that can increase energy
efficiency and optimization. Such questions include:
• How does energy consumption change with process control strategy?
• What happened to energy consumption when controller parameters (e.g. the PID Kp, Ki and Kd) were changed?
• What is the difference in energy consumption between the dynamic state and the steady state?
• How do system dynamics affect energy quality and consumption?
Moreover, IIoT enables the posting and automatic updating of the energy pricing
information, including time-of-use pricing. This constantly reminds plant operators and managers of the importance of production timing to the overall cost of energy. In general,
IIoT supports the integration of energy efficiency performance criteria into production
management systems such as Manufacturing Execution Systems (MES) and Enterprise
Resource Planning (ERP) applications as an enabler of energy efficient manufacturing
processes [2].

2.4 PID Controller

PID is one of the most widely used control technologies in industry. However, the tech‐
nology has a major implementation challenge of not having associated industrial stand‐
ards. This has resulted in a wide variety of PID controller architectures. In the paper
titled “Reducing Energy Costs by Optimizing Controller Tuning”, O’Dwyer [8] reports
that up to forty-six different structures for the PID controller have been identified in
literature. Therefore, controller manufacturers vary in their choice of architecture. Yet,
controller tuning methods that work well on one architecture may work poorly on
another, affecting not only the stability of the controlled systems but also their energy efficiency.
Figure 2 shows a block diagram representation of the ideal PID controller with unity
feedback [7]. The figure shows that the control system has two main components,
namely: PID controller and the plant. The plant is made up of the process as well as the
components of the measurement element of the control system. Ts (t) is the desired
process output (setpoint), and Ta (t) is the actual process output (the process variable).
The difference between Ts (t) and Ta (t) is the process error e(t). f (t), given by Eq. 1, is
the manipulated variable generated by the PID controller. Some PID manufacturers refer
to this variable as the control variable. Note that the role of the PID controller is to
generate f (t) that minimizes e(t) [6].
f(t) = Kp [ e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt ]        (1)

Fig. 2. Ideal PID controller

Kp is the proportional gain, Kp = Kc, the controller gain (unitless).
Ti is the reset time (seconds).
Td is the rate time (seconds).
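In discrete form, the law of Eq. 1 can be sketched as follows; this is an illustrative implementation only — the gains, sample time and integration scheme are arbitrary choices, not any controller vendor's algorithm:

```python
# Minimal discrete-time sketch of the ideal PID law of Eq. 1:
# f(t) = Kp * [ e(t) + (1/Ti) * integral(e) + Td * de(t)/dt ]
class IdealPID:
    def __init__(self, kp, ti, td, dt):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0       # running integral of the error
        self.prev_error = 0.0     # previous error, for the derivative term

    def update(self, setpoint, process_variable):
        error = setpoint - process_variable
        self.integral += error * self.dt                   # rectangular integration
        derivative = (error - self.prev_error) / self.dt   # backward difference
        self.prev_error = error
        return self.kp * (error + self.integral / self.ti + self.td * derivative)

# One step with arbitrary gains: setpoint 32 °C, measured 25 °C
pid = IdealPID(kp=2.0, ti=10.0, td=0.5, dt=0.1)
mv = pid.update(setpoint=32.0, process_variable=25.0)
```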
Two hundred and seventy-six tuning rules have been identified for the ideal PID controller structure described by Eq. 1 [8], and each of these rules has a different energy cost
performance. The dependence of energy efficiency performance of controlled systems
on the PID architecture, and on the tuning rules, increases the importance of monitoring
energy performance of PID controllers in real time.

3 Case Study

This section presents a model IIoT system that uses a PID to control a pilot scale heat
exchanger. The PID controller is deployed on a Micrologix PLC and the energy supply
of the system is monitored using a SEL751A relay. The system data is collected into an
Excel file over Ethernet, using OPC technology.

3.1 System Control

Our pilot scale heat exchanger unit shown in Fig. 3 is heated using an ON-OFF controlled
blow dryer (see Fig. 4). A PID controlled fan is used to cool the unit to the desired
temperature that supports the transfer of heat to air flowing through a copper pipe that
is inside the exchanger chamber. Incoming air into the chamber is at room temperature,
while outgoing air is at a preset temperature. The focus of this paper is the control of the temperature inside the heat exchanger chamber.

Fig. 3. Heat exchanger rig

Fig. 4. Process diagram of heat exchanger

Figure 4 shows the process diagram of our pilot heat exchanger system. The setpoint
(SP) Ts (t) is the desired temperature of the exchanger chamber, and the manipulated
variable (MV) f (t) is the 0–5 V analog input to the KT-5194 DC Motor PID speed
controller. In this system, however, the controller is used in open-loop PWM control mode, where the 0–5 V input signal determines the value of the 0–24 V (10 A maximum) PWM

output. The 0–24 V power supply to the fan DC motor is the control variable (CV) c(t)
of our PID loop.
The temperature inside the heat transfer chamber is measured using an RTD probe whose resistance varies from 100 Ω at 0 °C to 220 Ω at 300 °C. The RTD signal is fed into a signal conditioner that produces a proportional 0–10 V analog signal. The signal conditioner output is the input to a Micrologix 1400 PLC, whose ADC converts the analog signal to a 0–4095 digital value. This value is scaled to produce the actual temperature of the chamber, which is the process variable (PV) Ta(t) of the PID loop [6].
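Assuming each stage of that chain is linear (100–220 Ω RTD → 0–10 V conditioner → 0–4095 counts), the scaling step can be sketched as below; the actual Micrologix scaling instruction and its parameters are not given in the paper:

```python
# Sketch of the scaling step, assuming the whole chain is linear:
# 100 Ω (0 °C) .. 220 Ω (300 °C) -> 0–10 V -> 0–4095 ADC counts -> °C.
ADC_MAX = 4095
T_MIN, T_MAX = 0.0, 300.0   # °C, matching the RTD's stated range

def counts_to_celsius(raw: int) -> float:
    """Scale a 0–4095 ADC reading to chamber temperature in °C."""
    if not 0 <= raw <= ADC_MAX:
        raise ValueError("raw count outside ADC range")
    return T_MIN + (raw / ADC_MAX) * (T_MAX - T_MIN)

pv = counts_to_celsius(437)   # example reading, roughly the 32 °C setpoint
```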
The output of our PID controller is given by Eq. 2.
Output = Kc [ e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td d(Ta(t))/dt ] + bias        (2)

We set the feedforward bias to zero. For an indirect-acting process such as cooling, the process variable (PV) Ta(t) is equal to the setpoint (SP) Ts(t) plus the process error e(t). Since the setpoint is a constant, the derivative of the process variable is equal to the derivative of the process error (see Eq. 3).
d(Ta(t))/dt = d(Ts(t) + e(t))/dt = de(t)/dt,   if Ts(t) = constant        (3)
dt dt dt
Substituting Eq. 3 into Eq. 2 results in our PID output being given by an equation similar to Eq. 1.
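The equivalence used in Eq. 3 is easy to verify numerically: with a constant setpoint, the sampled derivative of the process variable matches the sampled derivative of the error. A small sketch with synthetic data:

```python
# Numerical check of Eq. 3: for Ta(t) = Ts + e(t) with constant setpoint Ts,
# the finite-difference derivative of Ta equals that of e. Synthetic data only.
import math

dt = 0.01
Ts = 32.0                                              # constant setpoint
e = [5.0 * math.exp(-k * dt) for k in range(1000)]     # synthetic decaying error
Ta = [Ts + err for err in e]                           # indirect process: Ta = Ts + e

dTa = [(Ta[k + 1] - Ta[k]) / dt for k in range(len(Ta) - 1)]
de = [(e[k + 1] - e[k]) / dt for k in range(len(e) - 1)]
max_diff = max(abs(a - b) for a, b in zip(dTa, de))    # ~0 up to float rounding
```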

3.2 System Network

Figure 5 shows the network architecture of the IIoT system we used to study the effects
of PID tuning on energy efficiency, and to showcase the ability to display process data, energy monitoring data, and utility electricity pricing information in the same Microsoft
Excel document in real time.
Our model IIoT system has a Micrologix 1400 PLC that communicates using the Ethernet IP protocol and a SEL 751A relay that uses the IEC 61850 communication standard. At the application layer, Ethernet IP supports two data structures, namely the Common Industrial Protocol (CIP) and DF1. The Micrologix 1400 uses the DF1 data structure. On
the other hand, the SEL relay supports DNP3, Modbus, GOOSE and MMS data struc‐
tures at the application layer. At the physical layer, both Ethernet IP and IEC61850
support Ethernet; that is why we were able to connect the SEL relay and the Micrologix
PLC on the same LAN as shown in Fig. 5. Moreover, Fig. 5 shows that the business PC
accesses the process and energy sub-system of our IIoT system, as well as the webserver
of the electricity company over the Internet. This is enabled by a combination of web
services and VPN technologies used to support our IIoT system [6].

Fig. 5. IIoT PID system architecture

3.3 System Data Access

The process and energy data of our model IIoT system is accessed using KEPServer
OPC server. The server has multiple drivers including Ethernet IP and IEC61850 MMS.
These drivers are configured as channels to deliver the associated data to OPC clients.
This is necessary because OPC servers usually do not possess advanced data-access features such as HMI, alarms and event handling, data logging and historian, and process data tunneling and bridging. It is OPC clients that are normally utilized to provide these
features. In our model IIoT system, we use OPC DataHub [6] to access data from
KEPServer and provide a Human Machine Interface (HMI) for the system, as well as
Dynamic Data Exchange (DDE) to a Microsoft Excel file.
Microsoft Excel has powerful features that allow the user to directly query databases
and websites. Our model IIoT system uses the Web query feature of Excel to retrieve
refreshable information that is stored on the electricity utility company’s web site (see
Fig. 5). The pricing data is extracted from the information using Macros programmed
in Visual Basic. Then the data is analyzed and integrated with the process and energy data using the tools in Excel.
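The extraction step performed by the Visual Basic macros can be illustrated with a short sketch; the HTML fragment, its structure and the rate names below are hypothetical, since the utility's actual page layout is not given in the paper:

```python
# Illustrative parallel to the VBA macros: pull time-of-use rates out of
# retrieved web content. The page fragment below is entirely hypothetical.
import re

page = """
<table id="tou-rates">
  <tr><td>Off-peak</td><td>8.7 c/kWh</td></tr>
  <tr><td>Mid-peak</td><td>13.2 c/kWh</td></tr>
  <tr><td>On-peak</td><td>18.0 c/kWh</td></tr>
</table>
"""

def extract_rates(html: str) -> dict:
    """Return {period: cents_per_kWh} parsed from the pricing table."""
    rows = re.findall(r"<td>([\w-]+)</td><td>([\d.]+) c/kWh</td>", html)
    return {period: float(cents) for period, cents in rows}

rates = extract_rates(page)
```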

4 Testing, Results and Discussion of Results

4.1 Testing

During testing, the pilot scale heat exchanger was set to maintain an operating temperature of 32 °C in the heat exchanger chamber. This temperature was to be achieved by heating the chamber with a blow dryer, while cooling it with a PID controlled fan. Test

settings such as the PID parameters (Kp, Ki and Kd), the temperature setpoint, and the PID mode were entered through the HMI shown in Fig. 6. In addition, the HMI provided the means for
monitoring the following performance measures of the system: System supply voltage,
fan controlled voltage, power consumption, and actual heat exchanger temperature.

Fig. 6. HMI of the heat exchange system with PID in manual mode

The heat exchanger was tested with PID mode set to manual for a period of 5 min.
Its power consumption was sampled every 30 s and sent to a Microsoft Excel sheet in
real time, through Cogent DataHub OPC client. Thereafter, the heat exchanger was
tested with the PID mode set to automatic, and its power consumption was logged and
stored in an Excel sheet. In both test cases, the trapezoidal rule was used to calculate the
energy consumption of the heat exchanger.
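The trapezoidal-rule computation over the 30 s power samples can be sketched as follows (the sample values are illustrative, not the measured data):

```python
# Trapezoidal-rule integration of power samples (W) taken every 30 s into
# energy (kWh), as applied to both test cases. Sample values are illustrative.
def energy_kwh(power_watts, dt_seconds=30.0):
    joules = sum(
        (power_watts[k] + power_watts[k + 1]) / 2.0 * dt_seconds
        for k in range(len(power_watts) - 1)
    )
    return joules / 3.6e6   # 1 kWh = 3.6e6 J

samples = [12.0, 11.0, 10.5, 10.0, 10.2, 10.1]   # W, one reading every 30 s
e = energy_kwh(samples)
```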

4.2 Results

Figure 6 shows the HMI of the heat exchanger with the PID set to manual, while Fig. 7
shows the HMI with PID in automatic mode.
Figure 8 shows the power consumption of the 24 V DC, 1.7 A fan motor when
controlled by PID in manual and in automatic modes.

Fig. 7. HMI of the heat exchange system with PID in automatic mode

Fig. 8. Power consumption of the PID controlled pilot heat exchanger

Figure 9 shows the real-time display of the energy data of the heat exchanger and the energy cost in a Microsoft Excel sheet. This is made possible by the use of IIoT technologies.

Fig. 9. Real time display of heat exchanger energy data in excel sheet

4.3 Discussion of Results

The trend charts in Figs. 6 and 7 show that the PID automatic mode provides smoother system control than the manual mode. Moreover, Fig. 8 shows that the power consumption of the motor under manual mode has a higher variance than when the PID is in automatic mode. This is expected, since the manual mode is essentially ON-OFF control. Note that ON-OFF switching (control) of machinery such as Heating, Ventilation and Air Conditioning equipment causes power quality issues.
The areas under the graphs in Fig. 8 represent the energy consumed by the motor in 5 min. Our calculations show that the area under the manual mode graph is equivalent to 1.5 × 10−3 kWh, while the area under the automatic mode graph is equivalent to 0.833 × 10−3 kWh. This means that the automatic PID loop is over 40% more efficient than the manual loop, leading to over 40% reduction in energy cost.
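As a quick check, the quoted saving follows directly from the two measured areas:

```python
# Relative saving of automatic mode vs. manual mode from the measured energies.
manual_kwh = 1.5e-3
auto_kwh = 0.833e-3
saving = (manual_kwh - auto_kwh) / manual_kwh   # fraction of energy saved
```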

5 Conclusion

In this paper we present an IIoT system that integrates industrial process control, energy monitoring data and utility electricity pricing information using industrial networking technologies. The industrial process component of the IIoT is based on PID control of a pilot
scale heat exchanger using an Ethernet IP enabled Micrologix PLC. The energy
consumption of the industrial process is monitored by the energy component of the IIoT
that has an IEC61850 SEL751A relay. Furthermore, this paper describes how the IIoT
enables the processing of control and energy data in a single Microsoft Excel file, in
real-time, showing the effect of PID settings on the energy cost of the controlled system.
Providing such information to machine operators and plant managers ensures that they know the impact of the way they operate their PID controllers on energy consumption.


References

1. Bunse, B., Kagermann, H., Wahlster, W.: Industry 4.0: Smart Manufacturing for the Future, Germany Trade and Invest, Berlin, Germany, July 2014.
manufacturing-for-the-future-en.pdf. Available as of 12 April 2015
2. Bunse, K., Vodicka, M.: Managing energy efficiency in manufacturing processes –
implementing energy performance in production information technology systems. In: Berleur,
J., Hercheui, M.D., Hilty, L.M. (eds.) What Kind of Information Society? Governance,
Virtuality, Surveillance, Sustainability, Resilience. IFIP Advances in Information and
Communication Technology, vol. 328, pp. 260–268. Springer, Heidelberg (2010). doi:
3. Burke, T.J.: OPC Unified Architecture Interoperability for Industry 4.0 and the Internet of
Things, OPC Foundation.
Interoperability-For-Industrie4-and-IoT-EN-v5.pdf. Available as of November 2016
4. European Communities, ICT and Energy Efficiency: The Case for Manufacturing, Office for
Official Publications of the European Communities, Luxembourg (2009). ISBN
5. Galloway, B., Hancke, G.P.: Introduction to industrial control networks. IEEE Commun. Surv.
Tutorials 15(2), 860–880 (2013). Second Quarter
6. Kafuko, M., Wanyama, T.: Integrated hands-on and remote PID tuning laboratory. In:
Proceeding of the Canadian Engineering Education Association Conference, Hamilton,
Ontario, Canada, June 2015
7. Likins, M.: PID Tuning Improves Process Efficiency, Yokogawa Corp. of America. http://
process-efficiency/. Available as of November 2016
8. O’Dwyer, A.: Reducing energy costs by optimizing controller tuning. In: Proceedings of the
2nd International Conference on Renewable Energy in Maritime Island Climates, Dublin,
Ireland, pp. 253–258, April 2006
From Research to Education

Doru Ursuţiu1 ✉ , Andrei Neagu2, Cornel Samoilă3, and Vlad Jinga4


Transylvania University of Braşov - AOSR Academy, Braşov, Romania
Transylvania University of Braşov, Braşov, Romania
Transylvania University of Braşov - ASTR Academy, Braşov, Romania
Transylvania University of Braşov - Benchmark Electronic, Braşov, Romania

Abstract. Reducing energy demand in the residential sector is an important

problem worldwide. This study is focused on residents' awareness of energy conservation, the potential for reducing energy use, and the implementation of a solution in the field of the Intelligent House. This paper presents a newly designed integrated
wireless modular monitoring system that supports real-time data acquisition from
multiple wireless sensing units.

Keywords: Energy savings · Building monitoring system · Wireless sensor

network · Xbee · Smart home · Cypress · IQRF

1 Introduction

Energy usage and its resulting impacts on our environment have become one of the major concerns humanity is facing today. Depletion of fossil fuels, the impacts on the environment from mining those fuels, and the spectre of global warming exacerbated by burning them are critical reasons for us to become more responsible for the energy we use.
As one report of the Intergovernmental Panel on Climate Change shows, the industrial activities that our modern civilization depends upon have raised atmospheric carbon dioxide levels from 280 parts per million to 379 parts per million in the last 150 years.
The panel also concluded there’s a better than 90% probability that human-produced
greenhouse gases such as carbon dioxide, methane and nitrous oxide have caused much
of the observed increase in Earth’s temperatures over the past 50 years. According to
this report the rate of increase in global warming due to these gases is very likely to be
unprecedented within the past 10,000 years or more [1].
Generating energy requires precious natural resources, for instance coal, oil or gas, while reducing energy consumption has many benefits – we can save money and help

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_6

protect and preserve our environment. Therefore, using less energy helps us to preserve
these resources and make them last longer in the future.
In light of these facts, we believe energy must in the future be a concern of all citizens, and new tools must be created in support of our environment, starting even from every household.
It must be noted that energy service demand may also reflect changes in the level of
comfort and lifestyle requirements of households. Specific energy consumption is
defined as the energy required to maintain a particular level of energy service in households. It is a modelled alternative to energy intensity, and takes account of changes in demand for individual energy services (such as the level of household comfort or hot water use), and helps to remove the impact of higher and lower external temperatures on energy consumption.
In this paper we focus on energy consumption in Belgium and the Netherlands.
According to the Flemish Energy Agency (VEA), the average energy consumption of a person per day is 50 kWh, of which 71% represents heating [2].
Consider that most companies involved in reducing energy consumption and environmental pollution try to minimize energy consumption by raising the efficiency of their systems and improving the buildings (heating device manufacturers such as Daikin or Viessmann, campaigns like the US Solar Decathlon), or by increasing users' control over their systems (Google's Nest, Smappee, the smart metering systems of Daikin and Viessmann). The disadvantage of these systems is their high cost and the significant changes they require in the construction of the building (Fig. 1).







              Brussels region   Flemish region   Walloon region   Belgium
Natural gas        16480            20712            18733         19644
Fuel oil           16604            27902            26176         26489
Wood                 –              20202            22099         21350
Coal                 –              19110            22582         21803
Propane              –              21234            15946         18608

Fig. 1. Average total energy consumption (kWh/year, dwelling) per principal energy source per
dwelling per region and for Belgium (survey results) [2]

Figure 2 describes the heating losses from the boiler to the end user. As we can see, there are three main factors involved in the heating process: the first two are the Boiler and the Building, components that most of the companies mentioned
58 D. Ursuţiu et al.

above deal with; the third, and we might say the most unpredictable one, is people's behavior. In this research we focus on providing a technological solution, an energy monitoring system that can engage people in a more responsible way of saving energy, with the aim of lowering energy costs.

Fig. 2. Energy losses in a building

To address the above issues by monitoring the inhabitants' behaviour and informing them about their own way of using energy, this paper describes a monitoring system designed to validate, collect and connect the users with the status of the building. During our study we compared different ways of building a wireless modular system (here we focused on the wireless technologies available on the market), and with the information collected we identified and validated how the flow of collected input data can contribute to the inhabitants' perception of energy use.
Regarding the hardware, our focus is to provide a reliable and also cheap solution for the first validation stage, together with a modular configuration within the limits of the system. During our research we evaluated several scenarios and validated our assumptions through case studies conducted in two buildings.

2 Energy Monitoring Architecture

Our aim during the research is to find the best solution for creating and testing a monitoring system dedicated to student house owners. The focus of the system is on the heating structure and heating losses, which represent the largest cost for our market. Because our modular system should not affect the building structure or the heating pump structure, and should be easy to install, the following are required:
– a wireless communication structure
– collecting information of air flow in each room.

In order to identify the heating losses it is necessary to observe the temperature of the heater, the temperature of the room and the open/closed status of the window. This information should be enough to predict the thermodynamic flow (Fig. 3).

Fig. 3. Room scenario

In order to be able to validate the requirements mentioned above and also the principles of creating a modular system for this purpose, we defined several hardware requirements for the alpha product: a low-power wireless communication protocol, two temperature sensors and one contact sensor.

3 The Architecture of the System & Operation Pattern

Figure 4 presents the complete structure of the alpha system we are proposing. The flow of information is marked with the blue arrow and is composed of:
1. The “Gateway” described above – the link between the building and the cloud (formed by the Xbee module and the main logic board);
2. The “Sensor” – 2 temperature sensors, 1 contact sensor, the link board and the Xbee module;
3. The “Cloud” – database;
4. User interface – website.
For a full description of the elements mentioned above, and the role of each of them, please refer to the full paper “Evaluating the reliability and scalability of a wireless energy monitoring system in buildings” [3].
In order to validate the system's capabilities and purpose, we created an end-to-end data flow concept. The working pattern from the sensors to the user interface is described in Fig. 4. This process has 4 steps:
1. Starting with the sensors, every 30 s we collect samples from the two temperature sensors (TMP36) and the contact sensor. Through the Xbee Explorer the data is streamed to the Xbee module. The Xbee module packs the data in frames and renders the information ready for transmission.
2. The frame is sent wirelessly to the Gateway. Acting as a coordinator, the Xbee module receives the frame and forwards it to the Arduino Ethernet Board through the Arduino Xbee Shield.
3. The Arduino board unpacks the frame and pushes the raw data through the Ethernet port to the Carriots database to be stored.
4. After each week the data is interrelated and processed manually. The result is then placed on a website, using charts and an easy-to-understand description.

Fig. 4. System architecture & description of the operation pattern
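Step 1 of the pattern above can be sketched as follows. The TMP36 transfer function (500 mV at 0 °C, 10 mV per °C) is the sensor's standard one, but the ADC resolution, reference voltage and the field names are assumptions made for illustration:

```python
# Sketch of the per-sample conversion in step 1. TMP36: 0.5 V + 10 mV/°C.
# The 10-bit ADC with a 3.3 V reference is an assumption; the paper does not
# state the sampling hardware's resolution.
ADC_BITS = 10
V_REF = 3.3

def tmp36_celsius(raw: int) -> float:
    volts = raw * V_REF / (2 ** ADC_BITS - 1)
    return (volts - 0.5) * 100.0   # 10 mV per °C, 500 mV at 0 °C

def build_sample(room_raw, heater_raw, window_closed):
    """Pack one 30 s sample before it is framed for the XBee module."""
    return {
        "room_c": round(tmp36_celsius(room_raw), 1),
        "heater_c": round(tmp36_celsius(heater_raw), 1),
        "window_closed": bool(window_closed),
    }

sample = build_sample(room_raw=232, heater_raw=403, window_closed=True)
```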
Figure 5 presents the first part of the system – “The Sensor”, the module placed in the rooms, the transmission node.

Fig. 5. “The sensor” setup – the room transmission node

4 Data & User Interface

In order to validate the system and the impact that it may have on the inhabitants, we installed it in two different buildings. We collected data for several weeks without informing the inhabitants about the system's presence and purpose; after that we informed them and let them challenge each other in ways of saving energy.
After one month of collecting and storing data from each building, we developed a website including charts and a few recommendations on how to save energy, with the aim of presenting to the house owner, in an easy-to-understand way, the energy consumption in each room.
Based on the input from our three sensors we managed not just to inform the owner about the consumption but also to evaluate the habits of the residents. Figure 6 illustrates three days of consumption data collected from one of the experimental rooms (Room 4). In this case we can clearly see that the room average temperature is above the normal comfort zone of 21 °C, while the open-window usage is not energy efficient. The tenant, in most situations, turns the heater to the maximum and opens the window.

Fig. 6. Data collected from one of the devices during three days (red – heater temperature, green – room temperature, blue – window open/close).

In order to have a clear view of those habits, and based on the average of the inputs, we created a profile of each user. For that we averaged the temperatures collected and the open/closed movements of the window along the time line.
Figure 7 describes the user profile. Furthermore, after providing users with tips on how to be more environmentally friendly, we can clearly see the improvements in the energy usage of the users, represented by the green line. Even if we did not get the same involvement from each tenant, as Fig. 7 shows, it is obvious that by concentrating more on those tips we can get better results in future developments.

Fig. 7. User’s habits profile

For a better understanding of Fig. 7: the red line represents the usage habits during the first three days of testing, and the green one those of the last days, after providing the tips. Subtracting the two, in Fig. 6 we can see the improvement in user habits. This result was obtained in one room, Room 4.

5 Evaluation Methodology

To evaluate system coverage and redundancy we applied several key methods to simulate and test the system in a real case scenario, and to determine its maximum range. First we used the WHIPP tool to simulate the coverage of the system, focusing on the reception sensitivity. By using this tool we were able to gain a first understanding of the distance that might be allowed between the devices. The next step was an RSSI measurement conducted in a real environment. The test was made using the X-CTU software, allowing us to understand which links are reliable and where an extra device is needed. The last step is mesh routing redundancy: in that phase we evaluated the system's capability of establishing a new link in case of power loss.

5.1 WiCa Heuristic Indoor Propagation Prediction Tool

The WiCa Heuristic Indoor Propagation Prediction Tool (WHIPP) is an environment for planning wireless networks. The tool is a heuristic indoor network planner for exposure calculation and optimization in wireless homogeneous and heterogeneous networks, with which networks are automatically jointly optimized for both coverage and electromagnetic exposure. It is capable of predicting and optimizing the coverage and exposure of an indoor wireless network (WiFi, UMTS, XBee). It is based on an advanced and experimentally validated propagation model [4]. Figure 8 presents the layout of one of the tested buildings in the WiCa Heuristic Indoor Propagation Prediction Tool.

Fig. 8. WiCa prediction tool

In our case we used the WHIPP tool to define the exposure limitation of the system in the tested buildings. We started by creating a plan of the building. For that purpose we used preset materials such as drywall and wood doors, and XBee sensors (JN516x) with a transmitting power of 3 dBm. The elevation of the sensors, for all the simulations, was set to 1.5 m. Taking into consideration that the position of the devices is strictly related to the heating modules in the building, we focused on coverage prediction simulations.
5.2 RSSI Measurement

Received signal strength indicator (RSSI) is the signal strength level of a wireless device
measured in dBm of the last received packet [5]. The main idea behind the RSS system is that the detected signal strength value decreases as the distance travelled increases. In free space, the RSS degrades with the square of the distance from the sender [6]. Using the Friis transmission equation, the ratio of the received power Pr to the transmission power Pt can be expressed as:
Pr = Pt × Gt × Gr × (λ / (4πd))²

where Gt and Gr are the gains of the transmitter and the receiver respectively, λ (m) is the wavelength, and d (m) is the distance between the sender and the receiver. It can be seen that the larger the wavelength of the propagating wave, the less susceptible it is to path loss. The received signal strength is converted to RSSI, which can be defined as the ratio of the received power Pr to the reference power PRef, expressed in dB:

RSSI = 10 log(Pr / PRef)
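The Friis relation above can be used to sketch an expected received power for a link budget; the 3 dBm transmit power matches the WHIPP simulation setup, while the 2.4 GHz carrier and 0 dBi antenna gains are assumptions:

```python
# Free-space received power from the Friis equation, in the logarithmic (dBm)
# form usually compared against RSSI. 0 dBi antenna gains are an assumption.
import math

def friis_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, d_m):
    """Received power (dBm) over a free-space path of d_m metres."""
    lam = 3e8 / freq_hz                                  # wavelength (m)
    path_loss_db = 20 * math.log10(4 * math.pi * d_m / lam)
    return pt_dbm + gt_dbi + gr_dbi - path_loss_db

# 2.4 GHz XBee, 3 dBm transmit power, 15 m link (the reliable range reported)
rssi_est = friis_dbm(pt_dbm=3.0, gt_dbi=0.0, gr_dbi=0.0, freq_hz=2.4e9, d_m=15.0)
```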

Using the X-CTU software, one of the XBee modules is configured as Coordinator while the other as a Router. After pairing with the Coordinator, the Router starts transmitting. Once the Coordinator has received the data packets successfully, it sends back an acknowledgment (ACK). To obtain the RSSI value, the software takes the average of 100 RSSI results from 100 packets of 32 bytes each; the RSSI value is thus measured after sending 100 packets, and the average is used to generate the reported value.
The distance between the Routers and the Coordinator was variable. We applied different case scenarios depending on the building where we tested the system. As presented in the figures for Building A and Building B, we conducted static tests by placing the coordinator setup in the main hallway of each floor. The average distance between floors is 2.7 m in Building A and 2.3 m in Building B. For each floor setup we tested the link to each device in the system, one by one, in order to determine the relationship between RSSI values and distances.

5.3 Mesh Routing System Redundancy

A ZigBee mesh network configuration is done automatically and flawlessly by the XBee
devices. The Coordinator starts a ZigBee network, and other devices then join the
network by sending association requests. As we described in the second chapter, ZigBee
networks are considered self-forming networks due to their ability of self-routing.
After forming the mesh network, to relay the message from one device to another,
the most optimized path is selected. However, if one of the routers becomes damaged
or otherwise unable to communicate due to power loss, the network can select an alter‐
native route.
One of the most important characteristics of ZigBee mesh networking is its
self-healing capacity: the ability to create alternative paths through mesh routing when
one node fails or a connection is lost [7]. To test this attribute, we measured the time
that elapsed between the elimination of one path and the search for and formation of
another. We captured the messages received by the Coordinator from the network, then
powered off the Routers one by one until the link was disconnected. This experiment
determines where a repeater (Router) device is needed in order to guarantee the
redundancy of the system.
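The redundancy experiment can be modeled as a reachability check on a toy mesh graph: remove a router and test whether the Coordinator can still reach every node. This is a simplified illustration, not the ZigBee routing algorithm itself, and the node names are hypothetical:

```python
from collections import deque

def reachable(links, start):
    # Nodes reachable from `start` over bidirectional links (BFS).
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Coordinator C, routers R1/R2; the end device in room 1 links only via R2.
mesh = [("C", "R1"), ("C", "R2"), ("R1", "R2"), ("R2", "Room1")]
full = reachable(mesh, "C")
without_r2 = reachable([l for l in mesh if "R2" not in l], "C")
print("Room1" in full, "Room1" in without_r2)  # True False
```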

5.4 Test Results Between the Tool and Real Conditions

Taking into consideration the results of the mesh redundancy test, we observed that
without the device in room 2 the network cannot communicate with the device in room
1. To compare the results of the WHIPP tool with those obtained in the real-world tests,
we simulated scenario A without the device from room 2 (Fig. 9b).

Fig. 9. (a) WHIPP simulation. (b) WHIPP simulation without device 2

Figure 9 presents the two scenarios: (a) with the device from room 2 and (b) without
it. There is a change in color between the two simulations: the first shows a green-yellow
color, while the second shows a darker green, marked with the red oval. According to
the tool, this color represents a sensitivity range between −44 and −50 dBm; in the
real-world test it corresponds to a loss of connection.

6 Experiment Conclusions

To demonstrate the performance of the proposed system, different scenarios were
undertaken to measure the performance of the monitoring network, with a focus on
mesh routing redundancy and the RSSI level of the XBee modules, comparing the
results with a software simulation. The results showed that network performance
depends strongly on the distance between devices and on the indoor environment.
Communication is reliable up to a two-floor distance (15 m); beyond that, the link
degrades or cannot be established.
As a general conclusion of the design and testing phases of the proposed alpha
monitoring system, the study shows the performance of the system as a tool to monitor
and optimize energy consumption. After installing the system in student houses and
collecting data for one month, we managed to characterize the habits of the building
users. This allowed us to build a profile for each room, and informing the students of
the results validated the system's purpose of saving energy. Using the monitoring
system we obtained savings between 8% and 21% and took the first steps toward
creating a standard regarding the energy habits of the room user [3].
These results bring even more value to the goal of saving energy, as the cost of using
the system is a single small investment for an indefinite period of time. The system has
a relatively simple structure, a user-friendly interface, and a low cost, while the
long-term energy savings can bring a significant reduction in energy cost for the user
and protect the environment by conserving the natural resources needed to produce
energy.

7 Future Development

In a future development we plan to investigate the possibilities of extending the set
of sensors used and the modularity of the entire system. For the beta version we expect
to use batteries for the end nodes and to implement a friendlier interface for the end
users.
Another point of interest is to reduce the device dimensions as much as possible,
so that the next generation has all the sensors in one single box. As shown in Fig. 10,
we plan to integrate the entire sensor unit into a layered module.

Fig. 10. Concept beta version-layered module

We are also exploring the possibility of using different technologies, such as the
PSoC Analog Coprocessor from Cypress, as a layer for processing multiple sensor
inputs. The PSoC Analog Coprocessor integrates programmable analog blocks,
including a new Universal Analog Block (UAB), which can be configured with
GUI-based software components. By using this technology we aim to improve the way
we design custom analog front ends for sensor interfaces.

Fig. 11. CY8CKIT-048 PSoC from Cypress and CK-USB-04A from IQRF

Moreover, thanks to our strong collaboration with Microrisc and to their
comprehensive infrastructure for data collection, cloud storage, and online
visualization, we intend to integrate IQRF transmission technology into our system.
So far, IQRF technology has helped us scale the range of the system and, in our first
tests, reduce the energy consumption of each node to 20%. As presented in Fig. 11,
we started our tests using the CY8CKIT-048 PSoC kit from Cypress and the
CK-USB-04A from IQRF.
Regarding the user approach, we want to improve the web platform in order to give
each student better access. We also want to create a friendly way to send students
real-time messages with possible actions. In order to protect the concept, we will look
for a solution to move all the processing algorithms off the hardware. Extra features,
such as CO2 and humidity sensors, will also be implemented in the same device. This
information will add even more value to our system by providing air-quality status for
the indoor environment.


References

1. Climate Change: Synthesis Report. Summary for Policymakers (2007)

2. Energy Consumption Survey for Belgian Households: study accomplished under the authority
of EUROSTAT, Federal Public Service (FPS) Economy, SMEs, Self-Employed and Energy,
VEA Flemish Energy Agency, SPW Service Public de Wallonie, IBGE-BIM Brussels
Environment, 2012/TEM/R/
3. Neagu, A.C., Joseph, W., Deruyck, M., Haerick, W.: Evaluating the reliability and scalability
of a wireless energy monitoring system in buildings. Master's thesis
4. Plets, D., Joseph, W., Vanhecke, K., Martens, L.: Exposure optimization in indoor wireless
networks by heuristic network planning. Prog. Electromagnet. Res. 139, 445–478 (2013)
5. Digi International: XBee User Manual, pp. 1–155 (2012)
6. Dargie, W., Poellabauer, C.: Fundamentals of Wireless Sensor Networks: Theory and
Practice. Wiley, New York (2010)
7. Lin, Z., Yu, C., Ting-ting, F.: Self-healing network organization and protocol implementation
based on ZigBee technology. Commun. Technol. 45(4), No. 244 (2012)
8. European Innovation Partnership on Smart Cities and Communities: Strategic
Implementation Plan; Smart Cities and Communities - European Innovation Partnership
9. Perumal, T.: Development of an Embedded Smart Home System. Universiti Putra
Malaysia (2006)
10. Crandall, A.S., Cook, D.J.: Smart Home in a Box: A Large Scale Smart Home Deployment.
School of Electrical Engineering and Computer Science, Washington State University
11. Kamilaris, A.: Enabling Smart Homes Using Web Technologies. University of Cyprus (2012)
12. Ullah, M. Z.: An Analysis of the Bluetooth Technology. School of Computing Blekinge
Institute of Technology Soft Center, Sweden, June 2009
Development of M.Eng. Programs with a Focus
on Industry 4.0 and Smart Systems

Michael D. Justason, Dan Centea(✉), and Lotfi Belkhir

McMaster University, Hamilton, ON, Canada


Abstract. Master of Engineering Programs are often designed to provide skills
that can be readily used in industry. Although many M.Eng. Programs include
courses that can be selected from an existing pool of traditional engineering topics
to fulfill various specializations, this paper describes the development of new
M.Eng. Programs designed to include courses that address the new trends in
industry. This paper presents the design and implementation of new M.Eng.
Programs that focus on modern approaches in manufacturing; namely Industry
4.0 and Smart Systems. The integration of these new M.Eng. Programs with
related undergraduate programs is also described, as is the potential to provide
certain students with an accelerated pathway to professional licensure. Several
common elements of Industry 4.0 trends are contained within these new
programs. These elements include cyber-physical systems, internet of things, and
development of smart systems. This paper presents the development of three
M.Eng. Programs: Automotive, Automation, and Advanced Manufacturing.
These programs focus on real-world problems of industries in which progress is
fast and in which specialists need to provide constantly evolving, creative, and
innovative solutions. Being designed for both full-time students and part-time
students from industry, the courses developed for these programs are offered in
the evening. Students can choose between a course-and-project option that includes
six courses and a project, and a course-only option that includes eight courses. The
graduates of these programs are expected to have a strong technical grounding
with broad management and industry perspectives combined with strong nontech‐
nical areas of expertise.

Keywords: M.Eng. · Industry 4.0 · Smart systems · McMaster

1 Introduction and Background

The W Booth School of Engineering Practice and Technology (SEPT) at McMaster
University in Hamilton, Ontario, Canada is a School contained within the Faculty of
Engineering. SEPT offers seven undergraduate programs which award Bachelor of
Technology (B.Tech.) degrees. The School also offers five specialized Masters programs
awarding M.Eng. degrees. The defining characteristic of the School is its focus on real-
world problems. SEPT exists as a complement and a contrast to the traditional Depart‐
ments within the Faculty of Engineering which focus on theory and discovery and award

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_7
Development of M.Eng. Programs with a Focus on Industry 4.0 and Smart Systems 69

Bachelor of Engineering (B.Eng.) degrees, Master of Applied Science (M.A.Sc.)
degrees, and Doctorate (Ph.D.) degrees.
The undergraduate programs in SEPT differ from traditional Bachelor of Engi‐
neering Programs in several ways. Some advantages of the programs are: they are
strongly influenced by industry and their curriculums are flexible, they focus more on
experiential learning and student-centered learning, they employ more sessional/
industry instructors, and they integrate the fundamentals of business into the curriculum.
One disadvantage of the programs is that they are not ‘accredited’ by the governing body
that oversees engineering curricula in Canada. Graduates of these programs therefore
have a more difficult pathway to professional licensure than graduates of traditional
Bachelor of Engineering programs.
Of the seven undergraduate programs in SEPT, four of them are “degree completion
programs” (DCP). These are designed for graduates of post-secondary institutions called
“colleges of applied arts and technology” or colleges for short. McMaster University’s
degree completion programs offer a Bachelor of Technology degree upon the completion
of 24 courses above and beyond the completion of a three-year college diploma in a
related field. The four DCP programs are: Civil Engineering Infrastructure Technology,
Energy Engineering Technologies, Manufacturing Engineering Technology, and Soft‐
ware Engineering Technology. Each program contains 17 technical courses and 7 busi‐
ness/management courses. These programs also contain a mandatory 8-month co-op
work term, although this requirement is waived for the majority of students since the
program’s unique evening and weekend schedules attract a large number of working
students. There are currently 400 students in the DCP programs; the majority are
enrolled as part-time students.
The three remaining undergraduate programs in SEPT are Automotive and Vehicle
Technology, Process Automation Technology, and Biotechnology. These programs are
direct-entry from High School and are 4.5-year degrees which include 12-months of co-op
work placement. These programs are offered during regular daytime hours and are full-time
programs. There are currently 800 students enrolled in these programs.
At the graduate level, the School of Engineering Practice and Technology offers five
unique Masters programs–four granting the degree Master of Engineering, and one
granting the degree Master of Technology. These programs are: Master of Engineering
Entrepreneurship and Innovation, Master of Technology Entrepreneurship and Innova‐
tion (open to students with non-engineering/non-science undergraduate degrees),
Master of Engineering Design, Master of Engineering and Public Policy, and Master of
Engineering in Manufacturing Engineering. There are currently 150 students enrolled
in these programs.
The expansion of the Master of Engineering in Manufacturing Engineering into three
‘Industry 4.0 and Smart Systems’ focus-areas forms the subject of this paper. The three
new M.Eng. focus-areas are: Automation, Automotive, and Advanced Manufacturing.
70 M.D. Justason et al.

2 Motivation

The School of Engineering Practice and Technology is well positioned to prepare grad‐
uates for employment in the manufacturing sector. The School’s location in Hamilton,
Ontario is central to Canada’s manufacturing industry.
Producing university graduates with skills that are immediately applicable is a chal‐
lenge in many industries with rapidly-changing technology, and this is especially true
in the manufacturing industry [1]. It is particularly evident in the area of automotive
engineering [2]. SEPT has already implemented undergraduate programs that address
this challenge; namely Automotive and Vehicle Technology, Process Automation Tech‐
nology, and Manufacturing Engineering Technology. Industry 4.0-based Master’s
Programs will provide students with a continuing pathway to graduate-level programs.
It is also intended to facilitate the pathway to professional licensure for graduates of the
aforementioned undergraduate programs, but is also open to graduates of traditional
undergraduate engineering programs as well as international students. The pathways
created by the new M.Eng. programs are shown in Fig. 1.

[Fig. 1 depicts the pathway diagram: graduates of Bachelor of Engineering programs
(traditional departments with accredited programs, e.g. Mechanical and Industrial) and
of Bachelor of Technology programs (Automotive and Vehicle Technology, Process
Automation Technology, Manufacturing Engineering Technology) proceed to the
Master of Engineering in Manufacturing (Industry 4.0 and Smart Systems focus areas,
including Advanced Manufacturing), and then to the Professional Practice Examination.]

Fig. 1. Pathways

It should be noted that graduates of the Bachelor of Technology undergraduate
program have an existing pathway to professional licensure (shown by the light-grey
patterned arrow in Fig. 1) but this involves a series of technical challenge exams admin‐
istered by the Provincial license-granting body. The number of exams can range from
as few as four, to as many as ten depending on the year of graduation. More recent
graduates are assigned fewer exams thanks to the evolution of the curriculum towards
content that is more favorable to the licensing body. Completion of an M.Eng. degree
after the B.Tech. degree can significantly reduce or in some special cases even eliminate
the need to complete any challenge exams (represented by the dark-grey patterned
arrow). Additionally, the time spent in the M.Eng. program may also count towards the

amount of work experience required for licensure. Typically, the M.Eng. program will
count as 1-year of the required 4-years of work experience.
The new M.Eng. Programs within the School of Engineering Practice and Tech‐
nology have content and delivery methods consistent with the Vision of the new school.
The School’s Vision can be characterized by the following elements: industry-driven,
hands-on, case-studies, in course projects, advanced methods and technologies, inno‐
vative teaching methods, sustainability, community-focused, professional development,
communications, management, design, problem-solving, and integration of professional
and technical skills.
With this Vision in mind the motivation for introducing these three new focus areas
to the Master of Engineering in Manufacturing Engineering (MEME) can be organized
into four main areas:
1. Opportunities for Students
2. Opportunities for Faculty
3. Opportunities for Partners
4. Opportunities for the Faculty of Engineering.
The creation of the new M.Eng. programs also provides the School with an oppor‐
tunity to educate students in areas that are complementary to the technical aspects of
Industry 4.0. Successful Industry 4.0 implementation involves aspects of a business
‘outside’ the functions related directly to the manufacturing process. Business consid‐
erations such as human resource management, accounting and finance, strategy, culture,
and leadership all play a role in the successful implementation of Industry 4.0. This
supports the idea of a T-Shaped graduate, with a broad knowledge of business that is
outside their specific technical area [3]. This concept is particularly important in the area
of human resource management and supply-chain management [4, 5].

2.1 Opportunities for Students

These new focus areas within the M.Eng. Programs in the W Booth School of Engi‐
neering Practice and Technology create the following opportunities for students:
• The chance to obtain a graduate degree in a high-demand, industry driven topic.
• Pathways to the new M.Eng. in Manufacturing Engineering (MEME) focus areas can
be streamlined by adding undergraduate elective courses that offer advanced credit
for M.Eng. programs.
• Undergraduate and Graduate students will connect inside the courses that are offered
for advance credit. This offers the potential for mentoring, and possibly even collab‐
oration on projects.
• Graduate students will have the opportunity to become Teaching Assistants for the
Undergraduate courses.
• Undergraduate students may have increased contact with industry partners engaged
in projects with Masters students. Possible co-op placement opportunities for under‐
graduate students may result.

• A clear pathway into a Master’s program will offer Bachelor of Technology graduates
an improved pathway to professional licensure.
• Completing an M.Eng. degree offers Bachelor of Technology graduates the oppor‐
tunity to participate in the Ritual of the Calling of the Engineer (the ‘Iron-Ring’).
• The new focus areas will remain accessible to graduates of more traditional Bachelor
of Engineering programs, both from McMaster and elsewhere.

2.2 Opportunities for Faculty

It should be noted that the undergraduate programs in SEPT employ a large number of
contract faculty and teaching-track faculty. These faculty members have heavy teaching
loads and are often responsible for teaching most of the core and entry-level courses.
These new focus areas within the M.Eng. programs in the W Booth School of Engi‐
neering Practice and Technology create the following opportunities for the undergrad‐
uate faculty members:
• Opportunities to teach and mentor graduate students
• Opportunity to teach graduate level courses in their own areas of expertise
• Chance for collaborative applied research, innovation in teaching and learning, and
pedagogical research.
Faculty already teaching in the existing five M.Eng. programs may see the following benefits:
• Synergy of people with common interests
• More effective use of human resources and less reliance on sessional lecturers
• Possibility to share resources: building (labs, room bookings, meeting rooms) and
technical support staff.
Additionally, there are ‘research gaps’ between current manufacturing systems and
the potential that exists with the implementation of industry 4.0 ideas. This supports the
idea for educational programs designed specifically around an Industry 4.0 framework
and creates the potential for research opportunities for Faculty involved in these new
focus areas [5].

2.3 Opportunities for Partners

These new focus areas within the M.Eng. Programs in the W Booth School of
Engineering Practice and Technology create the following opportunities for the
School's partners:
• Feeder colleges to the DCP programs can offer students a direct pathway from college
through to an M.Eng. degree, and ultimately to professional licensure
• Community Partners (Companies, Organizations, and Government):
– Richer engagement with groups of undergraduate and graduate students
– Working with a broader spectrum of potential co-op or full-time employees
prescreened through project engagement.

2.4 Opportunities for Faculty of Engineering

• Growth in enrollment at both the undergraduate and graduate levels
• Ability to deliver specialized M.Eng. programs not currently offered in the traditional
engineering Departments
• The expansion of an already effective School focused on depth and breadth of
learning, pedagogical research, and engineering practice
• Expanding programs committed to serving community and industry needs
• Flexibility to offer undergraduate and graduate curriculum that responds quickly to
changes in community and industry needs (unlike accredited programs)
• Enhanced recognition and reputation of the Faculty
• Further the Faculty’s mission to implement the concept of sustainability into the
curriculum of all programs - embedding sustainability into the new M.Eng. focus-
areas is important in light of the great opportunities that exist in the area of Industry
4.0 [6].
An opportunity for the Faculty, and in particular the School of Engineering Practice
and Technology, is the opportunity to create a special teaching/learning/research facility
called a ‘Learning Factory’. This small-scale ‘functioning’ manufacturing facility offers
a chance for smaller companies to train employees in the skills and technology needed
to implement Industry 4.0 concepts. It also positions the university as a direct supporter
of local/regional companies through technology as well as providing a supply of appro‐
priately trained professionals [7].

3 Methodology

The activities described in this section were carried out by a SEPT committee called the
M.Eng. Task Force. This group of six SEPT faculty members, plus one representative
from the School of Graduate Studies, held monthly meetings from approximately
mid-2015 to mid-2016. This section outlines the committee’s activities.

3.1 Market Research

The first step in the development of the new M.Eng. focus areas was to engage students
and alumni in market research. The results of a survey that included responses from 354
B.Tech. students, 342 B.Eng. students, and 146 alumni are shown below. A study of
competing programs at nearby Universities was also completed.
• B.Tech. Students–50% indicated a desire to pursue an M.Eng. degree
• B.Eng. Students–more than 80% indicated a strong interest in an M.Eng. degree
• Alumni–approximately 66% of McMaster Engineering Alumni living within a 1-hour
commute of McMaster indicated a strong interest in an M.Eng. degree
• M.Eng. programs at other Ontario Universities are popular; even ‘over-subscribed’
• Based on historical enrollment numbers in the existing M.Eng. programs, there is
typically a large demand from international students (>60% of existing M.Eng.
students are international students).

Based on the results of the market research, it was evident that the demand for M.Eng.
programs was strong among all target groups. This market research encouraged the
M.Eng. Task Force to continue its activities.

3.2 Implementation
To facilitate a January 2017 implementation of the new Master of Engineering in Manu‐
facturing Engineering (MEME) focus areas, the new focus areas needed to be structured
within the framework of the existing program. It was not possible to seek approval for
a completely new program structure as this could take up to 2-years. The existing
framework for the MEME program was as follows:
• Students can take up to two graduate level courses from the Mechanical, Materials,
and Chemical Engineering Departments.
• Each student must complete a project at a manufacturing company plus six graduate
level courses (or 8-graduate level courses without a project).
• Courses from Departments other than the three ‘approved’ Departments (Mechan‐
ical, Materials, and Chemical) must be approved on a case by case basis.
• All other courses must be taken within SEPT.
The details of the implementation suggested that it was possible to offer the new
MEME focus-areas for students starting the program in January 2017 provided they
elected to complete the 6-course plus project option. The 8-course option would need
to be implemented in the Fall of 2017 due to the requirement to create and seek approval
for additional (new) courses within SEPT.

Table 1. Actual course-offerings ('C' = core, 'E' = elective)

Course | Automation | Automotive | Advanced Manufacturing
Industry 4.0 | C | C | C
Components, networks | C | E | C
Sensors and actuators | C | C | C
Data mining & machine learning | E | E | E
Cyber security | E | E | E
Systems analysis and | E | C |
Hybrid & electric vehicles | E | C |
Additive manufacturing | C | E |
Robotics | E | E | E
Analysis & troubleshooting of mfg operations | C | C | E
Real time control, advanced | C | E | E

Other actions undertaken by the M.Eng. Task Force included:

• Preparation of an expanded list of pre-approved graduate level courses from Depart‐
ments other than the three approved Departments (Mechanical, Materials, Chemical).
• Approval to offer a selected number of SEPT undergraduate courses as possible
‘advance-credit’ courses.
• Design and approval for new SEPT Industry 4.0-themed courses for inclusion in the
Fall 2017 program start (see Table 1).

3.3 Final Program Design

• Students will be required to take eight courses
• Students may opt for six courses plus a project (project subject to approval)
• Full-time and part-time studies will be possible; courses delivered in the evenings
when possible
• Some online course-offerings will be considered.

3.4 Future Developments

The launch of a fourth Industry 4.0 focus area is targeted for September 2018. This focus
area would be in ‘Digital Solutions’.
A second M.Eng. theme-area tentatively referred to as ‘Smart Cities’ is also targeted
for September 2018. Specializations may include: Civil Infrastructure, Biotechnology,
and Power and Energy.

4 Summary and Conclusions

A set of M.Eng. Programs developed at McMaster University in the School of
Engineering Practice and Technology, with a focus on modern real-world problems
from industry and society, is expected to produce graduates well positioned for careers
on the leading edge of manufacturing engineering and technology.
The new M.Eng Programs are innovative, interdisciplinary, industry-focused, and
have a strong focus on management, leadership, and community engagement. They have
strong industry interaction and include projects meaningful to society. The new
Programs also complement the associated undergraduate programs in manufacturing,
software, process automation, and automotive and vehicle technology, yet remain acces‐
sible to graduates of more traditional engineering disciplines.
Although implementations of Industry 4.0 key elements can vary significantly within
different specializations, there are several common elements. These include cyber-
physical systems, internet of things, and development of smart systems. This paper
presented the development of three M.Eng. Programs that include these elements: Auto‐
motive, Automation, and Advanced Manufacturing. These programs focus on real-
world problems of industries in which progress is very fast and in which specialists need
to provide constantly evolving, creative, and innovative solutions. The M.Eng. programs

offer full-time and part-time options, as well as course-and-project or course-only options.

The graduates of these programs are expected to have a strong technical grounding
with broad management and industry perspectives combined with strong nontechnical
areas of expertise.


References

1. Schuh, G., Gartzen, T., Rodenhauser, T.M.A.: Promoting work-based learning through
industry 4.0. In: The 5th Conference on Learning Factories 2015, Bochum (2015)
2. Riel, A., Tichkiewitch, S., Stolfa, S., Kreiner, C., Messnarz, R., Rodic, M.: Industry-academia
cooperation to empower automotive engineering designers. In: 26th CIRP Design Conference,
Stockholm (2016)
3. Schumacher, A., Erol, S., Sihn, W.: A maturity model for assessing industry 4.0 readiness and
maturity of manufacturing enterprises. In: Changeable, Agile, Reconfigurable and Virtual
Production, Stockholm (2016)
4. Hecklau, F., Galeitzke, M., Flachs. S., Kohl, H.: Holistic approach for human resource
management in Industry 4.0. In: 6th CLF - 6th CIRP Conference on Learning Factories (2016)
5. Huxtablea, J., Schaefera, D.: On servitization of the manufacturing industry in the UK. In:
Changeable, Agile, Reconfigurable and Virtual Production, Bath (2016)
6. Stock, T., Seliger, G.: Opportunities of sustainable manufacturing in industry 4.0. In: 13th
Global Conference on Sustainable Manufacturing- Decoupling Growth from Resource Use,
Ho Chi Minh City (2016)
7. Faller, C., Feldmuller, D.: Industry 4.0 learning factory for regional SMEs. In: The 5th
Conference on Learning Factories 2015, Bochum (2015)
Remote Acoustic Monitoring System
for Noise Sensing

Unai Hernandez-Jayo1,2(B), Rosa Ma Alsina-Pagès3, Ignacio Angulo1,2,
and Francesc Alías3
DeustoTech - Fundación Deusto, Avda. Universidades, 24, 48007 Bilbao, Spain
{unai.hernandez, ignacio.angulo}
Facultad Ingenierı́a, Universidad de Deusto, Avda. Universidades, 24,
48007 Bilbao, Spain
GTM - Grup de Recerca en Tecnologies Mèdia, La Salle - Universitat Ramon Llull,
Quatre Camins, 30, 08022 Barcelona, Spain
{ralsina, falias}

Abstract. The concept of smart cities comprises a wide range of control
and actuator systems aimed at improving the habitability and perception
that citizens have of cities. A smart city covers many of these systems,
ranging from applications that facilitate the governance of cities and
encourage citizens’ participation to services focused on improving their
quality of life. Among them, we can highlight those using Information
and Communication Technologies (ICT) to improve the environment of
the city. Besides deploying air quality monitoring systems, smart cities
are beginning to include other ICT-based systems, such as the work in
progress proposed in this paper, which aims to remotely monitor noise
levels at different points of the city using the public bus system as a
mobile sensor network.

1 Introduction
According to the European Commission, a large majority of European Citizens
are living in urban environments [1], accounting for approximately 50.5% of the
world's population in 2010 [2]. This trend, far from diminishing, is increasing year
by year. The United Nations Population Division reported that in 1990 there were ten
“mega-cities” with 10 million inhabitants or more. In 2014,
the number of mega-cities was 28 (representing about 12% of the world’s urban
dwellers). By 2030, the world is projected to have 41 mega-cities with 10 million
inhabitants or more [3].
From the perspective of the emerging economies, mega-cities will become
the largest markets for new technology products, due to the need of
the authorities to apply the so-called ICT (Information and Communications
Technologies) to deal with problems related to the economy, buildings, mobility,
energy, citizens, planning and governance of the cities. It is in this scenario
where the concept of smart cities has been developing for the last ten years
in order to provide solutions to the new challenges posed by these urban areas.

c Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6 8
78 U. Hernandez-Jayo et al.

The idea of smart cities comprises a wide range of control and actuator
systems aimed at improving the habitability of cities and the perception
that citizens have of them. A smart city covers many of these systems,
ranging from applications that facilitate the governance of cities and
encourage citizens’ participation to services specifically focused on
improving their quality of life. Among these systems, we can highlight
those using ICT to improve the environment of the city, not only from an
air quality perspective but also by controlling the noise levels of the
city. According to the World Health Organization, environmental noise has
emerged as the leading environmental nuisance, triggering one of the most
common public complaints in many Member States of the European Union. The
European Union tries to face the problem of environmental noise with
international laws and directives (such as the European Noise Directive [4])
on the assessment and management of environmental noise [5].
In this context, the work in progress presented in this paper shows the ICT-
based approach that the University of Deusto and the Ramon Llull University
are jointly developing in the frame of the Aristos Campus Mundus initiative,
with the goal of obtaining real-time information about the equivalent noise
levels (Leq) of a city. For that purpose, the developed tool will be able to
remotely monitor noise levels at different points of the city using the public
bus system as a mobile sensor network.
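The target quantity Leq is the logarithmic average of the squared sound pressure over a measurement window, relative to the 20 µPa hearing-threshold reference. A minimal sketch of the computation (the function name and sample values below are ours, not from the paper):

```python
import numpy as np

P0 = 20e-6  # reference sound pressure in air: 20 micropascals

def leq(pressure_samples):
    """Equivalent continuous sound level in dB over the sample window."""
    p = np.asarray(pressure_samples, dtype=float)
    return 10.0 * np.log10(np.mean(p ** 2) / P0 ** 2)

# A constant pressure of 1 Pa corresponds to roughly 94 dB SPL.
print(round(leq(np.full(48000, 1.0)), 1))  # → 94.0
```

Longer Leq windows (e.g., one second or one minute) are obtained simply by averaging over more samples before taking the logarithm.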
The paper is structured as follows. In Sect. 2, the related work regarding
mobile acoustic monitoring is detailed, and in Sect. 3, its challenges are pointed
out. In Sect. 4, the hardware approach proposed to address the problem of mobile
acoustic monitoring is explained, and in Sect. 5 the first acoustic signal processing
algorithms developed to face this challenge are described. In Sect. 6, we detail
the expected outcomes of the collaborative project, and, finally, in Sect. 7 we
focus on the conclusions and future work.

2 Related Work in Mobile Acoustic Monitoring

Traditional noise measurements in cities have mainly been carried out by
professionals who record and analyze the data at a certain location, typically
using certified sound level meters. This approach allows reliable analyses but
is costly, hardly scalable and can hardly follow the rapid changes of urban
environments. To address these drawbacks, in the last decade several systems
focused on the monitoring of environmental noise have been proposed (see [6]
and references therein). Their main goal has been the development of small
equipment that assures the reliability of the acoustic measurements. Moreover,
these systems have been designed to allow scalability by reducing the cost of
the hardware (i.e., low-cost acoustic sensors) and improving the network data
communication in order to tailor a noise map by means of mobile acoustic
monitoring.
One of the first experiences in mobile acoustic monitoring is detailed in [7],
where a mobile sensing unit (MSU) associated with a Global Positioning System
(GPS) is used to perform acoustic monitoring in Seoul (South Korea). With a
Remote Acoustic Monitoring System for Noise Sensing 79

reduced set of sensors, the Seoul ubiquitous sensing project conducted a wide
range of tests across several city locations. The MSUs were even installed in
cars and buses moving around the city following repetitive circuits. These
nodes measured temperature and humidity, as well as noise level. However, no
signal processing for the latter is included (or at least detailed), either in
the sensors or in the network hub.
In [8], the system is based on an array of sensors carried by a vehicle driving
along the streets of the city to acquire measurements from different locations.
The goal of that piece of research is to estimate the locations and the power
of the stationary noise sources at the locations of interest. For this purpose,
the data gathered by the array is post-processed before plotting the several
sources on the noise map, but no details are given about the treatment of the
vehicle’s own noise.
Dekoninck et al. [9] focus on the study of low-density roads, including both
mobile and fixed noise monitoring platforms. The proposal is based on performing
the mobile measurements by bicycle, which provides a new view on the local
variability of noise and air pollution based on computing the differences of
measurements along road segments [10]. This proposal is easily applicable to
other cities to monitor both noise and air pollution, at the cost of requiring
enough bicycle riders.

3 Challenges of Mobile Bus Acoustic Signal Processing

The first challenge associated with audio signal processing for ubiquitous noise
measurements is the correction that has to be applied for the noise generated by
the mobile vehicle transporting the acoustic sensor, in this work a public bus.
Obviously, the bus contributes to the traffic noise of the city, but this noise
source is very close to the measurement point. Therefore, it has to be detected
and its contribution to the city noise map has to be weighted appropriately.
Nevertheless, this problem has an upside: the process can take advantage of
having an audio reference of the noise source (mainly produced by the bus engine).
In [11], a signal processing system dealing with the identification and the
estimation of the contribution of different noise sources to an overall noise level
is presented. The proposal is based on a Fisher’s Linear Discriminant classifier
and estimates the contribution based on a distance measure. Later, in [12] a
similar system based on probabilistic latent component analysis is described.
This approach is based on a sound event dictionary where each element consists
of a succession of spectral templates, controlled by class-wise Hidden Markov
Models.
In [13], a review of the difficulties for appropriately measuring the perfor-
mance of polyphonic sound event detection is stated before gathering several
metrics specifically designed for this purpose. For the problem of bus noise mixed
with other traffic noise sources, the complexity of sound event recognition is
significant. On the one hand, the sounds continuously overlap; on the other hand,
the types of signals to be distinguished are very similar. To this end, a
supervised model trained to identify overlapping sound events based on
unsupervised source separation is presented in [14]. In [15], the authors detect sound
events from real data using coupled matrix factorization of spectral representa-
tions and class annotations. Finally, in [16], the authors exploit deep learning
methods to detect acoustic events by means of using the spectro-temporal local-
ity. For more references and details about these approaches, the reader is referred
to [13].
To conclude, the most challenging issue for the problem at hand is appropriately
integrating the bus and surrounding traffic noise levels into the noise map,
given their similarity, so as to compute the Leq value correctly. This increases
the complexity of the separation system significantly, but it can be addressed
thanks to the availability of the reference signal, that is, by considering the
noise generated by the bus as an input for sound source identification and its
subsequent integration in the noise map computation.

4 Hardware Approach
The hardware system designed to deploy the ubiquitous acoustic sensor network
to remotely monitor the noise pollution of a city is based on three main
components:

Fig. 1. Scope of the remote acoustic monitoring system

– Hardware platform: an embedded system capable of sampling signals across
the human audible range. This system will be provided with a microphone in
charge of collecting acoustic signals, a GPS for geo-referencing the collected
samples, and two communication modems, one WiFi and the other GPRS. After
collecting the information, the processed samples will be uploaded to a
central server.
– Signal processing software: deployed in the embedded system firmware. The
set of algorithms should be able to (a) conveniently filter the noise generated
by the public bus on which the embedded system is installed, in order to
include its contribution to the urban noise appropriately; and, in the future,
(b) characterise the noise sources captured by the microphone in order to
classify the type of traffic depending on the street and the period of the day.
– Server and web application: in charge of storing all the information processed
by the multiple embedded systems that could be deployed on board public buses,
and of showing these data in a web-based Geographic Information System (GIS)
representing a dynamic noise map of the city generated along the routes
followed by the network of public buses.

The current hardware prototype developed for the remote acoustic monitoring
system is shown in Fig. 1, which represents the scope of the whole system and
comprises the following subsystems:
– FRDM-KL25Z embedded system
– CMA-4544PF-W omnidirectional capsule microphone with automatic gain control
based on the MAX9814 amplifier
– ESP8266 WiFi communications module
– Adafruit FONA 808, an all-in-one mobile communication interface plus a GPS
module

5 Mobile Bus Audio Processing Future Approach

The first approach to the mobile audio processing system will be implemented
based on the state of the art described in Sect. 3. The structure of the
proposed system is shown in Fig. 2. The audio signal corresponding to traffic
noise is captured by the microphone and windowed every 30 ms before being
parametrized by the feature extraction block. Next, each parametrized audio
frame enters the classification stage, based on a machine learning approach
that has been previously trained for the problem at hand.
The feature extraction procedure is detailed in Fig. 3. After choosing the
frame size and overlap, the time signal is converted to the frequency domain
by means of a Fast Fourier Transform. Next, a Mel filter bank is applied to
the signal, and the logarithm of the absolute value of the Mel-based spectrum
is obtained to emulate the ear’s listening response. Finally, the inverse
cosine transform is applied to the signal and a finite number of significant
coefficients is chosen to describe the audio signal, denoted Mel Frequency
Cepstral Coefficients (MFCC) [17].
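The chain above can be sketched with NumPy; the filter and coefficient counts below are common defaults rather than values fixed by the paper, and the DCT is written out explicitly to keep the sketch self-contained:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(0.0, hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, mid, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, mid):
            fb[i - 1, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fb[i - 1, k] = (hi - k) / max(hi - mid, 1)
    return fb

def dct2(x, n_coeffs):
    """Type-II discrete cosine transform (the inverse cosine transform step)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(n_coeffs)])

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    """Windowing -> |FFT| -> mel filter bank -> log -> DCT, as in Fig. 3."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    log_mel = np.log(mel_filterbank(n_filters, len(frame), sr) @ spectrum + 1e-10)
    return dct2(log_mel, n_coeffs)

sr = 16000
frame = np.sin(2 * np.pi * 440.0 * np.arange(int(0.03 * sr)) / sr)  # one 30 ms frame
print(mfcc(frame, sr).shape)  # → (13,)
```

Each 30 ms frame is thus reduced to a short vector of cepstral coefficients, which is what enters the classification stage.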

Fig. 2. Sound event classification diagram: audio signal → feature extraction →
machine learning → recognized sound event

Fig. 3. Feature extraction procedure (Mel Frequency Cepstral Coefficients):
windowing → |FFT| → filter bank filtering → log → DCT, parametrized by the
frame size, the filter bank matrix and the number of coefficients

After obtaining the MFCC of the input audio signal, they are fed to a machine
learning algorithm for classification (see Fig. 2). Although a myriad of
algorithms can be found in the literature for this purpose, we initially
consider the Fisher Linear Discriminant algorithm [18], following the work
described in [11]. This is due to the fact that this method gives us the
coefficient of participation of each type of noise in the overall noise
picture, in our case the bus engine noise and the road traffic noise.
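For the two-class case (bus engine vs. road traffic), Fisher's discriminant projects the features onto w = S_w⁻¹(m₁ − m₀). The sketch below uses synthetic two-dimensional stand-ins for MFCC vectors; all data and names are illustrative, not measurements from the project:

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Two-class Fisher LDA: projection direction and midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix S_w (sum of the per-class scatters).
    Sw = ((X0 - m0).T @ (X0 - m0)) + ((X1 - m1).T @ (X1 - m1))
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, threshold

rng = np.random.default_rng(0)
bus = rng.normal([0.0, 0.0], 0.5, size=(200, 2))      # stand-in "bus engine" frames
traffic = rng.normal([2.0, 1.5], 0.5, size=(200, 2))  # stand-in "road traffic" frames

w, t = fisher_discriminant(bus, traffic)
X = np.vstack([bus, traffic])
labels = np.r_[np.zeros(200), np.ones(200)]
accuracy = ((X @ w > t) == labels).mean()
print(accuracy > 0.95)  # → True (the synthetic classes are well separated)
```

The projected value relative to the threshold plays the role of the participation coefficient of each noise class in a frame.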
We plan to test this approach using real data measurements on a bus driving
its route, recording audio samples of the traffic noise surrounding the
vehicle. Moreover, in order to have a general picture of the vehicle’s own
noise, we also plan to collect audio samples of several specific driving
sounds (braking, engine, throttle, etc.) that occur on the bus route.
Depending on the results obtained, other feature extraction methods and
machine learning algorithms could be considered in the future.

6 Expected Outcomes
From the mobile audio processing algorithms, we expect to identify the
contribution of the bus noise to the total traffic noise. This value will be
used to balance the noise contribution of the vehicle itself, in order to
compensate for its short distance to the measuring equipment. This way, the
final value of Leq corresponding to the traffic noise can be evaluated with
the suitable contribution of the bus noise, in order to obtain reliable
real-time noise maps of the routes of the selected bus lines.

To develop a robust mobile audio processing approach, it is necessary to
properly configure and program the hardware and the firmware of the electronic
embedded system deployed on the buses. This will require a good balance
between the accuracy of the signal acquisition, the number of collected samples
and the time needed to process them. As a first approach, our design will
perform the signal processing on board, sending the results of the signal
processing block to the central server. Then, these data will be integrated
and displayed as a noise map of the city through the web-based GIS. If the
performance of the embedded system is not good enough to obtain an accurate
noise map, then we will consider the possibility of conducting the signal
processing on the web server, using a more powerful processor at the cost of
increasing the communication costs.

7 Conclusions and Future Work

The goal of the research under development is to obtain a remote acoustic
monitoring system that makes it possible to know the noise impact of the city
in real time. For this purpose, the acoustic sensor (a microphone) is connected
to a mobile embedded device with signal processing and data transmission
capabilities. This embedded system is designed to be installed on a public bus,
so urban noise is captured along the route travelled by the bus. Therefore,
the larger the number of sensors deployed on bus lines, the more detailed the
information obtained to generate the noise map of the city. Finally, the
collected information is sent to a central server that runs a web-based GIS
application designed to display the collected noise levels in real time.
After validating the viability of the ICT-based approach, the research team is
planning a set of field operational tests to evaluate the accuracy and
reliability of the proposed system. These tests are designed in order to:
(i) characterize the natural noises of the bus, inside and outside: engine,
brakes, throttle, etc.; and (ii) improve the processing of the acquired noise
by applying the necessary error corrections according to the values obtained
in the first tests and by weighting the bus noise appropriately.
In these terms, the proposed system is intended to allow a significant step
beyond current noise maps of cities, which are updated with a certain
periodicity (usually every five years) through fixed measurement points in
the city, following the European Noise Directive.

Acknowledgement. The authors would like to thank project ACM2016/06, entitled
“Towards the development of low cost ubiquitous sensors networks for real time
acoustic monitoring in urban mobility”, from the II Convocatoria del Programa
de Ayudas a Proyectos de Investigación Aristos Campus Mundus 2016. Francesc
Alías and Rosa Ma Alsina-Pagès would also like to thank the Secretaria
d’Universitats i Recerca del Departament d’Economia i Coneixement (Generalitat
de Catalunya) under grant ref. 2014-SGR-0590.

References

1. European Union Transport Themes, Clean Transport and Urban Mobility.
mobility/index_en.htm. Accessed Nov 2016
2. World Demographics Profile 2012. Index Mundi.
world/demographics_profile.html. Accessed Nov 2016
3. World Urbanization Prospects: 2014 Revision. United Nations DESA’s Population
Division. Accessed Nov 2016
4. EU Directive: Directive 2002/49/EC of the European Parliament and the Council
of 25 June 2002 relating to the assessment and management of environmental noise.
Official J. Eur. Commun., L 189/12 (2002). European Union
5. Night Noise Guidelines for Europe. World Health Organization 2009. http://www. data/assets/pdf_file/0017/43316/E92845.pdf. Accessed Nov 2016
6. Basten, T., Wessels, P.: An overview of sensor networks for environmental noise
monitoring. In: Proceedings of the 21st International Congress on Sound and Vibra-
tion, Beijing, China (2014)
7. Hong, P.D., Lee, Y.W.: A grid portal for monitoring of the urban environment using
the MSU. In: Proceedings of the International Conference on Advanced Commu-
nication Technology, Phoenix Park, Korea (2009)
8. Zhao, S., Nguyen, T.N.T., Jones, D.L.: Large region acoustic source mapping
using movable arrays. In: Proceedings of the International Conference on Acoustic,
Speech and Signal Processing, Brisbane, Australia, pp. 2589–2593 (2015)
9. Dekoninck, L., Botteldooren, D., Int Panis, L.: Sound sensor network based
assessment of traffic, noise and air pollution. In: Proceedings of EURONOISE,
Maastricht, The Netherlands, pp. 2321–2326 (2015)
10. Dekoninck, L., Botteldooren, D., Panis, L.I., Hankey, S., Jain, G., Marshall, J.:
Applicability of a noise-based model to estimate in-traffic exposure to black carbon
and particle number concentrations in different cultures. Environ. Int. 74, 89–98
11. Creixell, E., Haddad, K., Song, W., Chauhan, S., Valero, X.: A method for recog-
nition of coexisting environmental sound sources based on the Fisher’s linear dis-
criminant classifier. In: Proceedings of INTERNOISE, Innsbruck, Austria (2013)
12. Benetos, E., Lafay, G., Lagrange, M., Plumbley, M.: Detection of overlapping
acoustic events using a temporally-constrained probabilistic model. In: Proceed-
ings of the International Conference on Acoustic, Speech and Signal Processing,
Shanghai, China, pp. 6450–6454 (2016)
13. Mesaros, A., Heittola, T., Virtanen, T.: Metrics for polyphonic sound event
detection. Appl. Sci. 6, 162 (2016)
14. Heittola, T., Mesaros, A., Virtanen, T., Gabbouj, M.: Supervised model training for
overlapping sound events based on unsupervised source separation. In: Proceedings
of the 38th International Conference on Acoustics, Speech, and Signal Processing,
ICASSP 2013, Vancouver, Canada, pp. 8677–8681 (2013)
15. Mesaros, A., Dikmen, O., Heittola, T., Virtanen, T.: Sound event detection in real
life recordings using coupled matrix factorization of spectral representations and
class activity annotations. In: Proceedings of the IEEE International Conference
on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, vol.
817, pp. 151–155 (2015)
16. Espi, M., Fujimoto, M., Kinoshita, K., Nakatani, T.: Exploiting spectro-temporal
locality in deep learning based acoustic event detection. EURASIP J. Audio Speech
Music Process. (2015). doi:10.1186/s13636-015-0069-2
17. Mermelstein, P.: Distance measures for speech recognition, psychological
and instrumental. In: Chen, C.H. (ed.) Pattern Recognition and Artificial
Intelligence. Academic Press, New York (1976)
18. Fisher, R.A.: The use of multiple measurements in taxonomic problems. Ann.
Eugenics 7(2), 179–188 (1936). doi:10.1111/j.1469-1809.1936.tb02137.x
Testing Security of Embedded Software Through Virtual
Processor Instrumentation

Andreas Lauber and Eric Sax

Karlsruhe Institute of Technology, Engesserstr. 5, 76131 Karlsruhe, Germany


Abstract. More and more functionality that demands remote access to a vehicle
is integrated into modern cars. Fleet management, infotainment, over-the-air
updates and the upcoming functionality for autonomous driving need gateways
that enable car-2-x communication. Misuse is a threat. Consequently, security
mechanisms play an increasingly important role. But how can we show and prove
the effectiveness of these security functions?
In this paper we show an approach to test security aspects based on virtual
instrumentation. The approach is to use a framework that executes the
application under development on a virtual model of the target microcontroller.
An interception library generates scenarios systematically, while the effects
on registers and memory are monitored. We intercept the running software at
vulnerable functions and variables to detect potential malfunctions. This
detects security vulnerabilities caused by internal failures even if no
malicious behavior occurs at the interfaces.

Keywords: Virtual processor · Security · Testing

1 Motivation

Within the last decade mobility has undergone major changes. One is the advent
of data exchange between cars and infrastructure. Instead of being a standalone
mechanical device, the vehicle has been transformed into a mobile platform with
extensive electronic sensors and computing power. Nowadays, a large amount of
data is available within a car in the form of sensor data, representing the
state of the vehicle as well as its understanding of the surroundings. By
exchanging such information with others, new concepts for efficient driving,
optimized traffic flow (see Kramer in [1]), and new comfort functions become
possible.
On the other hand, many new threats arise. The increasing connectivity allows
the transmission of large amounts of data in real time, so that transmitting
states for diagnosis and performing software updates over the air become
possible, i.e. the E/E topology becomes accessible via an air interface.
Therefore the vehicles may offer new attack surfaces, as some examples already
show today.
It has already been shown how modern cars can be attacked and controlled
without physical access to the vehicle [2, 3]. These attacks allow the
manipulation of the car’s brakes and driver assistance systems, or remote
eavesdropping on conversations

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_9
86 A. Lauber and E. Sax

held within the car. They are just a few examples of possible attacks. With an
even more connected car, an even broader attack surface might be created.
To secure vehicles against possible attacks, security mechanisms need to be
implemented, which has been a research focus within the last couple of years.
Unfortunately, not all attack surfaces can be closed by integrating security
mechanisms. Attack vectors can also arise through sloppy implementations and
inexperienced programming. To overcome these issues, functional testing and
testing for security weaknesses are required.
This paper is structured as follows: First we give a short overview of
state-of-the-art security testing in Sect. 2. Afterwards we categorize the
attacks on systems and point out the important test cases in Sect. 3.
Thereafter, the virtual instrumentation and processor interception are
explained in Sect. 4. The interception leads to the security testing framework
shown in Sect. 5. At the end we conclude with a summary and give an outlook
on future work.

2 State of the Art for Security Testing

2.1 Theoretical Security Analysis

In theoretical security analysis one must distinguish between high-level design
analyses and detailed analyses. In design analysis, protocols, interfaces, and
specifications are analyzed by reviewers to find and resolve systematic
vulnerabilities such as bad encryption or short keys. While only a theoretical
description of the system has to be present during design analysis, one needs
explicit knowledge about the implementation of the algorithms for the detailed
analysis.
Theoretical security analyses cannot detect implementation errors due to
misinterpretation of the specification or errors from third-party software. To
protect the system against this type of error, Bayer recommends in [4] secure
software development standards. These can be achieved by means of the various
standards and coding guidelines. Even if errors are reduced thereby, errors in
the specification or errors in third-party software can only be found by
explicit tests of the functions in the overall system.

2.2 Static Code Analysis

For static code analysis, the source code is automatically analyzed by means
of formal criteria to identify vulnerabilities. Static code analysis can
identify implementation errors, but functional errors or design errors cannot
be found by this analysis. In addition, Knechtel described in [5] that these
kinds of analyses are unreliable. He suggests the use of explicit code reviews
for sensitive functions. Another option to find weaknesses is the system test
on the real platform or a hardware prototype.
Testing Security of Embedded Software 87

2.3 Functional Security Testing

Functional testing serves to ensure the correct execution of algorithms.
Spillner describes in [6] how software can be tested. A careful execution of
the tests can detect implementation errors and the resulting security
vulnerabilities at an early stage. In order to ensure the fault-freeness of
the tests, official security test cases are usually carried out, which also
cover typical limits of the algorithms.
The algorithms are tested not only for correct behavior according to the
specification, but also for robustness. In addition, the performance of
security algorithms is tested to identify potential bottlenecks that could
affect overall security performance, according to Bayer [4].
However, functional security testing tests the security algorithms as
standalone functions. An interaction with other functions of the system is not
performed. Therefore, a weakness of an outer or sub-function can affect the
security of the system even if the security algorithms themselves are well
tested. This means the security test should always include the functions of
the overall system.

2.4 Fuzzing for Security Testing

Fuzzing is a technique that has been used for some time to test software in IP
networks. To do this, the implementations are subjected to unexpected, invalid,
or random input, with the hope that the target will react unexpectedly, thereby
revealing new vulnerabilities. The responses to such attacks range from strange
behavior of interfaces and unspecified behavior of the system to complete
crashes.
As a rule, fuzzing can be divided into three steps. First, the input data is
generated. This can either be structured according to the specification or
completely random. The data is then fed into the system interfaces and the
output is monitored. As a last step, the recorded behavior must be analyzed by
experienced programmers in order to identify potential weaknesses. The
disadvantage of fuzzing is that only the interfaces of the system are
monitored; faulty states within the system cannot be detected.
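The three steps can be illustrated with a toy harness; the parser and its seeded bug below are purely illustrative, not part of any real system:

```python
import random

def parse_length_prefixed(packet: bytes):
    """Toy target: trusts the length byte in the header (the seeded bug)."""
    length = packet[0]
    payload = packet[1:1 + length]
    return payload[length - 1]  # raises IndexError whenever the header lies

def fuzz(target, n_runs=1000, seed=1):
    rng = random.Random(seed)
    crashes = []
    for _ in range(n_runs):
        # Step 1: generate (here completely random) input data.
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)          # Step 2: feed it into the interface.
        except Exception as exc:  # Step 3: record behavior for later analysis.
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_length_prefixed)
print(len(crashes) > 0)  # → True: random inputs expose the unchecked length field
```

Note that the harness only sees the exception at the interface; an inconsistent internal state that does not crash would go unnoticed, which is exactly the limitation stated above.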

2.5 Penetration Tests

While the above tests can be automated, the penetration test is a test method
using human testers. These tests attempt to exploit known vulnerabilities and
gain access to the system. The approach is usually based on the years of
experience of the human testers who perform these tests. Typical penetration
tests exploit undocumented debugging interfaces to gain access to buses and
internal signals; by opening the controller and directly accessing the silicon,
the testers also look for information on possible attack vectors, according to
Bayer in [4]. The knowledge provided to the testers usually ranges from no
information, over access to the specification, to all information about the
source code. Therefore, these tests can be used for black-box, gray-box and
white-box tests.
State-of-the-art security testing can usually only be automated for independent
functions without the interaction of all functions in the complete system.
Knechtel writes
in [5] that attacks are rarely due to weaknesses of individual keys or
algorithms, but rather to weaknesses of the entire system. I.e., for security
testing of the overall system, including third-party software, the overall
system needs to be present. Further, the internal state of this system needs
to be monitored in addition to the external interfaces. This leads us to use
virtual instrumentation of a processor running the software under test.
Finally, it should be noted that practical security tests cannot guarantee
complete coverage. Therefore, a compromise between test effort, time and
completeness must be made. I.e., practical security tests serve only as a
complement to theoretical security analyses and the consideration of security
in the design phase.

3 Categorization of Attacks

As Radzyskewycz writes in [7], it is not a question of whether systems are
attacked, but when. Therefore it is important to implement security mechanisms
according to the state of the art. In addition, Wheatley in [8] describes that
44% of all attacks are carried out over known vulnerabilities.
The Symantec corporation describes in [9] the loss or theft of passwords,
incidental ties and insider knowledge as other important causes of intrusion
into systems. Only a very small part of the attacks on systems are carried out
via vulnerabilities unknown at the time of the attack. A distribution of the
given causes of attacks is shown in Fig. 1.

Fig. 1. Classification of attack causes

Especially due to the large number of attacks using known vulnerabilities, it
is important to design new software in such a way that known vulnerabilities
are no longer present. To ensure this, software must be regularly tested
against known vulnerabilities during the development cycle. This must include
all known security gaps, because the old wisdom from project management is
even more important in the field of security: “The later a problem is detected,
the higher the cost to fix it.”
In the PC world, vulnerabilities are stored in a database of the MITRE
Corporation on behalf of the U.S. Department of Homeland Security. This Common
Vulnerabilities and Exposures database [10] records all known security gaps in
existing applications. By the year 2016, about 100,000 attacks on various
systems were recorded in this database. In addition, the MITRE Corporation
provides a database with an overview of all known
vulnerabilities in the Common Weakness Enumeration (CWE) database [11]. There
are currently about 1,000 different weaknesses in this database. In the CWE,
the weak spots are divided into different categories. A classification of the
attacks from the year 2015 leads to the distribution shown in Fig. 2. The most
common attacks listed in the CWE are so-called Denial of Service (DoS) attacks.
These make up 33% of all known attacks on today’s systems. Their goal is to
make the attacked system crash and thus destroy the functionality of the
system.

Fig. 2. Categorization of weaknesses according to CWE

More important than DoS attacks are attacks where an attacker can gain control
over the entire system. Buffer overflows, with 22%, and code execution, with
24%, have a special significance. With so-called buffer overflows, memory
areas are written with data sets that are too long, overwriting the following
data records in memory and thereby manipulating the contents of those
variables. For an overview of attacks by buffer overflows, see Foster in [12].
The principle of buffer overflows is also used in code execution, whereby not
only variables are overwritten, but the return address is redirected to
malicious, injected code. This not only influences the behavior by changing
the variables, but also takes control of the entire system and executes
malicious code.
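The effect can be modeled in a few lines of Python; this is a simulation of the memory layout, not a real native overflow, and all names are illustrative:

```python
# Simulated stack frame: an 8-byte buffer directly followed by a 2-byte
# "return address" (0xCAFE), as on a real call stack.
memory = bytearray(b"\x00" * 8 + b"\xca\xfe")

def write_unchecked(offset, data):
    memory[offset:offset + len(data)] = data  # no bounds check

def write_checked(offset, buf_size, data):
    if len(data) > buf_size:
        raise ValueError("input longer than buffer")
    memory[offset:offset + len(data)] = data

write_unchecked(0, b"A" * 10)  # 10 bytes into the 8-byte buffer...
print(bytes(memory[8:10]))     # → b'AA': the "return address" is overwritten
```

With `write_checked`, the same call raises a `ValueError` instead of silently corrupting the adjacent "return address"; this is exactly the kind of consistency check that is often omitted in embedded code for performance reasons.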
The root cause of the above attacks is usually badly implemented software. In
particular, consistently checking the memory bounds of dynamic variables can prevent
overflows in most cases. However, due to time and memory constraints in embedded
systems, this check is often omitted. One frequent cause of DoS attacks is division by
zero, which is not uniformly specified for microcontrollers and can therefore lead to
differing behavior or even program termination.
In addition to the above, undefined behavior often arises in software development when
dereferencing so-called null pointers that do not point to any memory, using memory or
objects after calling “free”, or reading from unallocated memory. In most cases, the
aforementioned problems can be avoided by consistent checks in the program, but these
checks are rarely implemented because of runtime and memory constraints.
Not all problems can be found by testing the independent modules individually.
Security weaknesses most often arise from the interaction of different modules;
therefore, the overall system needs to be tested as a whole.
90 A. Lauber and E. Sax

4 Interception of Software Running on Virtual ECUs

Instruction set simulators like Open Virtual Platforms (OVP) [13] can be used to model
a processor with the corresponding peripherals and run the cross-compiled application.
Running the cross-compiled application inside an instruction set simulator gives the
same behavior as on the target platform. The virtual prototyping of embedded systems
for OVP is described by Werner in [14]. Werner also compares OVP with other platforms
for virtual prototyping for embedded systems. He further explains in [15] the usage of
OVP for debugging cross-compiled applications to build a virtual test environment.
The Imperas binary interception tool described in [16] can make the simulation stop
the application and run the interception library at any point in time. Among others,
the virtual platform can be intercepted before each instruction is morphed, when
specific instructions are executed, and when a specific address range is read or written.
The interception technology is usually used for verification, analysis and profiling,
including detection of memory corruption, deadlocks, data races or memory usage. As
Imperas Software Limited writes in [16] this is especially useful when many different
data scenarios should be executed.
With the binary interception tool we can use our own library to examine the state of
the internal registers, instructions, memory, and other peripherals. Furthermore, the
simulated behavior can be replaced with a behavior defined in the interception
library. This means that if the interception library detects a specific behavior
during simulation, the corresponding instruction is either replaced or extended by the
one defined in the library.
The advantage of the novel framework with interception libraries over other debug
interfaces is that no additional code needs to be inserted into the application and no
special access to the processor is needed, i.e. there is no resource overhead in the
application and no additional instructions are executed. The application is
cross-compiled exactly as for the real hardware platform, without any additions.
Another advantage is that all parts of the interception technology run in parallel to
the simulation of the virtual platform.
As mentioned above we need to monitor the memory in order to find overflows and
the instructions to find zero divisions. Both can be done by running an interception
library in parallel to the main application.
An overview of the test framework can be found in Fig. 3. The platform for the virtual
processor will be described in a platform model file as described by Werner in [14]. The
virtual processor will consist of a processor with registers and memory for heap and
stack, local memory for code and variables, as well as peripherals. The interception
library will have direct access to these registers, memory, instructions, and peripherals.
The location of the variables inside the registers and memory will be configured in a
configuration file. Further, this file holds information about the intercepted
instructions and functions. We generate this file from information in the
application's source files: the source files are parsed and the variables are
detected. The supervision of instructions is done at run time on the disassembled
application code, searching for divisions and illegal memory accesses.
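
The paper does not specify the configuration format, so the following is only a hypothetical sketch of the idea: scan C source files for simple variable declarations and derive (name, size-in-bytes) entries for the interception configuration. The declaration pattern and type sizes are assumptions for illustration.

```python
import re

# Assumed type sizes for a 32-bit embedded target (illustrative only).
TYPE_SIZES = {"char": 1, "short": 2, "int": 4, "long": 4, "float": 4, "double": 8}

# Matches simple declarations such as "char buf[16];" or "int counter = 0;".
DECL_RE = re.compile(r"\b(char|short|int|long|float|double)\s+(\w+)(?:\[(\d+)\])?\s*[;=]")

def parse_variables(c_source):
    """Return {variable_name: size_in_bytes} for simple global declarations."""
    variables = {}
    for type_name, var_name, count in DECL_RE.findall(c_source):
        elements = int(count) if count else 1
        variables[var_name] = TYPE_SIZES[type_name] * elements
    return variables
```

For example, `parse_variables("char buf[16]; int counter = 0;")` yields `{"buf": 16, "counter": 4}`; a real parser would additionally have to resolve the addresses from the linker map.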

Fig. 3. Overview of security test-framework with virtual processor

4.1 Monitoring of Instructions

With the interception library [16] we can monitor all instructions at assembler level
and check for each of them whether it needs to be observed. The behavior of the
interception depends on the instruction: our novel approach intercepts only potential
vulnerabilities and directly executes all other instructions. For example, if we find
a division among the assembler instructions (either udiv for unsigned or sdiv for
signed division), the corresponding registers are checked for a zero divisor. If the
denominator is zero, the execution of this instruction is stopped and an error message
is displayed. If the instruction is not a division, the interception library is not
invoked.
To find potential vulnerabilities through zero divisions, the observation of
instructions can be implemented as static interceptions, because the relevant
instructions are known at compile time and are constant for all applications.
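
The division check described above can be sketched as follows. This is an illustrative Python model of the interception logic, not the actual Imperas API; the operand and register representation is an assumption.

```python
# Intercept ARM udiv/sdiv instructions and check the denominator register
# before letting the simulator execute the instruction.
DIV_MNEMONICS = {"udiv", "sdiv"}

def intercept_instruction(mnemonic, operands, registers):
    """Return an error message for a division by zero, else None.

    operands is (dest_reg, numerator_reg, denominator_reg); registers maps
    register names to their current values in the simulated processor.
    """
    if mnemonic not in DIV_MNEMONICS:
        return None  # all other instructions are executed directly
    denominator_reg = operands[2]
    if registers[denominator_reg] == 0:
        return "division by zero: %s %s" % (mnemonic, ", ".join(operands))
    return None
```

A call such as `intercept_instruction("sdiv", ("r0", "r1", "r2"), {"r1": 8, "r2": 0})` would stop the simulated instruction and report the error, while any non-division instruction passes through untouched.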

4.2 Monitoring of Memory Access

The same concept applies to the memory. Each memory access (read and write) is
monitored, and an error message is displayed if data is written outside the permitted
memory range. The address ranges of the variables are stored in the interception
configuration (see Fig. 3), and accesses to these ranges are observed. If a write
access crosses a variable's borders (buffer overflow), an error message is displayed.
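
A minimal sketch of such a memory watch, assuming a simple model in which each configured variable occupies one contiguous address range (names and interfaces are illustrative, not the framework's API):

```python
class MemoryWatcher:
    """Check simulated writes against the configured variable address ranges."""

    def __init__(self):
        self.ranges = {}  # variable name -> (first, last) monitored address

    def watch(self, name, start, size):
        self.ranges[name] = (start, start + size - 1)

    def check_write(self, address, length):
        """Return the variable whose border the write crosses, else None."""
        last = address + length - 1
        for name, (first, end) in self.ranges.items():
            if first <= address <= end and last > end:
                return name  # write starts inside the variable but runs past its end
        return None
```

After `watch("buf", 0x2000, 16)`, a 16-byte write starting at 0x2008 is reported as an overflow of `buf`, while a 16-byte write starting at 0x2000 stays within the border and passes.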

4.3 Heap and Stack Monitoring

For the heap and stack monitoring we need memory tracing, because the local and
dynamic variables are stored in local memory. Further, we need function tracing to
trigger the interception whenever a function is called and new variables are stored on
the stack or heap.
The local variables are pushed onto the stack; therefore, the instruction monitoring
needs to add them to the dynamic monitoring so that the interception library knows the
address and range of the new variables. The same is done for dynamically
allocated or deallocated memory inside the heap. This memory is usually allocated or
deallocated with malloc and free. Another observer detects write accesses to the
function return address in order to detect illegal code execution.
Both the dynamic memory observation and the observation of the return address need to
be done during runtime. Therefore, a dynamic part of the interception library is
necessary that can be extended by the simulation.
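
The dynamic part of the heap observation can be sketched as follows. The function names and the simple allocator model are assumptions for illustration: malloc/free calls in the simulated application extend or shrink the set of monitored blocks.

```python
class HeapMonitor:
    """Track dynamically allocated blocks in the simulated application."""

    def __init__(self):
        self.allocations = {}  # block start address -> block size

    def on_malloc(self, start, size):
        self.allocations[start] = size  # start observing the new block

    def on_free(self, start):
        self.allocations.pop(start, None)  # remove the corresponding entry

    def is_legal_access(self, address):
        """True if the address lies inside a currently allocated block."""
        return any(start <= address < start + size
                   for start, size in self.allocations.items())
```

An access to a block after `on_free` is reported as illegal, which models the detection of use-after-free errors described in Sect. 3.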

5 Virtual Instrumentation for Security Testing

The security testing is based on the cross-compiled code for the target platform,
i.e. the instruction order and the behavior are the same as on the real platform,
since no further or different optimization by other compilers is performed. Further,
the Executable and Linkable Format (ELF) file used for testing can be flashed to the
target device without any additional changes. Current state-of-the-art tests (see
above) focus on the source code without compiler optimization.
Even if the testing framework can check the source code using a static analysis before
cross-compiling and running the application on the virtual processor, we are not focusing
on this, since static code analysis is state of the art. The novel approach can even
be used to run the compiled application without having access to its source code,
i.e. black-box tests for security can be executed. Nevertheless, information about
functions and variables is needed in order to build the configuration file.
In the next step, after static code analysis, the application is checked for variables
and functions. The static variables and functions are added to the interception
configuration. With this information the interception library is built and passed to
the instruction set simulator. If one of the defined interceptions occurs, the
simulator stops the execution of the application and runs the functions provided by
the interception library.
The Imperas instruction set simulator is used to run the defined test cases and the
interception library. For this step a model of the target platform is needed. This should
include all necessary processors, memories, registers, and peripherals (see above). The
interception library stops the running cross-compiled application in the simulator at
every predefined interception. Further, if new memory is dynamically allocated, the
interception library is extended to observe this memory area as well. After
deallocation of the memory, the corresponding entry in the interception library is
removed.
Finally, the results of the simulation and the test process are presented for
documentation. The total workflow of the virtual instrumentation for security testing can be seen
in Fig. 4.
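
The final reporting step of this workflow can be sketched as follows; the event structure and field names are assumptions for illustration, not the framework's actual interfaces.

```python
# Security-relevant interception events are filtered out of the simulation
# trace and formatted for the documentation at the end of the workflow.
SECURITY_KINDS = {"buffer_overflow", "zero_division", "illegal_access"}

def collect_findings(events):
    """Keep only the security-relevant events raised during simulation."""
    return [e for e in events if e["kind"] in SECURITY_KINDS]

def format_report(findings):
    """One line per finding, naming the weakness and the instruction address."""
    return "\n".join("%s at 0x%x" % (e["kind"], e["address"]) for e in findings)
```

Ordinary instruction events pass through unreported; only the intercepted weaknesses end up in the documented test result.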

Fig. 4. Workflow of the virtual instrumentation for security testing

6 Conclusion and Future Work

In this paper we analyzed the different security weaknesses and derived from CWE that
the most common ones are buffer overflow, code execution and division by zero.
Based on this knowledge, we created a conceptual design for a security test framework
based on virtual instrumentation. We built an interception library that monitors the
memory and the instructions and reports security weaknesses when they occur.
Future work will investigate concepts to automate the tests using virtual platforms.
Further, the memory observer for the variables and the test cases should be generated
automatically. The interception library should be used to generate test cases
according to the interfaces and a weakness database (CWE). These test cases will be
based on the information about variables (including dynamic variables) and functions
from the application.
Further work will apply the framework to black-box testing, i.e. testing without any
knowledge of the source code. Especially the observation of dynamic variables has to
be researched.
Protection of shared memories and multi-processor systems can be tested as well. The
virtual framework will be extended for the use of multi-processor systems in the
future.
Acknowledgement. This publication was written in the framework of the Profilregion
Mobilitätssysteme Karlsruhe, which is funded by the Ministry of Science, Research and
the Arts in Baden-Württemberg.

References

1. Kramer, J., Hillenbrand, M., Müller-Glaser, K.D., Sax, E.: Connected efficiency–a paradigm
to evaluate energy efficiency in tactical vehicle-environments. In: Bargende, M., Reuss, H.C.,
Wiedemann, J. (eds.) 16. Internationales Stuttgarter Symposium. Proceedings, pp. 1451–
1463. Springer, Wiesbaden (2016). doi:10.1007/978-3-658-13255-2_107
2. Koscher, K., et al.: Experimental security analysis of a modern automobile. In: IEEE
Symposium on Security and Privacy, pp. 447–462 (2010)
3. Checkoway, S., et al.: Comprehensive experimental analyses of automotive attack surfaces.
In: USENIX Security Symposium (2011)
4. Bayer, S., Enderle, T., Oka, D.-K., Wolf, M.: Automotive security testing—the digital crash
test. In: Langheim, J. (ed.) Energy Consumption and Autonomous Driving. LNM, pp. 13–22.
Springer, Cham (2016). doi:10.1007/978-3-319-19818-7_2
5. Knechtel, H.: Methoden zur Umsetzung von Datensicherheit und Datenschutz im vernetzten
Steuergerät. ATZ Elektronik 10(1), 26–31 (2015)
6. Spillner, A., Linz, T.: Basiswissen Softwaretest: Aus- und Weiterbildung zum Certified
Tester; Foundation Level nach ISTQB-Standard, 4th edn. dpunkt.verlag (2010)
7. Radzkewycz, T.: Automotive networks can benefit from security. In: Connected Vehicle
Journal: Designing for Next-Generation Connected and Autonomous Vehicles (2016)
8. Wheatley, M.: Known vulnerabilities cause 44 percent of all data breaches. http://
breaches/. Accessed 31 Oct 2016
9. Symantec Corporation: Internet Security Threat Report. 2013 Trends, vol. 19 (2014)
10. MITRE Corporation: Common Vulnerabilities and Exposures (CVE).
Accessed 31 Oct 2016
11. MITRE Corporation: Common Weakness Enumeration (CWE).
Accessed 31 Oct 2016
12. Foster, J.C., Osipov, V., Bhalla, N.: Buffer Overflow Attacks: Detect, Exploit, Prevent.
Syngress Publishing Inc., Rockland (2005)
13. Imperas Software Limited: Open Virtual Platforms: The source of Fast Processor Models &
Platforms. Accessed 15 Dec 2016
14. Werner, S., et al.: Cloud-based design and virtual prototyping environment for embedded
systems. Int. J. Online Eng. (IJOE) 12(9), 52–60 (2016)
15. Werner, S., Lauber, A., Becker, J., Sax, E.: Cloud-based remote virtual prototyping platform
for embedded control applications: cloud-based infrastructure for large-scale embedded
hardware-related programming laboratories. In: Proceedings of 2016 13th International
Conference on Remote Engineering and Virtual Instrumentation (REV). IEEE (2016)
16. Imperas Software Limited: Imperas Binary Interception Technology: User Guide, no. V1.5.3
Virtual and Remote Laboratories
LABCONM: A Remote Lab for Metal Forming

Lucas B. Michels(&), Luan C. Casagrande, Vilson Gruber,

Lirio Schaeffer, and Roderval Marcelino

Florianópolis, Brazil

Abstract. This paper aims to describe the LABCONM, an educational laboratory that
provides remote access to a remote educational compression testing machine (MDTEC).
This laboratory was developed specifically to help in the teaching/learning process of
metal flow curves in the metal forming area. Two different types of analysis were
defined to validate the laboratory: the first one considers the teaching approach and
the second one the operation of the laboratory. In the technical analysis, the
researchers conducted 20 remotely operated tests, in which the quality and
repeatability of the data to demonstrate metal flow were verified. The data showed
sufficient quality and repeatability for the MDTEC to be used as an educational
experiment. In the pedagogical analysis, two classes attending the Metal Forming
course participated. In the group that did not access the LABCONM, 17% of the students
had an unsatisfactory result in the mathematical question. In the group that accessed
the LABCONM, 100% of the students had a satisfactory or excellent result in the same
question. Consequently, it was possible to conclude that the laboratory has an
influence especially for those students who have more difficulties in theoretical
learning.

1 Introduction

Conceptual learning in the metal forming area is complex and difficult because it
involves knowledge of symbols, equations, theories, principles and procedures. This
complexity is natural, because knowledge is basically an abstraction of reality, the
result of experiments, analyses, studies, and standards.
It is believed that observation, interaction, practice, and experimentation are edu-
cational practices that can complement and enhance the learning process in engineer-
ing, making theory more meaningful for the student. Without interaction, students are
passive and the learning process becomes slower [1]. Doubtless, experimentation
establishes a relationship between practice and theory [2]. For this reason, experiments
are essential in the teaching process, especially in engineering and experimental sci-
ences [3].
A solution to provide experimentation and interaction as a way to aid learning has
been the development of experiments and remote laboratories [4]. With this new

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_10
98 L.B. Michels et al.

concept of laboratory, it is possible to share the same equipment between users [3] and
geographically distant universities.
Considering the wide range of advantages and possibilities, research on remote
laboratories has grown increasingly in recent years. This trend aims to take advantage
of new resources and technologies to improve the technological education. Many of the
recent publications on remote laboratory are related to various engineering areas, such
as: “electrical and electronics”, “telecommunications” [5], “basic physics” [6],
“biomedicine” [7], “automation/robotics/classic control” [8, 9]; it is rare to find
something related to the study of mechanical properties of materials, as in [10–13],
and even rarer on plastic deformation, as in [14].
Despite current advances in Information and Communication Technologies (ICTs), there
are few publications about remote laboratories developed for the metal forming area
and the representation of phenomena linked to the plastic deformation of a metal. It
is believed that some factors are responsible for this situation, such as:
(a) Maintenance of mechanical parts of the remote experiment;
(b) Metal specimens production; (c) Creation of a replacement mechanism for the metal
specimens; (d) Use of higher precision and cost sensors; (e) Use of higher strength and
greater precision actuators; (f) High cost of maintenance. Despite these obstacles,
developing a remote laboratory in the metal forming area represents a big step for
research in remote experimentation, generating knowledge and innovation in the
teaching process in engineering courses related to this area.
This paper represents a continuation of the outcomes already published about the
LABCONM (Metal Forming Online Lab) research project. The initial
results were presented in [15, 16]. The LABCONM is an educational laboratory that
provides remote access to one remote educational compression testing machine. This
laboratory was developed specifically to help in the teaching/learning process of metal
flow curves in the metal forming area.
Unlike previous publications, this paper describes the LABCONM in more detail and
presents new outcomes based on applications of this lab with engineering students.

2 Theory About Metal Forming Area

To plan the production of a piece that will be formed, it is necessary to know basic
common aspects of metal forming. Without these aspects, it is not possible to predict or
calculate overall costs, amount of metal necessary, the total energy used in the process,
maximum strength, or total time used in the manufacturing process. The flow curve is
one of these fundamental aspects of metal forming, which in its complexity carries
theoretical and practical elements, including concepts, equations, symbols and
procedures that are indispensable for the designer.
Flow curves are one of the main ways to analyze the mechanical behavior of a
metal and are critical to design pieces using metal forming. These curves are defined
with data from the plastic zone (above the yield stress) of the stress-strain diagram and
LABCONM: A Remote Lab for Metal Forming Area 99

they can express the mechanical behavior of the metal during the plastic phase.
Deformation is a permanent change in the geometric shape of a metal that occurs after
applying a stress (σ) above the yield point. The deformation is visible because it
changes the height, length, and depth of the metal.
Mechanical tests of tension/compression/torsion are made on metal specimens to obtain
stress and deformation data for the metal. However, when the designer wants to do
calculations and simulations of the metal forming process, the graphical flow curve is
not appropriate; as a consequence, it is necessary to represent the same data as an
equation, Eq. (1):

kf = C · φ^n    (1)

where:
kf = yield stress [N/mm²];
φ = ln(h0/h1) = true strain [–];
C = resistance coefficient [N/mm²];
n = work hardening coefficient [–].
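
To illustrate how the measured data relate to Eq. (1), the following sketch converts one compression reading into a flow-curve point, assuming volume constancy of the specimen and taking φ = ln(h0/h) as the (positive) true strain in compression; the function names and sample values are illustrative, not the laboratory's actual software.

```python
import math

def flow_curve_point(force_n, height_mm, h0_mm, a0_mm2):
    """Convert one compression reading (force, current specimen height)
    into a flow-curve point (true strain phi, yield stress kf in N/mm^2).

    Volume constancy gives the current cross-section A = A0 * h0 / h,
    so kf = F / A grows as the specimen is compressed.
    """
    phi = math.log(h0_mm / height_mm)      # positive true strain in compression
    area_mm2 = a0_mm2 * h0_mm / height_mm  # current cross-section from volume constancy
    kf = force_n / area_mm2
    return phi, kf

def hollomon_kf(phi, c, n):
    """Eq. (1): kf = C * phi**n (resistance coefficient C, hardening exponent n)."""
    return c * phi ** n
```

For example, a specimen with h0 = 20 mm and A0 = 100 mm² compressed to 15 mm under 30 kN gives φ = ln(20/15) ≈ 0.29 and kf = 225 N/mm²; fitting C and n to a set of such points yields Eq. (1).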

3 LABCONM Description

The LABCONM is an educational laboratory that was created to support the theoretical
learning of metal forming with remote experiments. As shown in Fig. 1, LABCONM is
distributed in four main parts connected to the internet, which are:
• Learning Management System (LMS);
• Remote Experiments;
• Web Server;
• Device with internet access for the student;
The LABCONM central portal is a Learning Management System (LMS), which is
actually a web page. Remote experiments are physical experiments that are connected
to the Web Server and are managed by the LMS. It is important to note that, although
the “Student Device” is not an item developed for the laboratory, without it the
LABCONM would not be “complete” or “formed”.
In this version of the LABCONM, only one remote experiment (Remote Experiment 1) was
implemented, called the remote educational compression testing machine (MDTEC).

3.1 Remote Educational Compression Testing Machine (MDTEC)

The MDTEC (Remote Educational Compression Testing Machine) is capable of
generating flow curves remotely. As described in Fig. 1, the MDTEC is composed of
four main parts: (a) Compression Structure; (b) Hydraulic Unit; (c) Motor Control
Panel; (d) Control Panel and Data Processing. In addition to these described items,
the MDTEC uses a Web Server, which is a device shared with other experiments of the
LABCONM.

Fig. 1. Overview of the LABCONM

Compression Structure - MDTEC

This subsection details item “(a)” from Fig. 1. The Compression Structure (see
Fig. 2) is actually a set of electromechanical devices joined to form a system capable
of performing a compression test remotely.

Fig. 2. Compression structure

The main features of this equipment are: storage, positioning, measurement,
compression and disposal of metal specimens. All these features and processes are
automated and remotely operated through preprogrammed commands in the Raspberry Pi
minicomputer, managed and operated through the virtual control panel on the access page.
In the MDTEC, the operating speed vf depends on the machine characteristics and on the
resistance that the machine meets during the test, so it is impossible to test with a
constant strain rate (φ̇). The MDTEC performs the compression of the specimen and stores
about 8 readings per second (force and height variation of the metal specimen).
In the MDTEC, there is a series of elements essential for its operation (see Fig. 3),
which can be classified into three mechanisms according to their function:

Fig. 3. Remote educational compression testing machine detailing

• Compression Mechanism: It is composed of a hydraulic actuator, compression region
(base) and upper die. Its function is to deform the specimen;
• Measurement System: It is composed of a displacement sensor (potentiometric ruler
model) and force sensor (load cell model). The main function of these sensors is to
provide data about the compression process of the metal specimens, as well as assist
in determining the specimen height and in the positioning of the hydraulic cylinder.
• Feeding, positioning and disposing mechanism: This mechanism is composed of an
electromechanical actuation structure with belts, driven by a stepper motor.
The most important data from the experiment are the height variation Δh (mm) and the
force (N) applied to the metal specimen. They are the basis for the construction of
the flow curves.

3.2 Learning Management System

The Learning Management System (LMS) used in the LABCONM is the virtual part of
the laboratory (see Fig. 4). It is a web page developed for learning management, which
allows access to the laboratory from anywhere via the internet. The main menus are
detailed below:

Fig. 4. Overview of the metal forming remote laboratory webpage

• Schedule: This menu link serves to book the user's time slot for experimentation;
• Use: This menu displays a page with a summary of the student’s situation. It is a
report of the mandatory learning tasks and online exams proposed.
• Activities: On this web page there are several activities with videos, texts and
images related to the MDTEC experiment and the flow curves.
• Experiments: In the experiments menu the users will find submenus to access the
control panel of the experiments. The control panel is where the students
perform/control/monitor the experimentation process. Currently there is only the
panel of MDTEC (see Fig. 5).
• Scores: In this item, questionnaires are used by students to evaluate the laboratory
or to send the requested activities in the Activities Menu;

Fig. 5. View from a smartphone of the MDTEC virtual control panel

4 Methodology

Two different types of analysis were defined to validate the laboratory, considering
the teaching approach and the operation of the laboratory. In the technical analysis,
the researchers conducted 20 remotely operated tests, in which the quality and
repeatability of the data to demonstrate metal flow were verified. In the pedagogical
analysis, two
classes, which were attending Metal Forming course, participated. During four weeks,
LABCONM was available for one of these two groups to use the remote laboratory and
to complete the proposed activities related to the experiment. After this period, both
groups attended a calculus exam related to the course. The group that had the
opportunity to access the remote laboratory also completed a satisfaction questionnaire,
where the questions were about various aspects of the laboratory.

4.1 Technical Analysis

In the technical analysis, the results of the flow curves from the 20 trials were
plotted and overlaid using the yield stress and true strain data calculated for each
metal specimen, as shown in Fig. 6. The data showed sufficient quality and
repeatability, so it is possible to conclude that the MDTEC can be used as an
educational experiment.
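
A quantitative check of such overlaid trials could look like the following sketch, which computes the spread of the yield stress measured at one fixed true strain across repeated trials. The kf values in the usage example are made up for illustration, not the paper's measurements.

```python
import statistics

def repeatability(kf_values):
    """Mean and coefficient of variation (percent) of the yield stress
    measured at one fixed true strain across repeated trials."""
    mean = statistics.mean(kf_values)
    cv_percent = 100.0 * statistics.stdev(kf_values) / mean
    return mean, cv_percent
```

For instance, `repeatability([220.0, 224.0, 222.0, 226.0])` gives a mean of 223.0 N/mm² with a coefficient of variation of about 1.2%, i.e. closely overlapping curves.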

Fig. 6. Results of 20 trials in MDTEC

4.2 LABCONM Analysis by Students

A group of students that accessed the LABCONM answered a questionnaire composed of
objective questions, to be answered with scores from 0 to 10. This questionnaire
evaluated the laboratory and the experiment with the following questions: (a) What is
your satisfaction level in using the remote lab for the metal forming area?

(b) How easy was it to use and operate the LABCONM? (c) How was the experimentation
process? (d) How objective and understandable were the proposed questions? (e) How
much did the activities help to understand the content of the Metal Forming course?
The answers are presented in Fig. 7.

Fig. 7. Average evaluation scores from questionnaires

The mean values of all the answers presented in Fig. 7 were excellent, ranging between
8.8 and 9.33. The maximum standard deviation among the five questions was 1.9, for the
objectivity and understandability of the activities, which also achieved the lowest
average. This was somewhat expected, because the students were presented with a new
environment containing a lot of information. The idea was that they should attempt the
whole interpretation process alone and ask questions via email, thus stimulating the
exchange of information. It is believed that this difficulty was not a real issue for
the students, because they did not raise it in the questionnaire section for
complaints and suggestions.
The most important result to highlight in Fig. 7 is the average score of 9.07 obtained
by the item "How much did the activities help to understand the content of the Metal
Forming course". This score confirms the focus of the remote laboratory for the metal
forming area on aiding the learning process of Metal Forming theory, and it offers
good arguments to validate the LABCONM as an educational laboratory.
4.3 Validation Based on Exam Results

To perform the analysis and comparison between the two groups of students, first was
defined that the answers would be classified into three types according to the

Table 1. Results of the question about extrusion calculation

Group                        Unsatisfactory   Satisfactory    Excellent       Total  Average  Std. dev.
                             (below 50%)      (50% to 75%)    (75% to 100%)
Without access to LABCONM    9 (17%)          11 (21%)        32 (62%)        52     6.6      2.46
With access to LABCONM       0 (0%)           10 (56%)        8 (44%)         18     7.1      2.43

correctness of the answer: unsatisfactory, satisfactory and excellent. Based on this
criterion, the students were grouped into three subgroups to map the range of scores,
mean values and standard deviations, resulting in the data presented in Table 1. From
this table, it can be observed that the group that accessed the LABCONM had an average
score of 7.1, which is about 7% higher than the average grade of the group that did
not access the remote laboratory. It is important to explain that the class that did
not access the LABCONM is formed by students with better previous performance than the
group that accessed the LABCONM, as shown in Fig. 8. This figure shows the results of
the groups for the calculus question in the first semester exam. From this figure, it
is possible to conclude that 17% of the students did not achieve an average score
(above 50%).


[Bar chart: for each group (without and with LABCONM access), the share of students
rated Unsatisfactory (less than 50%), Satisfactory (between 50% and 75%) and Excellent
(between 75% and 100%).]

Fig. 8. Results from calculus question – 1st semester exam

However, Table 1, which presents the results of the second exam applied after the use
of the remote laboratory, shows that none of the students from the group that accessed
the LABCONM had an unsatisfactory performance. On the other hand, in the group that
did not access the LABCONM, 17% of the students had problems solving this kind of
mathematical question.

This indicates an influence of the laboratory especially for those students who have
more difficulties in theoretical learning. In a general analysis, it may therefore be
affirmed that the class with access to the laboratory had a larger proportion of
students at the satisfactory and excellent levels.

5 Final Considerations

The development of a remote laboratory in the metal forming area faces some
difficulties, such as financing, technology, and maintenance, which are obstacles to
the expansion of this field of study. However, this article described an innovative
remote laboratory with a Remote Educational Compression Testing Machine capable of
performing real compression tests on metal specimens remotely.
The LABCONM validation showed good quality and repeatability in the MDTEC
results, and as a consequence, it is possible to conclude that the machine can be used
for the generation of flow curves in an educational way. Furthermore, considering the
educational nature of the proposed experimental activity, it was verified from the
students' opinions that the laboratory has the potential to improve learning in the
Metal Forming course. This was shown most clearly in the exam scores of the student
group that had access to the LABCONM: despite the difficulties they had in calculus,
none of them obtained an unsatisfactory result. Therefore, based on these analyses,
the LABCONM is considered a great advance in the field of remote engineering and
virtual instrumentation and in supporting learning in the metal forming area.

References

1. Fabregas, E., et al.: Developing a remote laboratory for engineering education.
Comput. Educ. 57 (2011)
2. Pimentel, A.: A Teoria da Aprendizagem Experiencial como alicerce de estudos sobre
Desenvolvimento Profissional [Experiential learning theory as a foundation for studies on
professional development]. Estudos de Psicologia, 159–168 (2007)
3. Ikhlef, A., et al.: Online temperature control system. In: International Conference on Interactive
Mobile Communication Technologies and Learning (IMCL), pp. 75–78 (2014)
4. Valls, M.G., Val, P.B.: Usage of DDS data-centric middleware for remote monitoring and
control laboratories. IEEE Trans. Indus. Inf. 9(1), 567–574 (2013)
5. Vlasov, I., et al.: Global navigation satellite systems (GNSS) remote laboratory at BMSTU.
In: 2nd Experiment@ International Conference, Coimbra, pp. 64–67 (2013)
6. Ožvoldová, M., Špiláková, P., Tkac, L.: Archimedes’ principle remote experiment. In: 11th
International Conference on Remote Engineering and Virtual Instrumentation (REV),
Porto, Portugal (2014)
7. Barros, C., et al.: Remote physiological signals acquisition: didactic experiments. In: 11th
International Conference on Remote Engineering and Virtual Instrumentation (REV),
Porto, Portugal (2014)
LABCONM: A Remote Lab for Metal Forming Area 109

8. Ghorbel, H., et al.: Remote laboratory for control process practical course in eScience
project. In: International Conference on Interactive Mobile Communication Technologies
and Learning (IMCL), Thessaloniki, Greece (2014)
9. Ayodele, K.P., Inyang, I.A., Kehinde, L.O.: An iLab for teaching advanced logic concepts
with hardware descriptive languages. IEEE Trans. Educ. 58(4), 262–268 (2015)
10. Restivo, M.T., et al.: A Remote Laboratory in Engineering Measurement, vol. 56, no. 12,
pp. 4836–4843, December 2009
11. Marcelino, R., et al.: Extended immersive learning environment: a hybrid remote/virtual
laboratory. Int. J. Online Eng. (IJOE) 6, 46–51 (2010)
12. Michels, L.B., et al.: Using remote experimentation for study on engineering concepts
through a didactic press. In: 2nd Experiment@ International Conference - Exp’at, Coimbra,
pp. 191–193 (2013)
13. Nasri, I., Ennetta, R.: Determination of resonance frequency and estimation of damping ratio
for forced vibration modules using remote lab. In: International Conference on Interactive
Mobile Communication Technologies and Learning (IMCL), Thessaloniki, Greece
14. Terkowsky, C., et al.: Developing tele-operated laboratories for manufacturing engineering
education. In: International Conference on Remote Engineering and Virtual Instrumentation,
REV2010, Stockholm, pp. 60–70 (2010)
15. Michels, L.B., et al.: Educational compression testing machine for teleoperated teaching of
the metals flow curves. In: Exp’at 2015, Ponta Delgada (2015)
16. Michels, L.B., et al.: Remote compression test machine for experimental teaching of
mechanical forming. Int. J. Online Eng. 12(04), 20–22 (2016)
A Virtual Proctor with Biometric Authentication
for Facilitating Distance Education

Zhou Zhang, El-Sayed Aziz, Sven Esche, and Constantin Chassapis

Department of Mechanical Engineering, Stevens Institute of Technology, Hoboken, NJ, USA


Abstract. The lack of efficient and reliable proctoring for tests, examinations
and laboratory exercises is slowing down the adoption of distance education. At
present, the most popular solution is to arrange for proctors to supervise the
students through a surveillance camera system. This method exhibits two short‐
comings. The cost for setting up the surveillance system is high and the proctoring
process is laborious and tedious. In order to overcome these shortcomings, some
proctoring software that identifies and monitors student behavior during educa‐
tional activities has been developed. However, these software solutions exhibit
certain limitations: (i) They impose more severe restrictions on the students than
a human proctor would. The students have to sit upright and remain directly in
front of their webcams at all times. (ii) The reliability of these software systems
highly depends on the initial conditions under which the educational activity is
started. For example, changes in the lighting conditions can cause erroneous
identification of suspicious behaviors.
In order to improve the usability and to overcome the shortcomings of the
existing remote proctoring methods, a virtual proctor (VP) with biometric authen‐
tication and facial tracking functionality is proposed here. In this paper, a two-
stage approach (facial detection and facial recognition) for designing the VP is
introduced. Then, an innovative method to crop out the face region from images
based on facial detection is presented. After that, in order to render the usage of
the VP more comfortable to the students, in addition to an eigenface-based facial
recognition algorithm, a modified facial recognition method based on a real-time
stereo matching algorithm is employed to track the students’ movements. Then,
the VP identifies suspicious student behaviors that may represent cheating
attempts. By employing a combination of eigenface-based facial recognition and
real-time stereo matching, the students can move forward, backward, left, right
and can rotate their head in a larger range. In addition, the modified algorithm
used here is robust to changes in lighting, thus decreasing the possibility of false
identification of suspicious behaviors.

Keywords: Distance education · Virtual proctor · Face detection · Facial recognition · Stereo matching

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_11

1 Introduction

The distance education market keeps growing rapidly [1]. While several research threads
(i.e. on real-time creation of virtual environments for virtual laboratories [2, 3], augmen‐
tation of virtual laboratories [4], creation of smart sensor networks [5], etc.) have
contributed to the continued adoption of distance education approaches, the lack of
efficient and reliable proctoring is slowing this adoption process down.
At present, the most popular solutions in distance education for monitoring an
experiment or an examination are human proctors. Human proctors used in distance
education can be teaching assistants, instructors, laboratory administrators and faculty
members. There are also companies that provide the service of monitoring examinations
from a distance (e.g. ProctorU [6]). In most remote proctoring cases, the students take
examinations and perform experiments on a computer and the proctor(s) watch(es) them
from another computer through video cameras. The human proctors must monitor a
screen throughout the entire process. The basic requirement for operating a remote
proctor is that it needs a remote surveillance camera system mounted at the student’s
site. The advantage of this method is that it is similar to traditional classroom education
and therefore provides fewer challenges than technology-assisted methods such as VPs.
However, this method also has two shortcomings. One disadvantage is that the opera‐
tional costs are high since such proctoring services currently usually charge over $60
per student per course while the cost for setting up the surveillance system is high.
Another disadvantage is that the proctoring process is laborious and tedious.
With the further development of computer vision technology, VPs appeared. VPs
are integrated software-hardware solutions that have the potential to contribute to
bringing academic integrity to distance education. They were enabled by the prolifera‐
tion of the high-speed Internet and advanced computer peripherals. They first perform
the authentication of the students by scanning either their faces [7] or their fingerprints
[8]. Then, a camera monitors the environment and/or a microphone records the sounds
within it. Virtual proctoring software used in distance education includes Remote
Proctor Pro [9], Instant-InTM Proctor [10], Proctortrack [7], Proctorfree [9] and
Securexam Remote Proctor [11]. Virtual proctoring has three advantages over human
proctors. First is its low fixed cost. The students only need to set up a webcam and install
the VP software which then performs the authentication of the student and the proctoring
of the educational activity. Typically, the cost of the VP (including webcam, microphone
and software kit) will not exceed $15 per student. The second advantage of virtual proc‐
toring lies in its convenience. There is no need for human proctors, and thus the educa‐
tional activity to be proctored can take place at anytime and anywhere. The third
advantage is in the accurate authentication of the students. The utilization of biometric
technologies enables the accurate recognition of the students, thus ensuring a reliable
authentication [10]. It should be noted that virtual proctoring is still evolving and current
systems often attract complaints from the students, mostly because of two shortcomings
exhibited by these systems. First, they impose more severe restrictions on the
students than a human proctor would. The students have to sit upright and remain directly
in front of their webcams at all times. Second, the reliability of the VP highly depends
on the initial conditions under which the proctoring is started. For example, changes in
the lighting conditions can cause mistakes in the verification of suspicious behav‐
iors [12].
In order to improve the usability and to overcome the shortcomings of the existing
remote proctoring methods, a VP with biometric authentication and facial tracking
functionality is proposed here. This VP is designed to authenticate the students and
capture suspicious behaviors based on facial recognition and facial tracking. The work‐
flow of this VP is depicted in Fig. 1 and is composed of two main parts: authentication
and supervision. In addition, there is a database which stores the enrolled students’ face
templates indexed with their campus ID. When they use this VP, the students are first
required to scan their face using a webcam. Second, the scanned frame is processed, and
the part of the frame containing the face is cropped out. Third, the face is then compared
with the face template that was stored in the face database and can be retrieved via the
student’s campus ID. If the face and the template match, the student is authen‐
ticated and can continue the educational activity. Otherwise, the student is logged out.
After authentication, the frame used for authentication is stored in a newly allocated
memory address and is taken as the new template. Then, the subsequent matching is
based on this new template instead of the template stored in the face database. Then, the
educational activities are monitored by the webcam. During the monitoring period, the
VP samples the live video of the student with a sample rate of 30 frames per second. If
the mismatching percentage between the sampled face and the new template exceeds a
pre-configured threshold value, a suspicious behavior is identified and a video clip is
recorded, which is then used for further verification by the instructor of the examinations
or experiments.

Fig. 1. Workflow of virtual proctor based on facial recognition
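The authenticate-then-monitor workflow of Fig. 1 can be sketched as follows. This is a minimal illustration, not the authors' implementation; `match_score`, `proctor_session` and the 25% threshold are assumed names and values, and frames are abstracted as flat lists.

```python
def match_score(frame, template):
    """Return a mismatch percentage between a face frame and a template.
    Placeholder comparison: fraction of positions whose values differ."""
    diffs = sum(1 for a, b in zip(frame, template) if a != b)
    return 100.0 * diffs / max(len(template), 1)

def proctor_session(frames, stored_template, threshold=25.0):
    """Authenticate the first frame against the stored template, then monitor
    every subsequent frame against the frame used for authentication
    (the 'new template' described in the text)."""
    if not frames or match_score(frames[0], stored_template) > threshold:
        return "logged out", []
    new_template = frames[0]  # re-baseline on the authenticated frame
    flagged = [i for i, f in enumerate(frames[1:], start=1)
               if match_score(f, new_template) > threshold]
    # flagged frame indices would trigger recording of a video clip
    return "authenticated", flagged
```

In the real system the sampling runs at 30 frames per second and `match_score` is the facial-recognition comparison described in Sect. 2.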

2 Design of Virtual Proctor Based on Facial Recognition

2.1 Overview of Proposed Virtual Proctor

The proposed VP was designed based on facial recognition techniques. In order to reduce
the computational cost of the facial recognition while keeping its reliability, the process
was divided into two stages. The first stage is the detection of the face in a sampled
frame of the student’s live video. Once a face has been detected, the area of the detected
face is cropped out from that frame and used for the following facial recognition. The
second stage is the recognition of the detected face. The cropped face area is compared
with the face template as illustrated in Fig. 1. Because the cropped face area is much
smaller than the whole image area, the computational cost of recognizing the face is
reduced considerably.
Below, the reason why facial recognition was selected instead of other biometric
methods is explained first. Following that, various facial detection algorithms are
discussed, including the algorithm selected here and the method employed to crop out
the face area. Subsequently, a modified facial recognition algorithm based on stereo
matching is introduced that overcomes some of the shortcomings of other facial recog‐
nition algorithms based on template matching or eigenfaces. Finally, the results of some
benchmarks are presented to confirm that the proposed facial recognition algorithm is reliable.

2.2 Advantages of Facial Recognition for Virtual Proctor

A VP should have the functions of both authentication and real-time monitoring. In the
authentication process, the method used to verify the students’ identity can be based on
their biometric information (such as face snapshot [10], fingerprint [8], palm print [13],
hand geometry [14], iris [15] and/or retina [16]). Following the authentication, the
students’ activities are monitored by consecutively sampling the webcam video.
Compared with other biometrics-based methods, the facial recognition method
employed here has three advantages:
• The hardware is available and affordable. A common webcam, instead of special
biometrics data readers, can meet the hardware requirements for real-time facial
recognition and tracking.
• The algorithms used to implement facial recognition are much simpler than those
used in biometrics-based methods, thus allowing for a higher sampling frequency.
The features used for facial recognition are so notable that they can be identified very
easily. Therefore, the algorithms for the facial recognition are more robust and
simpler than other biometrics-based algorithms.
• Facial recognition is more practical than iris or retina tracking. Facial recognition is
macroscopic in scale while iris recognition and retina scanning are based on micro‐
cosmic features which have strict requirements related to the distance between the
scanners and the eyes.
Based on the above discussion, facial recognition was used to design the proposed VP.

2.3 Facial Detection

2.3.1 Selection of Facial Detection Algorithm
Facial detection is the first step that precedes facial recognition. In fact, there are many
facial detection algorithms with different complexities. The most common methods in
facial detection include [17]:
• Detecting faces in images with a controlled background. The most common approach
in this method is to use the green screen algorithm [18] to crop out the faces. Although
this method is the simplest one, it is not practical to employ it in VPs because one
cannot expect the students to provide a green background.
• Detecting faces by color. This method uses a typical skin color to find face segments.
Obviously, it is not robust when the environment lighting condition is changed. In
addition, it is not universally effective for all kinds of skin colors.
• Finding faces by motion. This method assumes that the face is the only moving object
in consecutively acquired images. Thus, it is not effective in scenarios where there
are other moving objects in the background.
• Finding faces in unconstrained scenes. This method removes the constraints imposed
on the background (for example an intended green background) or the face itself (for
example the markers on faces). Hence, it represents a general and convenient method.
In addition, it can be further divided into tracking based on models (e.g. model-based
facial detection [19], edge-orientation matching based facial detection [20], Hausdorff
distance facial detection [21]) and weak classifier cascades (e.g. boosting classifier
cascades [22], asymmetric AdaBoost and detector cascades [23]).
Obviously, unconstrained methods are more appropriate for VPs. Tracking based on
models is not robust because of the lack of generalization in the definition of human
facial expressions [24]. On the other hand, facial detection using weak classifier cascades
is based on the analysis of human expressions, and hence, it is more general and robust.
The algorithms used here represent a modification of weak classifier cascades.

2.3.2 Innovative Method to Crop Out Face Based on Facial Detection

A basic cascade is a degenerate decision tree. The training process is implemented by
going through a sequence of weak classifiers expressed as functions (h1, h2, …, hn) with
binary outputs (true = 1 and false = 0) as illustrated in Fig. 2 [25]. The set of training
samples X1 is sent to classifier h1, and a large number of samples which make the output
of h1 equal to zero are rejected. Subsequently, the remaining samples are sent to h2, and
so on. After n stages, the number of samples is significantly smaller. The new classifier
is composed of (h1, h2, …, hn). Then, the remaining samples can be taken as the input of
other cascade processing or another detection system. After the training process
described above, a series of strong and accurate classifiers was obtained. For facial
detection, Haar features were used to train the classifier [26, 27]. In addition, the
Adaboost (adaptive boosting) algorithm was employed to find the best threshold for the
Haar features. For convenience reasons, the pre-trained classifier from OpenCV [28]
was used to implement the facial detection. More details can be found elsewhere [29].

Fig. 2. Schematic depiction of detection cascade
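The stage-wise rejection of Fig. 2 can be sketched as a minimal cascade. The toy classifiers `h1` and `h2` below merely stand in for Haar-feature stages with AdaBoost-chosen thresholds; all names are illustrative.

```python
def run_cascade(samples, classifiers):
    """Pass samples through (h1, h2, ..., hn); a sample is rejected at the
    first stage that outputs 0 and survives only if every stage accepts it."""
    survivors = samples
    for h in classifiers:
        survivors = [s for s in survivors if h(s) == 1]  # stage-wise rejection
    return survivors

# Toy weak classifiers with binary outputs (true = 1, false = 0):
h1 = lambda s: 1 if s > 2 else 0       # stand-in for a Haar-feature test
h2 = lambda s: 1 if s % 2 == 0 else 0  # stand-in for a second stage
```

The composed classifier (h1, h2) is stronger than either stage alone, mirroring how the trained cascade accumulates accuracy across stages.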

The process described above only finds an estimated area of the faces. It renders the
recognition process difficult since the information provided by the face area is insuffi‐
cient for the subsequent facial recognition. In fact, the estimated area of the faces results
in the loss of the entire background and part of the outline of the face. In order to
compensate for the loss of information and to facilitate the following facial recognition
process, a modification for the facial detection algorithms based on the localization of
the mouth and eyes was implemented. First, the coordinates of the eyes are set as
E1(x1, y1) and E2(x2, y2), and the coordinates of the mouth are set as M(x3, y3). All
coordinates represent the center of the area of the eyes and mouth. The cross product of
vectors E1E2 and E1M is positive if M is above the line E1E2. Based on the golden ratio
of the human face [30], the face area forms a golden rectangle with the eyes at its
midpoint. Then, the vertical ratio equals the distance between the pupils and the mouth
in relation to the distance from the hairline to the chin, i.e. E1M/HC = 0.36. The horizontal
ratio equals the distance between the pupils in relation to the width of the face, i.e.
E1E2/LR = 0.46 (see Fig. 3 [30, 31]). The obtained rectangle should be LR × HC, but
the rectangle actually used to facilitate the face recognition is increased by 10 pixels in
each direction in order to avoid loss of the facial information. The part of the code based
on the cascade of OpenCV is listed in Fig. 4.

Fig. 3. Golden ratio of human face

Fig. 4. Part of code used to detect face, eyes and mouth
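While Fig. 4 lists the OpenCV-based detection code, the geometric part of the cropping step can be sketched on its own. The function below is an illustrative assumption built from the ratios quoted above (|E1M|/|HC| = 0.36, |E1E2|/|LR| = 0.46) and the 10-pixel padding; it is not the authors' code.

```python
import math

def crop_rectangle(e1, e2, m, pad=10):
    """Estimate the face box from eye centers e1, e2 and mouth center m.
    Coordinates are (x, y) with y growing downward (image convention).
    Returns (left, top, width, height) padded by `pad` pixels per side."""
    # Cross product of E1E2 and E1M: with y downward, a mouth below the
    # eye line yields a positive z-component (the sanity check in the text).
    cross = (e2[0] - e1[0]) * (m[1] - e1[1]) - (e2[1] - e1[1]) * (m[0] - e1[0])
    assert cross > 0, "mouth must lie on the expected side of the eye line"
    d_eyes = math.dist(e1, e2)
    cx, cy = (e1[0] + e2[0]) / 2, (e1[1] + e2[1]) / 2
    d_mouth = math.dist((cx, cy), m)        # pupils-to-mouth distance
    width, height = d_eyes / 0.46, d_mouth / 0.36   # golden-ratio estimates
    # The eyes sit at the vertical midpoint of the golden rectangle.
    left, top = cx - width / 2 - pad, cy - height / 2 - pad
    return (left, top, width + 2 * pad, height + 2 * pad)
```

In the full pipeline the eye and mouth centers would come from the OpenCV cascade detections shown in Fig. 4.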

2.4 Facial Recognition

2.4.1 Limitations of Template Matching and Eigenfaces in Facial Recognition

Facial recognition methods for images can be divided into feature-based methods [32]
and holistic methods [33]. Feature-based methods lead to robust results, but they make
the automatic detection of the features difficult to achieve. Obviously, these methods
are inappropriate for VPs. Holistic methods have the advantage that they concentrate
on limited regions or points of interest without distorting the information of the
images [34]. Their shortcoming is the hypothesis of equal importance of all pixels
in the image. These methods are not only costly but also sensitive to the relationship
between the training samples and the test data, to changes of the pose and to the
illumination conditions. Typical holistic methods include template matching, eigenfaces,
eigenfeatures, the combination of eigenfaces and eigenfeatures [35] and 2D matching.
In the template matching algorithm, the selected patch that is taken as the template
traverses the target image. Then, an error function is defined as:

R(x, y) = f [T(x′ , y′ ), I(x + x′ , y + y′ )] (1)

where R is the resulting error, T is the template, I is the target image, (x, y) are the
coordinates of the image in pixels, and (x′, y′) are the coordinates of the template in
pixels. Different error functions can be specified depending on the prevailing conditions.
After comparison between the template and the target image, the best matches can be
found as global minima or maxima [36].
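As a concrete instance of Eq. (1), the sketch below uses the sum of absolute differences as the error function and returns the global minimum. It is an illustrative NumPy implementation, not the authors' code; the exhaustive double loop is kept for clarity rather than speed.

```python
import numpy as np

def template_match_sad(image, template):
    """Slide `template` over `image`, evaluating R(x, y) as the sum of
    absolute differences; return the best (x, y) offset and its error."""
    H, W = image.shape
    h, w = template.shape
    best, best_xy = None, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            r = np.abs(image[y:y + h, x:x + w] - template).sum()
            if best is None or r < best:   # keep the global minimum
                best, best_xy = r, (x, y)
    return best_xy, best
```

With a correlation-style error function the best match would instead be a global maximum, as noted in the text.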
The eigenface method is an efficient approach for recognizing a face [37]. A high
recognition rate can be achieved with a low dimension d of the eigenvector space, since
the recognition rates stabilize once the dimension of the eigenvector space reaches 8.

In order to identify the eigenvectors, a principal component analysis was used to find
the directions with the greatest variances of the components of a given dataset. These
variances are called principal components (and are also the eigenvalues associated with
the eigenvectors used in the eigenfaces). Then, a high-dimensional dataset is described
by such a series of uncorrelated variables. The algorithm can be described as follows.
Let X = {x1 , x2 , … , xn }, xi ∈ Rd be a random vector wherein xi are the observations.
The expected value μ of the observations is:

μ = (1/n) Σ_{i=1}^{n} x_i (2)

The covariance matrix S can be expressed as:

S = (1/n) Σ_{i=1}^{n} (x_i − μ)(x_i − μ)^T (3)

The eigenvalues λi and eigenvectors νi of S are defined by:

Svi = 𝜆i vi , i = 1, 2, … , n (4)

If there are k principal components and the corresponding eigenvectors are labelled
in descending order based on the values of the principal components, then the k principal
components of the observed vector x are given by:

y = W T (x − 𝜇), W = {v1 , v2 , … , vk } (5)

The reconstruction of the eigenvectors from the principal component analysis (PCA)
is given by:

x = Wy + 𝜇 (6)
Following the procedures outlined above, three more steps realize facial recognition.
In the first step, all training samples are projected into the PCA subspace composed of
eigenvectors. In the second step, the query image (i.e. the target image that will be
identified) is projected into the PCA subspace. In the last step, the nearest neighbor
between the projected training images (i.e. the former training samples) and the projected
query image is determined.
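The PCA computation of Eqs. (2)-(5) and the three recognition steps listed above can be sketched in NumPy as follows. Function names and the Euclidean nearest-neighbour distance are assumptions; training images are flattened into rows of X.

```python
import numpy as np

def fit_eigenfaces(X, k):
    """Compute the mean mu (Eq. 2), the covariance S (Eq. 3), and the k
    eigenvectors W with the largest eigenvalues (Eqs. 4-5)."""
    mu = X.mean(axis=0)
    C = X - mu
    S = C.T @ C / len(X)                      # covariance matrix
    vals, vecs = np.linalg.eigh(S)            # eigenvalues in ascending order
    W = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k principal directions
    return mu, W

def recognize(query, X, labels, mu, W):
    """Project training faces and the query into the PCA subspace (y = W^T(x - mu))
    and return the label of the nearest projected neighbour."""
    Y = (X - mu) @ W
    yq = (query - mu) @ W
    return labels[np.argmin(np.linalg.norm(Y - yq, axis=1))]
```

For real face images the covariance trick (eigendecomposing C·C^T instead of C^T·C) would be used, since the pixel dimension far exceeds the number of samples.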
Template matching is reliable and simple when the contexts of the images are
constrained. The eigenface method is designed based on a generalization of the faces,
and therefore, it is robust and accurate under conditions that vary only mildly.
Unfortunately, both the template matching method and the eigenface method are
vulnerable to changes in the environment and pose variations of the face. Therefore, the
method of facial recognition employed here is a modification of stereo matching based
on the facial detection results.

2.4.2 Facial-Detection-Based Facial Recognition with Stereo Matching

The proposed method used to recognize and track faces is based on stereo matching.
Stereo matching refers to comparing two images taken by nearby cameras and
attempting to map every pixel in one image to the corresponding location in the other
image. The proposed method can be described as follows:
• Detect the facial, eye and mouth area in the captured frame
• Crop out the facial area
• Mark the eyes, mouth as landmarks
• Rectify the captured frame based on the landmarks both in the template and the
captured frame
• Compare the template stored in the face database to the rectified frame (i.e. the frame
with the same row coordinates as the corresponding points in the template) using the
stereo matching algorithm
• Compute the stereo correspondence, which illustrates the relationship of the points
between the pair of images, and the matching cost, which measures the similarity of
the corresponding points
• Identify the students by a pre-set threshold used to control the matching cost; the
value of this threshold is selected according to the desired accuracy.
The core of these procedures is the stereo matching algorithm. The basic approach of
the matching algorithm can be described as follows:
The template image and the captured images are expressed in grayscale instead of
in R, G, B values. Given the intensity I(x, y) of a point in the template and the intensity
I′(x, y) of the assumed corresponding point in the captured image, the absolute intensity
disparity d(x, y) of the two points can be computed as follows:

d(x, y) = ‖I(x, y) − I′(x, y)‖ (7)

Then, the sum of the absolute intensity differences (SAD) of the intensity in the
template and captured image is:


SAD(d) = Σ_{j=−W}^{W} Σ_{i=−W}^{W} ‖I(x + i, y + j) − I′(x + i + d, y + j)‖ (8)

If the SAD is used directly to obtain the disparity maps (which refer to the apparent
pixel difference or motion between a pair of stereo images), the noise in the disparity
maps is very large since the signal-to-noise ratio is too low. In order to optimize the
stereo matching accuracy, a box filter based on the cross-correlation in the window areas
(of size 2W × 2W) around the landmarks is used:


SC(x, y, d) = Σ_{j=−W}^{W} Σ_{i=−W}^{W} [I(x + i, y + j) − I′(x + i + d, y + j)] (9)

SC(x, y + 1, d) = Σ_{j=−W+1}^{W+1} Σ_{i=−W}^{W} [I(x + i, y + j) − I′(x + i + d, y + j)] (10)

Denoting the sum of a row in the window as:

AC(x, y, d, j) = Σ_{i=−W}^{W} [I(x + i, y + j) − I′(x + i + d, y + j)] (11)

Then, the cross-correlation in a 2W × 2W window centered at point (x, y) becomes:

SC (x, y + 1, d) = SC (x, y, d) + AC (x, y, d, W + 1) − AC (x, y, d, −W) (12)

Here, d is the possible offset of the actual location in the captured image (i.e. the
lag in the definition of the cross-correlation).
In Fig. 5, the comparison of the disparity maps obtained with SAD and SAD with
window filter is depicted. It can be seen that the noise can be reduced efficiently with
the window filter.

Fig. 5. Comparison of SAD and SAD with window filter
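The incremental update of Eq. (12) can be checked against direct recomputation of the window sum. The sketch below is illustrative, not the authors' code; it uses absolute differences (as in the SAD of Eq. 8), since the add-one-row/drop-one-row recurrence holds for any per-row aggregate.

```python
import numpy as np

def sad_window(I, Ip, x, y, d, W):
    """Direct window sum over the rows y-W..y+W and columns x-W..x+W,
    comparing I against Ip shifted horizontally by disparity d (cf. Eq. 8)."""
    return sum(abs(float(I[y + j, x + i]) - float(Ip[y + j, x + i + d]))
               for j in range(-W, W + 1) for i in range(-W, W + 1))

def row_sum(I, Ip, x, y, d, W, j):
    """AC(x, y, d, j): contribution of the single window row y+j (cf. Eq. 11)."""
    return sum(abs(float(I[y + j, x + i]) - float(Ip[y + j, x + i + d]))
               for i in range(-W, W + 1))

def sad_next_row(I, Ip, x, y, d, W, prev):
    """Slide the window from y to y+1 as in Eq. 12: add row W+1 and drop
    row -W instead of recomputing the whole (2W+1)-row window."""
    return prev + row_sum(I, Ip, x, y, d, W, W + 1) - row_sum(I, Ip, x, y, d, W, -W)
```

This row-reuse is what makes the box filter cheap: each step costs one added and one removed row rather than a full window recomputation.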

After obtaining the disparity between the template image and the captured image,
the facial recognition of the users can be performed.
In order to test the proposed method used to recognize the students, a benchmark
analysis was conducted which compares the proposed method with the template
matching and eigenface methods. In this benchmark, “the Sheffield (previously UMIST)
face database” [38, 39] was used to test the performance of different methods with
respect to the pose translation. 20 sample sets from this database were tested from
different view points (0 to 90 in 10 intervals). The frontal view was defined as the 0
viewpoint, and it was taken as the template. The test results are shown in Fig. 6. It is
seen that the stereo matching method provides higher reliability than the other two
methods when the pose translation is large.

Fig. 6. Template matching, eigenfaces and stereo matching with respect to pose translation

In order to test the reliability of the proposed method under variable illumination
conditions, a benchmark analysis with the “extended Yale face database B” [40, 41] was
conducted. The frontal faces of 15 individuals under 50 different illumination conditions
were examined. The results of the test are illustrated in Fig. 7. The eigenface method is
better than the template matching method, the reliability of which is critically affected

Fig. 7. Template matching, eigenfaces and stereo matching under various illumination conditions

by the illumination conditions. In contrast, the stereo matching method ensures a high
reliability even under poor illumination conditions.
As mentioned above, the final implementation of the stereo matching method focuses
on three areas: two eye-centered areas and one mouth-centered area. Decreasing the
sizes of the areas used for stereo matching not only increases the efficiency of the
stereo matching algorithm but also improves the reliability of the facial recognition. In
addition, ‘C++ Accelerated Massive Parallelism’ (C++ AMP) [42] was used in the implementation
of the stereo matching algorithm, and therefore the speed of the execution of the program
was increased significantly.

3 Definition of Suspicious Behaviors and Test Results

It is difficult to define all possible suspicious behaviors that may represent cheating
attempts. Therefore, a small set of such behaviors was defined in the pilot implementa‐
tion of the proposed VP.
In order to understand this definition, a coordinate system was chosen as depicted in
Fig. 8. First, rotations of the head about the Z axis are considered normal. Suspicious
behaviors of “rotating head” correspond to rotations about either the X or Y axes.
Second, “moving relative to webcam” corresponding to a translation along the X, Y or
Z directions is also suspicious. The suspicious behavior of “rotating head” is judged by
the matching percentage between the captured frame and the template stored in the face
database. The essence of this method is that the face in frontal view is taken as the facial
recognition object. Thus, rotations about the X and Y axes generate obvious differences
between the captured frame and the template. In addition, these differences cannot be
eliminated by face alignment. Therefore, the rotation angle can be estimated according
to the face matching percentage. The calculation of translations is much easier than that
of rotations. The method used here is to track the location of the face and the size of the
face area. Then, the relative location of the face can be determined.
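The translation check described above can be sketched as follows: X/Y motion is read from the shift of the face-box center, and Z motion from the change in the detected face size. The pixels-to-centimeter scale and the nominal 50 cm camera distance are illustrative assumptions, and the 20 cm limit mirrors the first criterion in Table 1.

```python
def translation_cm(box0, box1, cm_per_px=0.1):
    """Boxes are (cx, cy, side) in pixels: face-box center and face size.
    Returns (dx, dy, dz) in cm; dz is approximated from the relative change
    in face size, assuming a ~50 cm nominal camera distance."""
    dx = (box1[0] - box0[0]) * cm_per_px
    dy = (box1[1] - box0[1]) * cm_per_px
    dz = (box0[2] / box1[2] - 1.0) * 50.0  # shrinking face => moving away
    return dx, dy, dz

def is_suspicious_translation(box0, box1, limit_cm=20.0):
    """Flag the behavior when any translation component exceeds the limit."""
    return any(abs(t) > limit_cm for t in translation_cm(box0, box1))
```

The rotation criterion would be handled separately, via the face-matching percentage as described in the text.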

Fig. 8. Definition of suspicious behaviors

In order to evaluate the performance of the VP, 50 intentional cheating attempts per
criterion were tested. The test results corresponding to different criteria are listed in
Table 1. These results show that the proposed VP is reliable with respect to pose trans‐
lations and illumination changes.

Table 1. Test results corresponding to different criteria based on 50 cheating attempts

Head rotation and translation    ∣Rθ∣ ≤ 10    ∣Rθ∣ ≤ 30    ∣Rθ∣ ≤ 50    ∣Rθ∣ ≤ 60    ∣Rθ∣ > 70
(rotation Rθ in deg,             ∣Td∣ ≤ 20    ∣Td∣ ≤ 40    ∣Td∣ ≤ 60    ∣Td∣ ≤ 70    any Td
translation Td in cm)
Accuracy                         94%          ≤87%         ≤76%         ≤54%         ≤30%

For further assessment of the proposed algorithms used in the VP, a pilot test with
three volunteers was conducted. The details can be found elsewhere [43]. The difference
between this pilot test and the prior experiment involving 50 cheating attempts is that
the illumination conditions were changed while the volunteers were asked to bow their
head. Despite the changing conditions, the VP based on the proposed algorithms worked
well with respect to recognizing and tracking the users.

4 Conclusions and Future Work

In this paper, a VP based on facial recognition was introduced. In order to overcome the
shortcomings of existing facial recognition methods, a stereo matching method was
proposed to improve the reliability of the facial recognition. In order to evaluate the
reliability of this method, two benchmark analyses were conducted. The first analysis
was designed to determine the impact of the stereo matching method on the reliability
for large pose translations. The second analysis aimed to test the performance of
the stereo matching method under variable illumination conditions. The results proved
that the proposed method has a higher reliability than the existing facial recognition
methods (template matching and eigenfaces).
In the future, the performance of the VP will be enhanced by adding voice identifi‐
cation and recognition functions, adding screen monitoring functionality, targeting more
complicated suspicious behaviors and optimizing the recognition algorithms.
Although the proposed VP still has certain limitations, it performed well under labo‐
ratory conditions. In addition, it has the potential to replace human proctors in both
distance education and traditional classroom settings.


References

1. Accessed Nov 2016
2. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: A smart method for developing
game-based virtual laboratories. In: Proceedings of the ASME International Mechanical
Engineering Conference and Exposition, IMECE 2015, Houston, Texas, 13–19 November 2015
A Virtual Proctor with Biometric Authentication 123

3. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: Real-time 3D reconstruction
for facilitating the development of game-based virtual laboratories. Comput. Educ. J. 7(1),
85–99 (2016)
4. Zhang, Z., Zhang, M., Tumkor, S., Chang, Y., Esche, S.K., Chassapis, C.: Integration of
physical devices into game-based virtual reality. Int. J. Online Eng. 9, 25–38 (2013)
5. Qureshi, F., Terzopoulos, D.: Smart camera networks in virtual reality. In: Proceedings of
First ACM/IEEE International Conference on Distributed Smart Cameras, Vienna, Austria,
25–28 September 2007
6. Accessed Oct 2016
7. Accessed Oct 2016
8. Accessed Oct 2016
9. Accessed Oct 2016
10. Accessed Oct 2016
11. Accessed Oct 2016
12. softwares-uneasy-glare.html. Accessed Sept 2016
13. Rasmussen, K.B., Roeschlin, M., Martinovic, I., Tsudik, G.: Authentication using pulse-
response biometrics. In: Proceedings of Network and Distributed System Security
Symposium 2014, San Diego, California, USA, 23–25 February 2014
14. Bača, M., Grd, P., Fotak, T.: Basic principles and trends in hand geometry and hand shape
biometrics. In: New Trends and Developments in Biometrics. INTECH Open Access
Publisher (2012)
15. redundant. Accessed Oct 2016
16. Proctor, R.W., Lien, M.C., Salvendy, G., Schultz, E.E.: A task analysis of usability in third-
party authentication. Inf. Secur. Bull. 5(3), 49–56 (2000)
17. Accessed Oct 2016
18. Horprasert, T., Harwood, D., Davis, L.S.: A robust background subtraction and shadow
detection. In: Proceedings of 4th Asian Conference on Computer Vision, Taipei, Taiwan, 5–
8 January 2000
19. Accessed Oct 2016
20. Fröba, B., Külbeck, C.: Real-time face detection using edge-orientation matching. In:
Proceedings of International Conference on Audio- and Video-Based Biometric Person
Authentication, Halmstad, Sweden, 6–8 June 2001
21. Jesorsky, O., Kirchberg, K.J., Frischholz, R.W.: Robust face detection using the Hausdorff
distance. In: Proceedings of International Conference on Audio- and Video-Based Biometric
Person Authentication, Halmstad, Sweden, 6–8 June 2001
22. Vasconcelos, N., Saberian, M.J.: Boosting classifier cascades. In: Proceedings of Advances
in Neural Information Processing Systems 23, Vancouver, British Columbia, Canada, 6–9
December 2010
23. Viola, P., Jones, M.: Fast and robust classification using asymmetric adaboost and a detector
cascade. In: Proceedings of Advances in Neural Information Processing System 14,
Vancouver, British Columbia, Canada, 3–8 December 2001
24. Gokturk, S.B., Bouguet, J.Y., Tomasi, C., Girod, B.: Model-based face tracking for view-independent facial expression recognition. In: Proceedings of Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington D.C., USA, 20–21 May 2002
25. Viola, P., Jones, M.: Robust real-time object detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)

26. Wilson, P.I., Fernandez, J.: Facial feature detection using Haar classifiers. J. Comput. Sci.
Coll. 21(4), 127–133 (2006)
27. Accessed Nov 2016
28. Accessed Nov 2016
29. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: A virtual laboratory system
with biometric authentication and remote proctoring based on facial recognition. In:
Proceedings of the 2016 ASEE Annual Conference and Exposition, New Orleans, LA, USA,
26–29 June 2016
30. Accessed Nov 2016
31. Accessed Nov 2016
32. Brunelli, R., Poggio, T.: Face recognition: features versus templates. IEEE Trans. Pattern
Anal. Mach. Intell. 15, 1042–1052 (1993)
33. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3, 71–86 (1991)
34. Jafri, R., Arabnia, H.: A survey of face recognition techniques. J. Inf. Process. Syst. 5(2), 41–68 (2009)
35. Pentland, A., Moghaddam, B., Starner, T.: View-based and modular eigenspaces for face
recognition. In: Proceedings of IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, Seattle, WA, 21–23 June 1994
36. Accessed Nov 2016
37. Menezes, P., Barreto, J.C., Dias, J.: Face tracking based on Haar-like features and eigenfaces.
In: Proceedings of IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Técnico,
Lisboa, Portugal, 5–7 July 2004
38. Accessed Nov 2016
39. Graham, D.B., Allinson, N.M.: Characterising virtual eigensignatures for general purpose face recognition. In: Face Recognition, pp. 446–456. Springer, Heidelberg (1998)
40. Accessed Nov 2016
41. Georghiades, A.S., Belhumeur, P.N., Kriegman, D.J.: From few to many: illumination cone
models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach.
Intell. 23(6), 643–660 (2001)
42. Accessed Nov 2016
43. Zhang, Z., Zhang, M., Chang, Y., Esche, S.K., Chassapis, C.: A virtual laboratory combined
with biometric authentication and 3D reconstruction. In: Proceedings of the ASME
International Mechanical Engineering Conference and Exposition, IMECE 2016, Phoenix,
Arizona, USA, 11–17 November 2016
From a Hands-on Chemistry Lab to a Remote Chemistry
Lab: Challenges and Constraints

San Cristobal Elio, J.P. Herranz, German Carro, Alfonso Contreras,
Eugenio Muñoz Camacho, Felix Garcia-Loro, and Manuel Castro Gil

UNED, DIEEC, Madrid, Spain

Abstract. The spread of remote labs in universities is a current reality. They are powerful e-learning tools that allow students to carry out online experiments on real equipment, and they give universities e-learning tools for methodologies such as blended learning and distance learning. Remote labs have been developed for many science fields, such as electronics, robotics and physics. Nevertheless, it is very difficult to find remote chemistry labs. This paper discusses the difficulties of choosing a chemistry lab that can become a remote chemistry lab and presents a first approach to converting a hands-on chemistry lab into a remote one.

Keywords: Blended and distance learning · E-learning tools · Hands-on and remote labs

1 Introduction

Traditionally, students learned theoretical knowledge in face-to-face classrooms and acquired skills in hands-on laboratories. In the last decades, however, this model has shifted from face-to-face classrooms and hands-on labs to online courses and virtual and remote labs.
Nowadays many universities provide virtual and remote labs where their students can carry out experiments from any place at any time.
• Virtual labs are simulation programs that allow students to carry out online experiments. There are a great number of them on the Internet (Fig. 1), such as:
• Chemistry labs. For instance, Acid-bases solutions from
• Physics labs. For instance, Newton's Cradle from
• Electronics labs. For instance, basic digital labs from
• Remote labs are software programs that allow students to carry out experiments with real equipment at any time and from any place. In contrast to virtual labs, remote labs work with real instruments [1–3]; therefore, the vast majority of them must control access to the lab (only one user at a time). To do this, a set of services is created around the remote labs, such as control of users and a calendar.

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_12

Fig. 1. Virtual labs or simulation web programs

These remote labs also cover a great number of science fields (Fig. 2), such as:
• Robotics labs. For instance, the i-robot lab from The Labshare Institute in Australia. This remote lab was designed to allow students to explore the concepts of teleoperation of robots, accuracy of sensors, localization and mapping [4]. Or the robotic arm from UNED, which allows students to work with a real robotic arm [5].
• Physics remote labs. For instance, the Archimedes remote lab from Deusto University, where secondary school students learn Archimedes' principle [6].
• Electronics remote labs. There are many remote labs in this field, but only two of them are described here:
– The first one is VISIR. This lab is particularly interesting for several reasons: more than one user can access it at the same time, and several universities have implemented it and are working together in projects such as VISIR+ [7] and PILAR [8]. VISIR allows wiring and measuring electronic circuits remotely on a virtual workbench that replicates physical circuit breadboards.
– The second one is the Microelectronics Device Characterization remote lab, which measures the DC current-voltage characteristics of microelectronic devices such as diodes and transistors [9].
This section showed several examples of virtual and remote labs in different science fields. The next sections briefly describe the architecture of remote labs and the difficulties in creating remote chemistry labs due to their nature.

Fig. 2. Remote labs (real equipment)

2 Architecture of Remote Labs and Chemistry Experiments

The vast majority of remote labs are based on the same architecture, which is composed of:
• A web server, which contains services such as control of users, a calendar and user interfaces. It also communicates with the user and the lab server.
• A lab server, which contains the program that acts on the real equipment and sends the results to the web server.
• Real equipment, which depends on the remote lab. The section above showed real instrumentation such as a robotic arm, electronic circuits, motors and pipettes.
• A web cam, which allows students to see the results of acting on the real equipment. Depending on the web cam, students can zoom in or out on the real instrumentation.
This is the hardware architecture, but there are also global phases that a remote lab should fulfill (Fig. 3). These phases are:
• Initial state. Students must find the instrumentation in an initial state. For instance, in the Archimedes lab of Fig. 2 the balls must be out of the water, or in the VISIR lab the inputs of the circuits must be in their initial state.
• Experimentation. This phase can be divided into others, such as acting on the lab, storing the student's actions on the equipment, storing the results of these actions, etc.
• Results. The lab should show a report of the results of the experiments.
• Visualization. Everything that happens during the experimentation process must be watchable by students through web cams, user interfaces, etc.
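The phases above can be sketched as a minimal session state machine. This is an illustration only: the class and method names are assumptions, not part of any cited lab implementation.

```python
class RemoteLabSession:
    """Tracks a remote-lab run through the phases described above:
    initial state -> experimentation -> results."""

    def __init__(self):
        self.phase = "initial"
        self.actions = []  # student actions recorded during experimentation

    def reset_equipment(self):
        """Initial state: the instrumentation must return to a known state."""
        self.phase = "initial"
        self.actions.clear()

    def act(self, action: str):
        """Experimentation: store each action so results can be reported."""
        if self.phase == "done":
            raise RuntimeError("session already finished")
        self.phase = "experimentation"
        self.actions.append(action)

    def report(self) -> dict:
        """Results: summarize what the student did during the session."""
        self.phase = "done"
        return {"actions": list(self.actions), "count": len(self.actions)}
```

A session would then record each student action during experimentation and produce a report at the end, after which the equipment must be reset before the next user.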

Fig. 3. Simplification of phases of remote lab

In the case of chemistry experimentation, several constraints arise in some of these phases. The following subsection describes them briefly.
them phases. The following subsection will describe them briefly.

2.1 Constraints on Creating Remote Chemistry Labs

Chemistry labs work with liquids, solids and gases. These resources are combined to create new ones. Such experiments impose a set of requirements that are really difficult to meet in the phases of a remote lab.
• Initial state:
– Many chemistry laboratories work with fluids. These fluids are mixed and sometimes evaporated; therefore, when students finish their experiments, the fluids must be replaced and the instrumentation must be cleaned.
– Many chemistry labs work with solids and liquids. These can vary in weight or volume, and they can also mix, yielding another chemical compound. Therefore, it is really difficult to restore the initial state without human help.
• Experimentation:
– Chemistry labs need to handle and weigh solid material. For instance, the reaction of zinc with iodine requires about 0.5 g of zinc powder, about 20 cm3 of sulfuric acid, etc. Implementing the mechanics to take these measurements automatically is really complicated.
• Visualization:
– Chemistry labs that work with gases and transparent liquids are difficult to watch with web cams.
– In some chemistry labs, odors are also important for students. Remote labs are not able to provide this sense, although it is possible to use gas sensors.

All these reasons show the difficulties of designing and developing a remote chemistry laboratory.

3 Selecting a Chemical Experiment

With all these constraints in mind, the Department of Electrical and Computer Engineering and the Department of Chemistry Applied to Engineering at UNED decided to start converting the hands-on hydrogen-solar equipment into a remote lab.
This equipment allows students to carry out hydrogen-solar energy cycle experiments. To do this, the equipment provides a set of elements to convert water into hydrogen and oxygen, to store these gases in graduated cylinders and to consume them in a fuel cell, producing electrical energy and water. This energy can be used to switch on a bulb or start a motor.

Fig. 4. Hydrogen-solar equipment.

As mentioned, the equipment is a set of hardware elements that allow performing this chemical process (Fig. 4). Among them:
• Light source. The sun is replaced by a lamp, which simulates the renewable energy source. Students and teachers can move the lamp closer to or farther from the solar panel, simulating the variation of light radiation on the panel.
• Solar panel. It converts the luminous energy supplied by the lamp into electrical energy. Students and teachers can vary the solar panel's orientation and simulate different inclinations.
• Electrolyzer. It decomposes water into hydrogen and oxygen using the electrical energy supplied by the solar panel.
• Fuel cell. It consists of two PEM fuel cells that can be connected in series or in parallel. They are used to generate electricity from the hydrogen and oxygen produced by the electrolyzer.
• Load module. It consists of a motor, a lamp and a set of resistors that allow using the electric energy generated by the fuel cell.
• Measuring devices. They comprise a voltmeter and an ammeter to visualize the different voltages and currents of the electric energy produced and consumed in each of the processes.
Although this lab must be filled with water for the initial state, the rest of the experimentation can be automated for blended and distance learning.

4 Hands-on Hydrogen-Solar Equipment to Remote Lab

This hands-on lab can be converted into a remote lab. In this first step, the Department of Electrical and Computer Engineering of UNED has focused on the load module, which can be replaced by an IoT device such as an Arduino and/or a Raspberry Pi. These devices can manage a dimmer that controls the intensity of a lamp (Fig. 5).

Fig. 5. Remote control of load module

The Arduino and Raspberry Pi allow remote lab programmers to create a web page where students can change the intensity of the lamp.
Along with the modification of the load module, a web cam connected directly to Ethernet will allow students to watch the real instrumentation and the chemical process.
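The dimmer step above can be sketched as a mapping from the intensity a student selects on the web page to the duty cycle the IoT device would apply. This is a hypothetical sketch: the 8-bit PWM range and the function name are assumptions, not details from the UNED implementation.

```python
def intensity_to_pwm(intensity_percent: float, pwm_max: int = 255) -> int:
    """Map a requested lamp intensity (0-100 %) to a PWM duty value.

    The web page sends the percentage chosen by the student; the IoT device
    (Arduino or Raspberry Pi) would drive the dimmer with the returned
    duty cycle, here assumed to be 8-bit (0-255).
    """
    if not 0 <= intensity_percent <= 100:
        raise ValueError("intensity must be between 0 and 100")
    return round(intensity_percent * pwm_max / 100)
```

For example, a request of 0% maps to duty 0 (lamp off) and 100% to duty 255 (full brightness); invalid requests from the web page are rejected before reaching the hardware.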

5 Conclusion

This paper has shown the difficulties of creating remote chemistry labs. To do so, the paper described:
• A state of the art of virtual and remote labs and some of the science fields where they are applied.
• The global architecture of remote labs and the phases of a remote lab.
• The constraints that have to be considered by anyone who wants to develop a remote chemistry lab.
• The selection of a chemistry lab that can minimize these constraints and become a remote lab.
• And finally, the initial steps taken by the Department of Electrical and Computer Engineering and the Department of Chemistry Applied to Engineering at UNED to create a remote chemistry lab.
Although a long road lies ahead, the first steps have been taken.

Acknowledgement. The authors acknowledge the support of the eMadrid project (Investigación
y desarrollo de tecnologías educativas en la Comunidad de Madrid) - S2013/ICE-2715, VISIR+
project (Educational Modules for Electric and Electronic Circuits Theory and Practice following
an Enquiry-based Teaching and Learning Methodology supported by VISIR) Erasmus+ Capacity
Building in Higher Education 2015 nº 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-JP and PILAR
project (Platform Integration of Laboratories based on the Architecture of visiR), Erasmus+
Strategic Partnership nº 2016-1-ES01-KA203-025327.


References

1. García-Zubia, J., Orduña, P., López-de-Ipiña, D., Alves, G.R.: Addressing software impact in
the design of remote laboratories. IEEE Trans. Industr. Electron. 56(12), 4757–4767 (2009)
2. Gomes, L., Bogosyan, S.: Current trends in remote laboratories. IEEE Trans. Industr. Electron.
56(12), 4744–4756 (2009)
3. Tawfik, M., Sancristobal, E., Martin, S., Diaz, G., Peire, J., Castro, M.: Expanding the
boundaries of the classroom: implementation of remote laboratories for industrial electronics
disciplines. Ind. Electron. Mag. 7(1), 41–49 (2013). IEEE
4. Labshare Labs.
Accessed 9 Nov 2016
5. Carro, G., Plaza, P., Sancristobal, E., Castro, M.: A wireless robotic educational platform
approach. In: 13th International Conference on Remote Engineering and Virtual
Instrumentation (REV) (2016)
6. Garcia-Zubia, J., et al.: Archimedes remote lab for secondary schools. In: 3rd Experiment@ International Conference (2015)
7. VISIR+ Project: Accessed 16 Nov 2016
8. PILAR Project.
Accessed 16 Nov 2016
9. Microelectronics Device Characterization Lab (MIT).
Accessed 16 Nov 2016
Advanced Intrusion Prevention for Geographically
Dispersed Higher Education Cloud Networks

C. DeCusatis¹, P. Liengtiraphan¹, and A. Sager²

¹ Marist College, Poughkeepsie, NY, USA
² BlackRidge Technologies, Reno, NV, USA

Abstract. We present the design and implementation of a novel cybersecurity

architecture for a Linux community public cloud supporting education and
research. The approach combines first packet authentication and transport layer
access control gateways to block fingerprinting of key network resources. Experimental results are presented for two interconnected data centers in New York.
We show that this approach can block denial of service attacks and network
scanners, and provide geolocation attribution based on a syslog classifier.

Keywords: Authentication · Identity management · Attribution

1 Introduction

Higher education institutions in the U.S. are expected to spend about $10.8 billion on
information technology (IT) in 2016 (up from $6.6 billion last year), primarily driven
by investments in enterprise networks [1]. Globally, the higher education market is
expected to spend over $38.2 billion on IT in 2016 alone [2]. According to EduCause,
a nonprofit organization of IT leaders from higher education [3], the leading issue driving
upgrades for these organizations is information security. Security concerns among
higher education institutions appear to be well justified; the environment in which higher
education institutions operate, and the data which they store, has made them prime
targets for cyberattacks. Recent survey data indicates that 35% of all security breaches
take place in higher education [3]. Among those institutions suffering a breach, over
46% verified advanced persistent threat (APT) activity taking place in their environment
[4]. Higher education institutions collect and retain valuable data such as student,
alumni, and faculty personally identifiable information (PII) including medical records;
research data which may be subject to export control regulations; financial and
accounting data including student tuition, loans, and institution accounting records; and
critical infrastructure or intellectual property information including analytic systems
used for grading and research. This type of information is subject to various local,
national, and international security and privacy compliance regulations, including the
NIST 800 series of security guidelines [5]. In some ways, higher education can be
considered a large enterprise; despite this, higher education is not currently classified as
a “mission critical” application by the U.S. federal government [5]. In fact, many large
enterprises employ security policies based on the principle “exclude everything, allow
specific”, while the nature of higher education is just the opposite, and often implements
policies such as “allow everything, exclude specific” in an attempt to promote shared
academic research and education. This can make it particularly challenging to develop
effective security policies for higher education institutions.
A recent example involves the Linux One Community Cloud, a collaboration
between industry and academia to provide free access to an open source Linux devel‐
opment environment [6]. In August 2015, IBM announced a series of enterprise-class
servers which run only the Linux operating system. The Linux One platforms currently
support SUSE, Red Hat, and Ubuntu Linux distributions, along with a variety of
supporting tools such as Apache Spark, MongoDB, and Chef. In order to promote
development and research on this platform, the Linux One Community Cloud makes it
possible for anyone to request a free instance of the Linux One servers and toolsets. This
environment is hosted at the New York State Center for Cloud Computing and Analytics
(CCAC) at Marist College (a private, 6,000 student institution in upstate New York),
and is managed from an IBM development location in Poughkeepsie, NY. However, this open innovation initiative also means that the cloud hosting Linux One is subject to continuous cyberattacks from bad actors who attempt to exploit the open access privileges in this environment. There is a need for an intrusion prevention and authentication
solution which limits access to the cloud development code to only authorized users,
while at the same time preventing malicious reconnaissance attempts to fingerprint the
cloud infrastructure or launch denial of service (DoS) attacks.
In this paper, we present results of a cybersecurity testbed deployed in production
for the Linux One community cloud. Our research addresses the unique cybersecurity
requirements of this environment, including improved authentication as well as identity
and access management within a cloud data center. The key points of novelty for this
work include the use of network-based identities in a hybrid public/private cloud;
specifically, we demonstrate a combination of BlackRidge Technology first packet
authentication and transport layer access control (TAC) technologies. We experimentally demonstrate user identity management in the Linux One community cloud,
including the novel ability to prevent unauthorized fingerprinting of key network
resources. Further, we have developed original software to parse the logs from these
appliances and related honeypots, performing geolocation and botnet classification. This
work is intended to address the leading concerns expressed in recent surveys of chief
information security officers in academia, and enable replication of our security solution
at other colleges and universities. We deploy BlackRidge Technology TAC virtual
appliances throughout the network which manage user identity based on the first packet
used in transport connection requests. This solution including software developed
specifically for this project which performs geolocation and attribution for all unau‐
thorized access attempts, and enables collection of analytic data on attempted attacks
which can be processed into actionable threat intelligence. Experimental results are
presented, demonstrating that our approach detected and blocked 1,161 unauthorized
access attempts in the first twelve hours of production deployment. Over a period of ten
days, our approach successfully blocked over 18,000 attacks, which we have attributed

to locations in China, Korea, Brazil, Vietnam, and elsewhere. We also demonstrate the
ability to identify insider threats by running our authentication technology inside the
college firewall (an essential enabling feature for a NIST zero trust network [7]). We
present data demonstrating that this approach successfully prevents IP Spoofing and
Denial of Service attacks, and identifies network scanners such as Nessus if they are
operating on the cloud network. This functionality was not possible using conventional
network security approaches.
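The log-classification step described above can be sketched as follows. The syslog line format and field names here are invented for illustration; the authors' actual parser and geolocation database are not described in enough detail to reproduce.

```python
import re
from collections import Counter

# Hypothetical log format: "<timestamp> TAC-B DROP src=<ip> country=<CC>"
LOG_RE = re.compile(r"DROP src=(?P<ip>[\d.]+) country=(?P<cc>[A-Z]{2})")

def count_blocked_by_country(lines):
    """Tally blocked access attempts per country code from gateway logs.

    Lines that do not match the DROP pattern (e.g. accepted sessions)
    are ignored, so the counter reflects only blocked attempts.
    """
    counts = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            counts[m.group("cc")] += 1
    return counts
```

Aggregating these counts over time would produce the kind of per-country attack attribution reported in the text (e.g. attempts traced to China, Korea, Brazil, and Vietnam).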
The paper is organized as follows. Section 1 provides an introduction and motivation
for this work, and an overview of our novel contributions. Section 2 describes TAC and
first packet authentication technologies in more detail. Section 3 provides experimental
results obtained from the Linux One higher education cloud deployment over a 30 day
period. Section 4 includes a summary and conclusions.

2 BlackRidge Technology Transport Access Control (TAC)


Our approach is based on a novel combination of two technologies, namely transport

access control and first packet authentication. In our proposed explicit trust model, each
network session is independently authenticated at the transport layer before any access
to the network or protected servers is granted. Unauthorized traffic is simply rejected
from the network, and there is no feedback to a potential attacker attempting to fingerprint the system. Explicit trust is established by generating a network identity token
during session setup. The network token is a 32 bit, cryptographically secure, single use
object which expires after four seconds. Tokens are associated with identities from
existing Identity Access Management (IAM) systems and credentials, such as Microsoft
Active Directory or the IAM system used by Amazon Web Services [8]. Explicit trust
is established by authenticating these identity tokens on the first packet of a TCP
connection request, before the conventional 3-way TCP handshake is completed and
before sessions with cloud or network resources are established.
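The token lifecycle described above (32-bit, cryptographically random, single use, four-second expiry) can be sketched as follows. This is illustrative only: BlackRidge's actual token generation and first-packet insertion are proprietary, and the class and method names here are assumptions.

```python
import os
import time

class TokenCache:
    """Single-use, expiring identity tokens, per the scheme in the text:
    32-bit cryptographically random values that expire after four seconds."""

    TTL_SECONDS = 4.0

    def __init__(self):
        self._issued = {}  # token -> (identity, issue time)

    def issue(self, identity: str) -> bytes:
        """Generate a fresh 32-bit token bound to an identity."""
        token = os.urandom(4)  # 4 bytes = 32 bits of cryptographic randomness
        self._issued[token] = (identity, time.monotonic())
        return token

    def authenticate(self, token: bytes):
        """Resolve a token to an identity; None if unknown, expired, or reused."""
        entry = self._issued.pop(token, None)  # pop enforces single use
        if entry is None:
            return None
        identity, issued_at = entry
        if time.monotonic() - issued_at > self.TTL_SECONDS:
            return None
        return identity
```

A token presented a second time, or after the four-second window, resolves to no identity, so the associated connection request is simply dropped.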
Tokens are generated for each unique entity requesting access to a network resource;
these entities are generally a user or device. An in-line virtual security gateway is then
implemented between the equipment being protected and the rest of the network. The
approach is illustrated in Figs. 1 and 2, which show a conventional security architecture
before addition of the TAC gateways and our new approach following addition of the
TAC gateways. In Fig. 1, a conventional security architecture would simply place a
commercially available intrusion prevention system (IPS) such as the Juniper 3600
platform between the untrusted Internet and resources connected to an education
network (for example, the three Linux servers). However, conventional IPS systems
cannot block network reconnaissance and scanning attempts, or perform first packet
authentication when a user requests a secure session. To improve on the conventional
approach, Fig. 2 shows the placement of two BlackRidge Technologies TAC gateways
within a higher education cloud network architecture. A TAC gateway appliance is
connected in the path between this user and the remaining network, and a second gateway
is positioned before the protected resources. The first gateway inserts an identity token

in the first packet of the TCP connection request. The second gateway enforces the
network access policy by extracting the token, resolving the token to an identity, and
determining the identity's authorizations. Trusted users (attempting to access the education network) have identity tokens inserted by Gateway A; untrusted users receive no
such authentication tokens. The TAC gateways are configured to protect sensitive
resources, such as the cluster of Linux servers. When the second gateway receives a
connection request, it extracts and authenticates the inserted identity token and then
applies a security policy (such as forward, redirect, or discard) to the connection request
based on the received identity. This gateway acts as a policy enforcement point transparent to the rest of the system architecture and backwards compatible with existing
network technologies. Trusted users will be authenticated by Gateway B, allowing them
full access to the Linux server cluster. Untrusted users are not recognized by Gateway
B, and their first packet requesting a new session is dropped, along with all responses
at or below the transport layer. In this manner, the untrusted user is unable to determine
that the Linux server cluster exists, and cannot begin to mount an attack. The attempted
access is logged in an external syslog server, which allocates enough memory to avoid
wrapping and over-writing log entries. Existing security information and event management (SIEM) tools can still be used to analyze the logs or generate alerts of suspicious
activity. We note that continuous logging of all access attempts is consistent with the
approach of a zero trust network (i.e. not allowing any access attempts to go unmonitored). Conventional denial of service (DoS) and port scanner attacks from an untrusted
user are similarly blocked, effectively cloaking the presence of the Linux server cluster
in this example. Note that the conventional IPS platform is no longer required, but may
remain in place since it is transparent to the TAC gateway authentication process. We
may also add features such as honeypots which accept redirect requests from a failed
access attempt at the TAC gateway (for example, SSH honeypots may be configured in
this manner). This enables the collection of attack data which may subsequently be used
to craft actionable threat intelligence, such as attack signatures. Both the identity insertion gateway and identity authentication gateway appliances can be implemented as
virtual network functions (VNFs) hosted on a virtual server, router, or similar platform.
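Gateway B's enforcement step can be sketched as a policy lookup. The forward/redirect/discard actions come from the text above; the data structures and function name are assumptions made for illustration.

```python
def enforce(token_identity, policy_table: dict) -> str:
    """Apply the per-identity security policy to a connection request.

    token_identity is the identity resolved from the first-packet token,
    or None when authentication failed. Unknown or untrusted identities
    are silently discarded, so the protected resource stays invisible
    to the requester.
    """
    if token_identity is None:
        return "discard"
    return policy_table.get(token_identity, "discard")
```

For example, a trusted user's request is forwarded to the Linux server cluster, while a failed authentication could be redirected to an SSH honeypot or simply discarded with no response at or below the transport layer.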
This approach has several advantages, including separation of security policy from
the network design (i.e. network addresses and topologies) [7]. This approach works for
any network topology or addressing scheme, including IPv4, IPv6, and networks which
use the Network Address Translation (NAT) protocol and is compatible with dynamic
addressing often used with mobile devices. This approach extracts, authenticates, and
applies policy to the connection requests, not only protecting against unauthorized external reconnaissance of the network devices but also stopping any malware within the protected devices from calling home (exfiltration).

Fig. 1. Conventional network IPS

Fig. 2. Deployment of TAC gateways in the education network

Security policies can be easily
applied at the earliest possible time to conceal network-attached devices from unauthorized awareness. By preventing unauthorized scanning and reconnaissance, TAC
disrupts the attacker’s kill chain, blocks both known and unknown attack vectors, and
stops lateral attack spreading within a data center. This approach is low latency and high
bandwidth since packet content is not inspected. Since the network tokens are embedded
in the TCP session request, they do not consume otherwise useful data bandwidth. The
combination of transport access control and a segmented, multi-tenant network imple‐
ments a layered defense against cybersecurity threats, and contributes to non-repudiation
of archival data. These techniques are also well suited to protecting public and hybrid
cloud resources, or valuable, high performance cloud resources such as enterprise-class
mainframe computers and higher education data centers. Further, this approach can be
applied to software defined networks (SDN), protecting the centralized SDN network
controller from unauthorized access, and enabling only authorized SDN controllers to
manage and configure the underlying network. In addition, our implementation of TAC uses
an innovative identity token cache to provide high scalability and low, deterministic
latency. The token cache is tolerant of packet loss and enables TAC deployments in low
bandwidth and high packet loss environments.
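The first packet authentication idea described above can be illustrated with a short sketch. The token derivation (an HMAC over a provisioned identity) and its placement are illustrative assumptions for this example only; BlackRidge's actual wire format, token cache, and key management are not described in this paper.

```python
# Conceptual sketch of first-packet (TCP SYN) token authentication.
# The token derivation and 32-bit size are illustrative assumptions,
# not the vendor's actual implementation.
import hashlib
import hmac

SHARED_KEY = b"per-identity-secret"  # provisioned out of band

def make_token(identity: str, key: bytes = SHARED_KEY) -> int:
    """Derive a 32-bit token that the sender embeds in its first packet."""
    digest = hmac.new(key, identity.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def gateway_filter(syn_token: int, identity: str) -> str:
    """Authenticate the very first packet; unauthenticated SYNs are
    silently dropped, so protected hosts never reveal their presence."""
    expected = make_token(identity).to_bytes(4, "big")
    if hmac.compare_digest(syn_token.to_bytes(4, "big"), expected):
        return "forward"   # apply policy, pass the SYN to the protected host
    return "drop"          # no RST, no ICMP: the host stays cloaked

print(gateway_filter(make_token("alice"), "alice"))      # forward
print(gateway_filter(make_token("alice") ^ 1, "alice"))  # drop
```

Silently dropping (rather than rejecting) the unauthenticated SYN is what makes the scan results in Sect. 3 come back empty: the scanner receives no response at all.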

3 Experimental Results

The Linux One geographically distributed community cloud (Phase One production
environment) created for these experiments is shown in Fig. 3. This cloud interconnects
two physical data centers: the Linux One cloud data center hosted at Marist
College near Poughkeepsie, NY, and the IBM data center hosted in IBM's Poughkeepsie, NY
facility. The two data centers are located approximately 8.5 km apart in upstate New York.
Advanced Intrusion Prevention 137

Fig. 3. Linux one community cloud architecture

Users connect to the Linux One Community Cloud via a secure Internet portal to
an Apache web server at the Marist College data center. Content management
servers in this data center host instances of OpenStack (Liberty and Juno releases),
a MariaDB database server, the IBM Java Development Kit (JDK), and the IBM BlueMix
DevOps Build Engine. These applications are hosted on virtual machines (VMs)
partitioned in an IBM z13 enterprise server. It is necessary to securely
authenticate the long-distance connection between the Marist College data center and
the IBM Poughkeepsie data center (which houses a processing server and content fulfillment
engine). To authenticate traffic between these two data centers, BlackRidge
appliances implementing TAC and first packet authentication were deployed
between these locations, as shown in the figure. A physical appliance was installed
at the edge of the IBM Poughkeepsie data center network, and a virtual appliance
hosted in a z/VM virtual partition of an IBM z13 enterprise server was installed at the
corresponding edge of the Marist College data center network.
To determine the effectiveness of the TAC appliances at cloaking attached systems,
we performed nmap scans of both the Marist College and IBM Poughkeepsie data center
networks before and after implementing the TAC appliances. Representative scans from the
Marist College and IBM Poughkeepsie data centers before implementing TAC are shown
in Figs. 4 and 5, respectively. From these scans, an attacker can clearly see the open port
22 on the Marist network, running OpenSSH 6.6.1, and a traceroute showing network hops
within the IBM network, among other reconnaissance data that would be useful in planning
an attack on these systems.

A representative scan after implementing TAC on this network is shown in Fig. 6
(results are equivalent for both the IBM and Marist network segments). Note that we
can no longer detect any open ports, including the exposure previously reported on port
22. All attempts to scan these hosts were successfully blocked by first packet authentication,
and all responses from the host due to these scans were successfully blocked by
TAC. The scan is now unable to determine the host operating systems, port or IP
addresses, or services running on the host. These results show that we can effectively
block fingerprinting of all devices located behind the TAC gateway.

Fig. 4. Marist network scan prior to implementing TAC

In order to better understand the attack vectors being used against this higher education
cloud, we created a script in Python 2.7 to parse the syslog from a TAC appliance.
This script uses Python regular expressions (the re module) to retrieve data from the
syslog, including source and destination IP addresses and port numbers. This data is subsequently
processed through a geolocation module which we created for a related project
[7] to generate a report of the ISP, ASN, hostname, latitude, longitude, country, state/
province, and city of each attacker in JSON format. The TAC appliance was
programmed to automatically blacklist any IP address which attempted more than 100
accesses to the network within 30 s. The log parser which we have created also classifies
blacklisted IP addresses as potential DoS attacks or port scanners. We also collect data
on the number of attacks generated from unique IP addresses. All of this data is used to
create a profile of the attacker, which can be correlated with known botnets or hacker groups.

Fig. 5. Marist/IBM network scan after implementing TAC gateways

For example, during the first 12 h of monitoring the Linux One cloud after installing
the TAC appliance, there were numerous unauthorized attempts to access the system.
At this point the TAC system was placed into enforce mode, and successfully blocked
all subsequent unauthorized access attempts. The TAC appliance remained in enforce
mode for the next 10 days; a list of the top attacking IP addresses and the top 10 attacking
countries is shown in Figs. 6 and 7, respectively.
In one case, analysis of the TAC appliance logs revealed a DoS attack against port
23 (originating from Shandong province in China). We configured the TAC appliance
to block unauthorized access attempts after 10 s of continuous attempts from a
given site, and to keep these sites blacklisted for one hour. Using this technique, we
successfully blacklisted the DoS attacker while continuing to collect log information on
the attack. In this manner, we have demonstrated that the TAC appliance provides
improved protection by identifying and blocking attacks which were previously
undetected on the education network.
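The blocking policy described here (trigger after 10 s of continuous attempts, blacklist for one hour) can be sketched as a small state machine. For brevity the sketch does not reset its timer when attempts pause, which a production reading of "continuous attempts" would require.

```python
# Sketch of the time-based blacklisting policy: 10 s of sustained
# attempts triggers a one-hour block, during which the appliance can
# keep logging the attack upstream.
class Blacklist:
    def __init__(self, trigger=10, ttl=3600):
        self.trigger, self.ttl = trigger, ttl
        self.first_seen = {}   # src -> time of first recent attempt
        self.blocked = {}      # src -> blacklist expiry time

    def attempt(self, src, now):
        """Record one access attempt at time `now` (seconds)."""
        if src in self.blocked:
            if now < self.blocked[src]:
                return "blocked"       # still within the one-hour window
            del self.blocked[src]      # blacklist expired
        start = self.first_seen.setdefault(src, now)
        if now - start >= self.trigger:
            self.blocked[src] = now + self.ttl
            del self.first_seen[src]
            return "blocked"
        return "allowed"
```

With the defaults, a source attempting continuously from t = 0 is blocked at t = 10 and stays blocked until t = 3610.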

Fig. 6. Number of attacks attempted from the top attacking source IP addresses

Fig. 7. Number of attacks attempted by each of the top attacking nations

Further, we assessed the performance logs of the IBM z Systems enterprise server
in the Marist College data center before and after these attacks. Prior to implementing
the TAC appliance, the server attempted to block unauthorized attacks using network
appliances (such as intrusion prevention systems). This approach was replaced with a
single TAC gateway, protecting all VMs on the server at the point of entry. We further
demonstrated that the TAC appliance was able to block IP spoofing on the network:
comparing nmap scans of the network before and after implementing the TAC appliance
shows that attempts to perform IP spoofing are effectively blocked. A scan of the
network using the Spoofer tool (part of the BCP-38 recommendations supported
by the National Science Foundation [7]) confirmed that both IPv4 and IPv6 packets
attempting to spoof the network were blocked (including private and routable addresses).
In a related test of egress filtering depth, the BCP-38 tracefilter test found the network
unable to spoof valid, non-adjacent source addresses through even the first IP hop.
Additional statistical data on attacks against this system was obtained using LongTail,
an open source botnet classifier which we developed for a related project at Marist
College [7]. This classifier was used to identify SSH brute force botnet attacks against
the Linux One educational network, and to evaluate the effectiveness of blocking these
attacks using a conventional intrusion prevention system and the TAC appliance. For
this test, we first monitored the total number of attacks against an SSH honeypot
deployed at the Marist College network ingress from the IBM Poughkeepsie site;
statistical analysis of these attacks is shown in Table 1. We then evaluated a commercially
available intrusion prevention system, the Juniper SRX 3600, under the same conditions;
results are shown in Table 2. We can see that the IPS helped reduce the number of attacks,
but did not eliminate them completely. Finally, we deployed the TAC appliance under
the same conditions; results are shown in Table 3. In this case, the combination of first
packet authentication and transport access blocking successfully blocked all
brute force SSH attacks against the network, demonstrating a significant improvement
over the commercial IPS alone.

Table 1. Total number of attacks against the Marist education network

Time frame | Number of days | Total SSH attempts | Average per day | Std. deviation | Median | Max   | Min
Past day   | 1              | 4394               | N/A             | N/A            | N/A    | N/A   | N/A
This month | 18             | 183649             | 10202.72        | 5447.31        | 7022.5 | 20352 | 5294
Last month | 30             | 165593             | 5519.77         | 7196.19        | 1194   | 24666 | 0

Table 2. Attacks against the Marist education network mitigated by conventional IPS
Time frame | Number of days | Total SSH attempts | Average per day | Std. deviation | Median | Max | Min
Past day   | 1              | 30                 | N/A             | N/A            | N/A    | N/A | N/A
This month | 18             | 897                | 49.83           | 39.75          | 35.5   | 124 | 0
Last month | 30             | 369                | 12.30           | 12.22          | 10     | 43  | 0

Table 3. Attacks against the Marist education network mitigated by TAC gateways
Time frame | Number of days | Total SSH attempts | Average per day | Std. deviation | Median | Max | Min
Past day   | 1              | 0                  | N/A             | N/A            | N/A    | N/A | N/A
This month | 18             | 0                  | 0               | 0              | 0      | 0   | 0
Last month | 30             | 0                  | 0               | 0              | 0      | 0   | 0
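The per-day summary statistics reported in Tables 1-3 can be reproduced from the raw daily attempt counts, which are not published in the paper; the list below is therefore illustrative. Whether the reported deviation is the sample or population form is not stated; this sketch uses the sample standard deviation.

```python
# Reproduce the per-day statistics of Tables 1-3 from daily SSH attempt
# counts. The example input is illustrative, not the paper's raw data.
import statistics

def summarize(daily_counts):
    return {
        "days": len(daily_counts),
        "total": sum(daily_counts),
        "average": round(statistics.mean(daily_counts), 2),
        "std_dev": round(statistics.stdev(daily_counts), 2),  # sample form
        "median": statistics.median(daily_counts),
        "max": max(daily_counts),
        "min": min(daily_counts),
    }

print(summarize([30, 124, 0, 35, 36]))  # hypothetical daily counts
```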

We have also demonstrated that a TAC gateway placed just inside the Marist College
firewall is useful in nonrepudiation of insider threats. When a bad actor inside the Marist
firewall is detected, efforts to trace the source of the attack traditionally stop at the Marist
NAT gateway. It can be a difficult, time-consuming process to trace the IP address which
originated such an attack. However, a TAC gateway placed behind the Marist firewall
(on the Marist side of the NAT) can be used to authenticate the attacker’s source IP
address much more quickly and efficiently. This new functionality should be helpful not
only in discouraging insider threats, but also in helping the college comply with requests
and subpoenas from law enforcement agencies investigating such attacks.

4 Conclusions

Recognizing the importance of cybersecurity for higher education, we have developed
a novel approach to intrusion prevention and authentication for multi-site, multi-tenant
educational cloud computing environments. In particular, we have designed, tested, and
implemented this approach for a Linux community public cloud supporting education
and research, spanning two locations in New York. The approach combines BlackRidge
Technology first packet authentication and transport layer access control gateways to
block fingerprinting of key network resources. We have shown experimentally that this
approach can block denial of service attacks and network scanners, and provide
geolocation attribution based on a syslog classifier. Further, this design offers lower server
utilization compared with conventional alternatives. We have also demonstrated that a
TAC gateway placed just inside the higher education institution's network firewall is
useful in nonrepudiation of insider threats.

Acknowledgments. The authors gratefully acknowledge support of the National Science
Foundation grant Cloud Computing – Data, Networking, Innovation (CC-DNI), area 4, 15-535,
also known as “SecureCloud”.


References

1. McCarthy, S.: Pivot Table: U.S. Education IT Spending Guide, version 1, 2013–2018. IDC publication GI255747, April 2015
2. Lowendahl, J., Thayer, T., Morgan, G.: Top ten business trends impacting higher education. Gartner Group white paper, January 2016
3. Grama, J.: Data breaches in higher education. EDUCAUSE Center for Analysis and Research, May 2014
4. FireEye white paper: Cyber threats to the education industry, March 2016
5. Stoneburner, G., Goguen, A., Feringa, A.: Risk management guide for IT systems. NIST special publication 800-30, September 2012
6. Guilen, A., Rutten, P.: Driving Digital Transformation through Infrastructure Built for Open Source: How IBM LinuxONE Addresses Agile Infrastructure Needs of Next Generation Applications. IDC white paper, December 2016. Last accessed 22 Oct 2016
7. DeCusatis, C., Liengtiraphan, P., Sager, A., Pinelli, M.: Implementing zero trust cloud networks with transport access control and first packet authentication. In: Proceedings of IEEE International Conference on Smart Cloud, New York, NY, 18–21 November 2016
8. Amazon Web Services Identity and Access Management, April 2016. Last accessed 20 May 2016
9. BlackRidge white paper: Dynamic network segmentation, August 2012
Remote Laboratory for Learning Basics
of Pneumatic Control

Brajan Bajči, Jovan Šulc, Vule Reljić, Dragan Šešlija,
Slobodan Dudić, and Ivana Milenković

Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia

Abstract. In this paper, a remote laboratory for learning the basic principles of
pneumatic control and realizing pneumatic control schemes is described. The goal is
to develop a remote system for our laboratory through which remote participants
(students, engineers, etc.) can learn the basic principles of pneumatic
control. The first stage of development is presented: a unique, complex pneumatic
scheme with which several smaller, simpler tasks can be realized, together with a
user interface for the remote laboratory.

Keywords: Distance learning of pneumatics · Remote pneumatic control


1 Introduction

Following constant technological progress and the increasing electronic and
information literacy of the new generation of students, a growing number of faculties and
universities around the world are introducing distance learning [1]. Distance learning
greatly increases the quality of teaching activities [2], because the students have
the opportunity to organize their own timetable and activities. The paper [3]
describes one example of a system that enables distance learning in the field of electrical
engineering; in that system, different smaller electronic circuits are connected by means of
one complex circuit. The aim of this paper is to develop a remote laboratory
that will enable remote participants to learn the basic principles of pneumatic control.

Pneumatic systems often find application in various branches of industry
due to the many advantages of compressed air [4]. For this reason, the drawing of
pneumatic schemes and pneumatic control are studied in secondary schools and at
universities as well. The remote participants (clients) can be students or engineers from
industry. The great advantage of a remote laboratory for students is that they
can complete a practical exercise even when they are absent from regular classes. The great
advantage for engineers, who are the driving force of modern industry, is that they can
improve their skills throughout their lives (Life Long Learning, LLL) without being
absent from work. In addition, descriptions of the basic principles of pneumatic control,
as well as of the individual components, will be available to all clients.

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_14
Remote Laboratory for Learning Basics of Pneumatic Control 145

At the Faculty of Technical Sciences in Novi Sad, the basics of pneumatic control are
studied in the Mechatronics and Industrial Engineering study programs. In addition to
its core activities, our faculty has been the authorized didactic center of the German company
FESTO for the Western Balkans since 1984. During this period, the Faculty of Technical
Sciences has organized a number of licensed seminars in the field of pneumatic and electro-pneumatic
control, as well as programmable logic controller (PLC) programming, for
participants from industry. In the last few years, a slight decrease in the number of
participants in these seminars has been observed. Analysis of the causes of this trend
concluded that, due to increasing work obligations, potential participants are
less able to be absent from work and attend seminars organized in this center.
It became necessary to develop a remote laboratory that makes experiments available
to those users who, for these reasons, are unable to attend seminars in person.

On the other hand, the Faculty of Technical Sciences was, for the last three years, a
member of the TEMPUS project Building Network of Remote Labs for strengthening
university-secondary schools collaboration (NeReLa), one of whose aims was the
development of remote laboratories. Thanks to the knowledge acquired during
participation in this project, and bearing in mind the need to make pneumatic control
experiments available to remote users, the laboratory for distance learning of the basic
principles of pneumatic control described below was developed.

2 Basics of Pneumatic Control

Pneumatic systems consist of an interconnection of various groups of components.
These groups of components form a control path for compressed air flow, starting from
the input or signal components (such as push-button valves), through
processing components, and up to the actuating or power components (such as
pneumatic cylinders). Pneumatic control schemes are composed of five basic
levels (Fig. 1): 1. Power components (actuators); 2. Control elements; 3.
Processing elements; 4. Input elements; 5. Energy supply elements.

A special group of elements, not necessarily a component of every pneumatic
system, are the elements that enable the regulation of velocity or pressure of the actuators;
they are marked as a special level, 1a.

The course on the basics of pneumatic control at our faculty consists of eleven short
examples. All of these examples are represented within the scope of a simple pneumatic system.
In this way, students have the opportunity to learn direct and indirect control of single-acting
or double-acting pneumatic cylinders, the application of 2/2, 3/2, 4/2, and 5/2
command valves (mechanically, pneumatically, or electrically actuated, etc.), and the
application of logic components such as AND and OR modules. In this paper, all eleven
examples are connected in one pneumatic control scheme, shown in Fig. 1.

In the traditional way of learning, practitioners connect the
system components to each other using pneumatic tubes. In order to transform such a
system into a remote learning system, clients activate or deactivate the 2/2 electrically actuated
command valves, which are marked in red on the control scheme (Fig. 1), and in that
way simulate the physical interconnection of the components.
146 B. Bajči et al.

Fig. 1. Developed pneumatic scheme

For better understanding, one example will be shown in detail. If a client wants
to indirectly control a single-acting pneumatic cylinder, it is necessary to go through the
following steps (Fig. 1):
1. Activating the 0V1 valve (2/2) allows the supply of compressed air to the
service unit (0Z);
2. Activating the 1V2 and 1V3 valves (2/2) allows the flow of compressed air from the
service unit to the electrically activated 3/2 valve (1S2) and to the pneumatically
activated 3/2 valve (1V6);
3. Activating the 1V5 valve (2/2) allows the flow of compressed air from the electrically
activated 3/2 valve (1S2) to control connector 12 of the pneumatically activated
3/2 valve (1V6);
4. Activating the 1V9 valve (2/2) allows the flow of compressed air from the pneumatically
activated valve (1V6) to the single-acting pneumatic cylinder (1A);
5. After this simulation of the physical interconnection of the components, activating the
electrically activated 3/2 valve (1S2) causes the single-acting pneumatic cylinder (1A) to
extend;
6. Deactivating the electrically activated 3/2 valve (1S2) causes the single-acting pneumatic
cylinder (1A) to retract to its initial position.
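The interconnection steps above can be condensed into a small logical sketch in which each activated 2/2 valve passes air and each deactivated one blocks it. This is a simplified combinational model of the circuit in Fig. 1, not a physical simulation of pressure dynamics.

```python
# Logical sketch of the indirect single-acting-cylinder exercise: 2/2
# valves simulate tube connections, 1S2 is the electrically actuated 3/2
# valve, 1V6 the pneumatically actuated 3/2 valve (component names from
# Fig. 1; the pass/block abstraction is a simplification).
def cylinder_extended(active):
    """`active` is the set of activated valves; returns True when
    compressed air reaches the single-acting cylinder 1A."""
    supply = "0V1" in active                      # step 1: service unit 0Z fed
    to_1s2 = supply and "1V2" in active           # step 2: air to 1S2
    to_1v6 = supply and "1V3" in active           # step 2: air to 1V6
    pilot = to_1s2 and "1S2" in active and "1V5" in active  # step 3: port 12
    out_1v6 = to_1v6 and pilot                    # 1V6 switches when piloted
    return out_1v6 and "1V9" in active            # step 4: air to cylinder 1A

all_on = {"0V1", "1V2", "1V3", "1V5", "1V9", "1S2"}
print(cylinder_extended(all_on))            # True: cylinder extends (step 5)
print(cylinder_extended(all_on - {"1S2"}))  # False: cylinder retracts (step 6)
```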

3 Remote Control of the System

As already mentioned, using the pneumatic scheme developed in this
paper, it is possible to realize eleven different smaller exercises related to pneumatic
control. As can be seen from Fig. 1, a large number of electro-pneumatic command
valves are used. Precisely for this reason, the remote control of this
system requires a controller with a large number of digital output signals. In this paper,
a modular controller, the CompactRIO, is used for this purpose.

A client can access our laboratory through the CEyeClon platform [5, 6] and needs
to have only the CEyeClon Viewer software installed. It is necessary to request
an access key for the experiment from the administrators. Figure 2 shows the communication
paths in our system. When the client logs into the system, he/she connects to a remote
computer through the Internet. That computer is physically connected to the controller.
Communication between the PC and the CompactRIO controller is accomplished using
the TCP/IP protocol. The electro-pneumatic command valves are connected to the
digital outputs of the controller. Live monitoring is enabled via a web camera. When the
client logs into the system, it is necessary to launch a file called “Remote Laboratory
for Learning Basics of Pneumatic Control” from the desktop.
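Communication with the controller can be sketched as a line-oriented TCP client. The message format (`SET <valve> <0|1>`), host address, and port below are hypothetical; the actual protocol spoken between the PC and the CompactRIO is not specified in the paper.

```python
# Hypothetical command protocol for toggling one digital output of the
# controller; the real message format is an assumption for illustration.
import socket

def encode_command(valve, on):
    """Build one line-oriented command, e.g. b'SET 1V6 1\n'."""
    return "SET {} {}\n".format(valve, 1 if on else 0).encode("ascii")

def set_valve(host, port, valve, on):
    """Open a TCP connection to the controller and set the digital
    output driving one electro-pneumatic command valve."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(encode_command(valve, on))

# Example call (illustrative address, not the real laboratory host):
# set_valve("", 5000, "1V6", True)
```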
The homepage of the user interface then opens in an Internet browser with a list of
exercises. The exercises on this page are divided into two groups. The first group relates to
the direct control of a pneumatic actuator and consists of two exercises, one for a single-acting
cylinder and one for a double-acting cylinder. The second group relates to the indirect
control of a pneumatic actuator and contains nine different exercises.
The user can choose between two languages, English or Serbian. On selection of
an exercise, a new window opens in the browser. In Fig. 3 the user interface for the

Fig. 2. Ways of communications

first exercise of indirect control is shown. The first thing that the client notices on
this page is the title of the exercise. Below the title are the text and a sketch of a physical
realization of the exercise.

On the left side of the user interface, below the text of the exercise, is an area
provided for drawing the pneumatic scheme. Within this area, at the beginning, only
the basic components necessary for the realization of the selected exercise are placed,
such as cylinders, valves, sensors, etc. A pop-up window appears on pressing the mark
Fig. 3. User interface – indirect control: the first exercise

of one of the components. That window contains a description and a picture of the
selected component, which enables the client to better understand its basic function.
In the top left corner, a legend explains the meaning of the colors of the
pneumatic tubes on the scheme. The tubes are represented with lines: a line is red
when the tube is under pressure and black when it is not. A green
line represents a control signal, and a blue line represents an exhausted tube. An area
with the commands for connecting certain components is located below the legend. These
commands appear and disappear, one after another, as they are pressed. By
pressing a command, for example “Connect the push-button with the command
valve”, a line between these components is drawn. At the same time, a
command is sent to the controller: a certain 2/2 valve is activated (in this example
1V6, shown in Fig. 1), and the push-button and the command valve are physically
connected. A light turned on at the valve can then be noticed on the camera.
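The legend's color rules can be captured in a small mapping; the precedence among the states here is an assumption for illustration.

```python
# Encode the line-color legend of the drawing area: tube state -> color.
# The order in which the states are checked is an assumed precedence.
def tube_color(pressurized=False, control=False, exhausting=False):
    if control:
        return "green"   # control (pilot) signal line
    if exhausting:
        return "blue"    # tube venting to exhaust
    return "red" if pressurized else "black"

print(tube_color(pressurized=True))  # red
```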
Once the components are connected, it is necessary to press the 1S1 button at the
bottom of the user interface. A simulation is then executed on the pneumatic scheme: all the
components are activated and the cylinder extends. The color of the pneumatic
tubes changes depending on the flow of the compressed air, and the extension
of the cylinder can be seen on the camera. By releasing the push-button, in this exercise,
the cylinder retracts. The user interface works on the same principle for the other
exercises. At the top of the interface are two buttons for changing the exercise.
The user interface was implemented in the JavaScript programming language,
while LabVIEW was used to program the controller.

4 Conclusions

In this paper, a remote laboratory for learning the basics of pneumatic control is presented. A
unique pneumatic control scheme, used for the realization of several smaller exercises,
is developed and presented, and the use of the developed user interface is explained. The
CEyeClon platform is used for the realization of remote control; in this way, complete
control over client access to the system is provided. This system can be used as an integral
part of distance learning. The development of remote laboratories like the one described in
this paper is very important for the improvement of learning activities. In this way, it
is possible to attract the attention of a large number of new students and engineers to
the study programs where the laboratories are used, as well as to enable further education
for anyone interested in pneumatic control.


References

1. Horvat, A., Dobrota, M., Krsmanovic, M., Cudanov, M.: Student perception of Moodle learning management system: a satisfaction and significance analysis. Interact. Learn. Environ. 23(4), 515–527 (2015)
2. Rodríguez-Sevillano, A.A., Barcala-Montejano, M.A., Tovar-Caro, E., López-Gallego, P.: Evolution of teaching tools and the learning process: from traditional teaching to edX courses. In: 13th International Conference on Remote Engineering and Virtual Instrumentation (REV), UNED, Madrid, 24–26 February 2016, pp. 42–49. IEEE (2016). ISBN 978-1-4673-8245-8
3. Bjekić, M., Božić, M., Rosić, M., Antić, S.: Remote experiment: serial and parallel RLC circuit. In: 3rd International Conference on Electrical, Electronic and Computing Engineering, IcETRAN 2016, Zlatibor, Serbia, 13–16 June 2016. ISBN 978-86-7466-618-0
4. Šešlija, D., Milenković, I., Dudić, S., Šulc, J.: Improving energy efficiency in compressed air systems – practical experiences. Thermal Sci. (2016). ISSN 0354-9836
5. Zurcher, T.: Distance education in energy efficient drive technologies by using remote workplace. In: 11th International Conference on Remote Engineering and Virtual Instrumentation (REV), 26–28 February 2014. IEEE. ISBN 978-1-4799-2024-2
6. Zurcher, T., Rojko, A., Hercog, D.: Education in industrial automation control by using remote workplaces. In: 3rd Experiment@ International Conference on Online Experimentation (' 2015), University of the Azores, Ponta Delgada, 2–4 June 2015. IEEE
The Augmented Functionality of the Physical
Models of Objects of Study for Remote Laboratories

Mykhailo Poliakov, Karsten Henke, and Heinz-Dietrich Wuttke

Zaporizhzhya National Technical University, Zaporizhia, Ukraine
Ilmenau University of Technology, Ilmenau, Germany

Abstract. The remote laboratory is an important and rapidly growing component
of distance learning systems for engineering specialties. Such labs allow remote
users to enter the data of a technical experiment, which is transmitted to the server
where it is converted into control signals for the physical and/or virtual model of
the object of the experiment. The level of remote laboratories in engineering
education largely depends on the level of the models of objects of study that they
use. The use of physical models in remote laboratories has raised a number
of issues for their creators and operators: a limited range of experiments
with the physical model, the complexity of modernization, the high cost of
new models, and others. The aim of the present work is to improve and extend the
scope of existing physical models. This goal is to be achieved by
increasing or adding functionality of the physical models through the use of augmented
reality, augmented virtuality, and augmented behavior of the object of study. The
work describes the variety and the advantages of hybrid models and of interfaces that
enhance their functionality, and lists examples of added functionality.

Keywords: Remote laboratories · Physical models · Augmented functionality

1 Introduction

The advantages of distance education stimulate the improvement of its components [1],
among which, in the last decade, remote laboratories (RL) have developed rapidly
[2–4]. These laboratories include a server with a set of physical models
of the object of study. For example, the laboratories of the Grid of Online Laboratory
Devices Ilmenau (GOLDi) contain the physical models (PM) Elevator, 3-Axis-Portal,
and Production Cell, with devices for their control [5]. Users enter the data of a technical
experiment from a remote computer. This information is sent over the Internet to the
server of the laboratory, where it is converted into control signals for the physical and/or
virtual model of the object of the experiment. The progress and results of the
experiment are perceived by the user by means of the user perceptual interface [6]. The
outputs of this interface are a user perceptual image and the flow of user commands. An
example of the User Perceptual Image (UPI) is a computer screen with a Web image
of the physical model and the visual part of the virtual model of the object of the
experiment [6].

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_15
152 M. Poliakov et al.

However, both from the user’s point of view and from the point of view
of the designer of a remote laboratory, the object of the experiment is represented by a
system with physical and virtual elements that interact with each other and with the
environment. To describe such objects, in a number of cases the term CPS
(Cyber-Physical System) is used, which declares the connection of physical objects with
computational algorithms [7]. Despite the importance of solving the issues of managing
remote users of RL and real-time students’ interaction with the model of the object, a
necessary condition for effective use of RL is a sufficient number of experiments with
models of the object of study, as well as the quality and informative value of the User
Perceptual Image.
Section 2 of the article gives an overview of publications on the technologies of
augmented reality used by the authors to extend the functionality of the PM of RL.
Section 3 describes the models of the object of study and the interfaces involved in
the formation of the image perceived by the user. Conclusions and acknowledgements
are set forth in Sect. 4 and the Acknowledgment section of the article.

2 State of Art

Contemporary research is aimed primarily at improving the quality and informational
content of the User Perceptual Image. For this, Augmented Reality (AR) technology is
used. AR is a scientific discipline whose essence is disclosed in numerous
publications (e.g. [8–12]); regular scientific conferences are held on the subject [13], and
a dedicated journal is published [14].

AR themes were also reflected in previous sessions of the International Conference on
Remote Engineering and Virtual Instrumentation. In the works [15, 16], the state
changes of the LEDs of a physical model of a traffic light are displayed on the remote user’s
screen in a video window. The RL software detects certain changes in the frames of this image,
which are classified as events. The events control an induced video overlay on video
elements of the physical model; in this case, images of moving vehicles are overlaid.
increase the quantity and quality of the experiments without significant changes in the
physical models.
The concept of functional models of objects of study was mentioned in [17] in
connection with the analysis of the structure of the hybrid model. That work presents
an example of the behavior of the physical traffic light model in the RL RELDES [18].
The experiment is carried out with a hybrid model, which includes a physical model of
a traffic light complemented by fault diagnosis behavior and by simulation of defects
of the traffic light lamps.
In [19] it is proposed to add a virtual model to the physical model of the elevator
in the lab GOLDi [5]. This model was originally used to explore the construction of
FSM controls. By adding a virtual model of the stream of users' control commands
and a virtual queue of users waiting for service, experiments with more complex
control algorithms, experiments on queueing theory, and performance evaluation of
real-time systems become possible. Below, new results in this direction are presented.
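To make the discrete part of such experiments concrete, the sketch below models a minimal elevator controller as an FSM served by a virtual stream of floor calls (a queue). The class, its state names, and the stepping scheme are illustrative assumptions for this article, not the GOLDi implementation.

```python
from collections import deque

class ElevatorFSM:
    """Minimal elevator controller: IDLE -> MOVING -> DOORS_OPEN -> IDLE.
    Floor calls arrive through a virtual command stream (a FIFO queue)."""

    def __init__(self, floors=4):
        self.floor = 0
        self.state = "IDLE"
        self.calls = deque()          # virtual stream of user calls
        self.target = None
        self.floors = floors

    def call(self, floor):
        # A virtual user presses a call button on some floor.
        if 0 <= floor < self.floors:
            self.calls.append(floor)

    def step(self):
        # One discrete FSM transition per simulated time step.
        if self.state == "IDLE" and self.calls:
            self.target = self.calls.popleft()
            self.state = "MOVING" if self.target != self.floor else "DOORS_OPEN"
        elif self.state == "MOVING":
            self.floor += 1 if self.target > self.floor else -1
            if self.floor == self.target:
                self.state = "DOORS_OPEN"
        elif self.state == "DOORS_OPEN":
            self.state = "IDLE"
        return self.state, self.floor

elev = ElevatorFSM()
for f in (3, 1):
    elev.call(f)
trace = [elev.step() for _ in range(8)]
```

Replacing the fixed call list with a random arrival process would turn the same loop into a queueing-theory experiment of the kind described above.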
The Augmented Functionality of the Physical Models of Objects 153

3 Added Functionality for Improving the User Perceptual Image


The models and interfaces of the RL involved in the formation of the User Perceptual
Image are shown in Fig. 1.

Fig. 1. Models and interfaces of the RL involved in the formation of the User Perceptual Image

The physical interface includes the flows of information from the physical external
environment and from the physical model of the object of study to their hybrid model,
as well as the flow of control actions on the object of study from the hybrid model.
The virtual interface includes the flows of information from the virtual external
environment and from the virtual models of the object of study to their hybrid model,
the flow of control actions on virtual objects of study, and the flow of information to
synchronize the physical and virtual models of the object of study.
The role of the hybrid model is the selection, switching, and integration of the
information coming through the physical and virtual interfaces, depending on the
selected RL operating mode. In addition, the hybrid model generates the streams of
source data for the work of the media model of the experiment and receives the stream
of user commands for the control of the experiment.
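The selection and switching role of the hybrid model can be sketched as a source selector: depending on the operating mode, each output channel is fed from the physical interface, the virtual interface, or a merge of both. The mode table and channel names below are invented for illustration.

```python
# Hypothetical sketch of the hybrid model's source selector. For each output
# channel, the selected RL operating mode decides whether the data comes from
# the physical interface, the virtual interface, or a merge of both.

MODES = {
    "physical": {"image": "phys", "params": "phys"},
    "virtual":  {"image": "virt", "params": "virt"},
    "hybrid":   {"image": "merge", "params": "phys"},
}

def select(mode, channel, phys, virt):
    src = MODES[mode][channel]
    if src == "phys":
        return phys
    if src == "virt":
        return virt
    # merge: overlay virtual data on the physical stream
    return {**phys, **virt}

frame = select("hybrid", "image",
               phys={"camera": "frame_042"},
               virt={"overlay": "queue_animation"})
# frame now carries both the camera image and the virtual overlay
```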
The network interface performs the standard functions of information exchange
between the server and the remote RL user’s computer.
On the remote user’s computer the media (visual, audio, etc.) model of the
experiment performs and the user command to manage a hybrid/virtual/physical model
of the object of study is processed.
Finally, the user interface implements interaction “sensors user – output device”
and “user input device”.
A physical model of the object of study is the means for the study of the control
system, which also includes the control device and the environment.
The interface of the physical model with the rest of the system (physical interface)
is implemented using sensors and actuators, the composition of which is shown in Fig. 2.
We distinguish physical parameter sensors and image sensors. Examples of the first are
sensors of electric currents and voltages, of the speed of movement of the object of
study, and of the temperatures of object elements; an example of the second is a web
camera. The camera receives an image of the physical model and the environment
during the experiment, although other images are also possible: for example, acoustic,
thermal, or magnetic field images.

Fig. 2. Structure of the physical interface
If the environmental parameters are significant for control, environment parameter
sensors are applied.
Actuators exert a control action on the control object. As a result of these control
actions, the values of the signals at the inputs of the sensors and/or the image of the
control object are changed. For example, if the actuator is a heater, it will change the
temperature and possibly other parameters of the control object (CO). But if the
actuator (e.g., a lighting device, or a device that rotates/moves the web camera)
operates in conjunction with image sensors, it will change the image of the control
object.
In control theory, the values obtained from the sensors are called observed
variables, and the signals at the outputs of the actuators are referred to as controlled
variables. In general, the control object may have a multitude of unobservable and
uncontrollable variables.
Hardware of the physical model control unit, such as programmable logic controllers
(PLCs) and microprocessor (MP) boards, performs analog-to-digital conversion of the
parameters and controls the actuators.
Hardware of the image control unit performs similar operations with respect to the
image of the object of study.
A virtual interface is a set of services that complement, and in some cases replace, the
information about the object of study and the environment received through the
physical interface. There are also services that generate a stream of user commands.
The virtual interface is implemented as a set of software modules running on the RL
server or on the remote user's computer. Each module implements a specific set of
virtual functions in the course of the experiment on the object of study.
A module of the virtual interface is characterized by the object and the form of the
functionality it generates. The objects of functionality are the object of study, its
physical model, the external environment (including the flow of user commands), and
the hardware and software of the RL. The types of generated functionality are the
image of the object, the UPI, parameters, and the behavior of the object.
The scale for evaluating the extent to which the functionality of the interface
module conforms to that of the object can have the following gradations: absent (f0),
reduced (f1), equivalent to the model (f2), advanced (f3), equivalent to the object (f4),
added (f5), and new (f6). A comparison of functionality is shown in Fig. 3.

Fig. 3. A comparison of the functionality of the virtual interface: fCO, fPM - functionality of the
controlled object and its physical model

The bases of the coordinates of the simulated object parameters of the generated
functionality are: observable and controllable variables; unobservable and
uncontrollable variables; the full range of observable, unobservable, controllable, and
uncontrollable variables.
The bases of simulated behavior with respect to the objectives of the experiment
and the modes of use of the object of study are: behavior in normal mode; behavior in
emergency mode; control of the technical state of the elements of the object of study;
control of the external environment.
The bases of simulated behaviors with respect to the selected type of control are:
discrete control behavior based on an FSM; continuous control behavior based on the
structure of the control system, the transfer functions of the elements of the object of
study (or its physical model), and the regulators; and hybrid control behavior, in which
the current state of the system is formed by means of discrete control, while the
actions in that state are determined by a model of continuous control.
The time basis of simulated images, parameters, and behaviors includes the
current time, the historical trend, and the forecast.
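The hybrid control variant described above can be sketched in a few lines: a discrete FSM selects the active state, and within each state a continuous control law acts on the plant. The plant model, gains, and thresholds below are invented for the sketch and do not come from the paper.

```python
# Illustrative hybrid control loop: a discrete FSM ("HEAT"/"HOLD") selects the
# active state; within each state a proportional control law (the continuous
# layer) drives a simple first-order plant with heat losses.

def hybrid_control(setpoint=50.0, steps=200, dt=0.1):
    temp, state = 20.0, "HEAT"
    gains = {"HEAT": 4.0, "HOLD": 1.0}           # per-state P gains (assumed)
    for _ in range(steps):
        # Discrete layer: FSM transition triggered by a threshold event.
        if state == "HEAT" and temp >= 0.95 * setpoint:
            state = "HOLD"
        # Continuous layer: proportional control within the current state.
        u = max(0.0, gains[state] * (setpoint - temp))
        temp += dt * (u - 0.1 * (temp - 20.0))    # plant: heating minus losses
    return state, temp

state, temp = hybrid_control()
# After the run the FSM has switched to "HOLD" and the temperature has
# settled near the setpoint.
```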
The virtual image generated by the modules of the virtual interface depends on the
modules that define the current virtual behavior and on the parameters of the object of
study and the external environment.

The main categories that characterize a virtual image are: realism/metaphor; degree of
coverage (whole/part) and scale; dimension (2D/3D, mono/stereo) and format in
pixels; type of media (text/photo/video); view direction (on the object/from the
object); illumination angle and the number of survey points; visualization object
(object of study/visual model/external environment/trend/design models such as an
FSM graph, chart, UML diagram, or control program text); and consistency with the
user's senses ("visible"/"invisible" visualization).
The virtual image must satisfy ergonomic and technical requirements.
As mentioned above, the functionality generated by the virtual interface is
transmitted to the hybrid model. The structural scheme of the hybrid model is shown
in Fig. 4.

Fig. 4. Structural scheme of the hybrid model

The configuration of the hybrid model is controlled by the RL. Images, parameters,
and states are connected through the source selector. Using the destination selector
setting, the involved modules receive the input information necessary for initialization
and operation.
A physical model (PM) and software image control units, with the corresponding
FSMs, occupy important places in the structure of the hybrid model. The presence of
archived experiment data on a machine-readable carrier can expand the information
basis of the researcher and allows the use of statistical research methods.
The results of the hybrid model, in the form of a stream of images and of values of the
media model's tags, are transmitted via the network interface to the remote user's
computer. The standard network is the Internet. In the context of its use in the RL, it
must meet the data-rate requirements, especially if the UPI contains complex images.
The details of the transformation of the tag values of the displayed objects of the
model into elements of the UPI image are given in [17].
Examples of additional features that provide augmented functionality are given in
Table 1 for the known physical models of an elevator and a traffic light.

4 Conclusions
1. The specificity of RL experiments is that their results are perceived by the user
remotely from the object of study. Therefore, the content and technology of creating
the UPI are key to improving the quality and diversity of experiments. Today the main
varieties of UPI are a "live" web picture of the object during the experiment and an
animated image controlled by the tags of the virtual model.

Table 1. Examples of additional features that provide augmented functionality

Physical model/RL: Elevator, 4 floors / GOLDi
- UPI type: video of the elevator model moving (UPI1); animation of the lamp indication and of the cabin door opening/closing (UPI2)
- Functionality type: functionality in the normal mode (f2)
- Object of study: FSM control for the normal mode

Physical model/RL: Elevator, 4 floors / GOLDi
- UPI type: UPI1 + (UPI2 + animation of triggering the emergency brake and of the cable termination process (UPI3))
- Functionality type: functionality in the normal and emergency modes (f3 or f4); defects simulator (f6)
- Object of study: FSM control for normal and emergency modes

Physical model/RL: Elevator, 4 floors / GOLDi
- UPI type: UPI1, UPI2 + augmented visuality of the users' command stream (UPI4)
- Functionality type: functionality in the normal mode (f2) and simulation of the stream of calls to the elevator from the floors
- Object of study: technical state of the equipment of the physical model; FSM for the optimal operation of the elevator in the call flow; queues of passengers; the interaction of components

Physical model/RL: Traffic light / RELDES
- UPI type: the sequence of switching the LEDs (video and animation) (UPI5)
- Functionality type: functionality in the normal mode (f2)
- Object of study: C programming of the control FSM

Physical model/RL: Traffic light / RELDES
- UPI type: UPI5 + augmented visuality with time displays
- Functionality type: functionality in the normal mode (f3)
- Object of study: C programming of the control FSM

Physical model/RL: Traffic light / RELDES
- UPI type: UPI5 + emergency mode indication (UPI7)
- Functionality type: functionality in the normal and emergency modes (f3 or f4); defects simulator (f6)
- Object of study: FSM control for normal and emergency modes

Physical model/RL: Traffic light / RELDES
- UPI type: UPI5 + augmented virtuality of the car stream [16]
- Functionality type: functionality in the normal mode (f2) + the functionality of highlighting events in the video (f6)
- Object of study: traffic light FSM based on the cars' position and stream
2. The key technology for improving the UPI is "augmented" technology. The
varieties of this technology used so far (augmented reality, augmented visuality) as a
rule do not affect the behavior of the object of study. The behavior of the object
implies the dependence of the output response on the internal state. Discrete behavior
is specified using an FSM; continuous behavior is specified using the structure of the
feedback loops and the transfer functions of the regulators. Moreover, new behavior
leads to new functionality of the object of study, which allows us to speak about
"augmented functionality".
3. The implementation of augmented functionality in an RL involves the interaction of
a number of interfaces (physical, virtual, network, and perceptual user interface) and
models (physical, virtual, hybrid, and visual). Added functionality is synthesized by
modules of the virtual interface managed by the hybrid model.
4. The following concepts are associated with added functionality: the object of the
functionality, the gradations of addition, the basis of the coordinates of the simulated
parameters, the basis of simulated behavior with respect to the objectives of the study
and the use of the object, the basis of simulated behaviors with respect to the selected
type of control, and the time basis of the simulated images, parameters, and behaviors.
These and other categories of functionality were analyzed.
5. It is proposed to use the term "media model of the object of study in the external
environment" instead of the term "visible model". The categories of images reflected
in the UPI when using added functionality were analyzed.
The proposed methods of enhancing the functionality of the models of the object
of study are to be used for expanding the range of experiments with the models of the
remote laboratories GOLDi at Ilmenau University of Technology and at Zaporizhzhya
National Technical University (Table 1).

Acknowledgment. This work was partially carried out within the European Community Project
“Tempus” ICo-op: Industrial Cooperation and Creative Engineering Education based on Remote
Engineering and Virtual Instrumentation 530278-TEMPUS-1-2012- 1-DE-TEMPUS-JPHES.
The authors are grateful to Ilmenau University of Technology (Germany) and
Zaporizhzhya National Technical University (Ukraine) for the opportunity to work
with the remote laboratories.

References

1. Azad, A.K.M., Auer, M.E., Harward, V.J. (eds.): Internet Accessible Remote Laboratories:
Scalable E-Learning Tools for Engineering and Science Disciplines, Engineering Science
Reference, 645 p. (2012)
2. Gravier, C., et al.: State of the art about remote laboratories paradigms - foundations of
ongoing mutations. Int. J. Online Eng. (iJOE) 4(1), 1–9 (2008)
3. Remote and virtual tools in engineering: monograph/general editorship, Karsten Henke,
Dike Pole, Zaporizhzhya, Ukraine, 250 p. (2015). ISBN 978–966–2752–74–8

4. Gomes, L., Bogosyan, S.: Current trends in remote laboratories. IEEE Trans. Industr.
Electron. 56(12), 4744–4756 (2009). doi:10.1109/TIE.2009.2033293
5. GOLDi-labs cloud Website:
6. Richir, S., Fuchs, P., Lourdeaux, D., Millet, D., Buche, C., Querre, R.: How to design
compelling virtual reality or augmented reality experience? Int. J. Virtual Reality 15(1), 35–
47 (2015)
7. Terkowsky, C., Jahnke, I., Pleul, C., Licari, R., Johannssen, P., Buffa, G., Heiner, M.,
Fratini, L., Valvo, E.L., Nicolescu, M., Wildt, J., Tekkaya, A.E.: Developing tele-operated
laboratories for manufacturing engineering education. Platform for eLearning and Telemetric
Experimentation (PeTEX). Int. J. Online Eng. (iJOE) 6, 60–70 (2010). doi:10.
3991/ijoe.v6s1.1378. REV2010, Vienna, IAOE, Special Issue 1
8. Wikipedia 2016:
9. Cao, M., Li, Y., Pan, Z., Csete, J., Sun, S., Li, J., Liu, Y.: Creative educational use of virtual
reality: working with second life. IEEE Comput. Graph. Appl. 34(5), 83–87 (2014)
10. Hughes, C.E., Stapleton, C.B., Hughes, D.E., Smith, E.M.: Mixed reality in education,
entertainment, and training. IEEE Comput. Graph. Appl. 25(6), 24–30 (2005)
11. Schaf, F.M., Pereira, C.E.: Integrating mixed-reality remote experiments into virtual learning
environments using interchangeable components. IEEE Trans. Industr. Electron. 56, 4776–
4783 (2009)
12. Milgram, P., Colquhoun, H.: A taxonomy of real and virtual world display integration. In:
Ohta, Y., Tamura, H. (eds.) Merging Real and Virtual Worlds, pp. 5–30. Ohmsya Ltd.,
Springer (1999)
13. Vlada, M., Albeanu, G.: The potential of collaborative augmented reality in education. In:
The 5th International Conference on Virtual Learning, ICVL 2010. Targu – Mure, Romania,
29–31 October 2010, pp. 39–43 (2010)
14. Int. J. Virtual Reality.
15. Maiti, A.A.K., Maxwell, A.: Variable interactivity with dynamic control strategies in remote
laboratory experiments. In: International Conference on Remote Engineering and Virtual
Instrumentation, REV2016, Madrid, Spain, 24–26 February 2016, pp. 399–407 (2016)
16. Smith, M., Maiti, A., Maxwell, A.D., Kist, A.A.: Augmented and mixed reality features and
tools for remote laboratory experiments. Int. J. Online Eng. (iJOE) 7, 45–52 (2016). IAOE, Vienna
17. Poliakov, M., Larionova, T., Tabunshchyk, G., Parkhomenko, A., Henke, K.: Hybrid
models of studied objects using remote laboratories for teaching design of control systems.
Int. J. Online Eng. (iJOE) 9, 7–13 (2016). IAOE
18. Parkhomenko, V., Gladkova, O., Ivanov, E., Sokolyanskii, A., Kurson, S.: Development and
application of remote laboratory for embedded systems design. Int. J. Online Eng. (iJOE) 11
(3), 27–31 (2015). IAOE, Vienna
19. Poliakov, M., Larionova, T., Wuttke, H.-D., Henke, K.: Automated testing of physical
models in remote laboratories by control event streams. In: 2016 International Conference on
Interactive Mobile Communication, Technologies and Learning (IMCL), 17–19 October
2016, San Diego, CA, USA, pp. 10–13. IEEE (2016)
More Than “Did You Read the Script?”
Different Approaches for Preparing Students for Meaningful
Experimentation Processes in Remote and Virtual Laboratories

Daniel Kruse1(✉), Robert Kuska1, Sulamith Frerich1, Dominik May2,
Tobias R. Ortelt2, and A. Erman Tekkaya2

1 Ruhr-Universität Bochum, Bochum, Germany
2 TU Dortmund University, Dortmund, Germany

Keywords: Preparational activities · Remote lab · Interactive training · Online


1 Introduction

Project ELLI (Excellent Teaching and Learning in Engineering Science) is a joint project
of the three German universities RWTH Aachen, TU Dortmund University, and Ruhr-
University Bochum. Considering teachers' and learners' perspectives, the project aims
to improve existing concepts in higher engineering education and to develop innovative
new approaches. In the past years, a pool of remote and virtual labs has been developed
and set up in order to gain flexibility in the usage of experimental equipment in different
pre-set scenarios. Teachers can use these virtual and remote laboratories in class for
demonstrating engineering practice, and the labs can also support students in
individually discovering scientific concepts.

2 Virtual Learning Environment

The use of labs in general can be distinguished by research, development, and training
purposes. In engineering education, labs are often used to introduce students to
experimental work or to explain a phenomenon in a realistic way. The project ELLI
aims for several improvements in the field of teaching and learning in engineering
science. A main aspect is to establish remote learning experiences.

2.1 Remote Labs Setting Local Bochum

The Project ELLI started a virtual and remote lab project in 2011. At the Ruhr-
Universität Bochum, a call for ideas enabled interested professors to hand in their ideas
for a lab using remote or virtual technology. Among all suggested ideas, 10 were
selected by an independent jury and received investment support. These ideas had to
describe a concept in which the lab would be used. The selected ideas usually focused
on a specific group with a well-known educational background. Nowadays, the Project
ELLI at the Ruhr-Universität Bochum provides a pool of more than 10 remote or virtual
labs in different disciplines and teaching environments [1]. Each lab was built under the
responsibility of a scientific chair, with differences in the curricula of the three
participating faculties at three different universities. Therefore, the authors defined the
remote learning process in its different steps to allow a more modular and
interchangeable development of the provided resources. With labs in material science,
e-mobility, or process technology, each lab was built around at least one particular idea
of its usage. This usually aimed at a specific target group of users standing in a specific
relation to the lab.

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_16

2.2 Remote Labs Setting Local Dortmund

At TU Dortmund University, another approach has been followed. In a strong
cooperation between the Institute for Forming Technology and Lightweight
Construction and the Center for Higher Education, a remote lab for manufacturing
technology has been developed. This work is based on successful outcomes achieved
within a prior project called PeTEX [2]. The developed laboratory gives both students
and teachers the opportunity to conduct experiments in the field of manufacturing
technologies, especially for material characterization. Figure 1 (right) shows the
laboratory with two testing machines for sheet metal forming and tensile tests. In
addition, the lab is equipped with an industrial robot with several grippers for specimen
handling and the equipment needed for the experiment's automation and control. In
recent years, the tensile test has been the focus of implementation into educational
contexts. This test is one of the most common and efficient tests for obtaining the
material properties of the tested specimen [3]. The determined properties describe the
behavior of the material. Furthermore, the properties can be used in forming
applications like FEM simulations (e.g., simulation of forming or production
processes). This is why it is a very basic but also an important test in the context of
manufacturing technology. The developed remote lab has been introduced in several
educational contexts so far. Now it is used in lectures as well as in practical training
courses and even in completely online delivered courses. In order to perform
experiments in the remote lab, the user can use the specially designed graphical user
interface. Using this interface, it is possible to prepare, start, pause, stop, watch, and
even analyze the ongoing experiment (Fig. 1, left part).

Fig. 1. Remote laboratory at TU Dortmund (right) with the graphical user interface for user-
experiment-interaction (left)
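The material parameters mentioned above can be derived from the recorded force-elongation data of a tensile test. The snippet below is a generic sketch of this analysis step, not the lab's actual software; the synthetic data assumes an idealized aluminum-like specimen.

```python
def tensile_parameters(force_N, elong_mm, area_mm2, gauge_mm):
    """Engineering stress/strain from a uniaxial tensile test record, plus a
    Young's modulus estimate from the initial (assumed linear) region."""
    stress = [f / area_mm2 for f in force_N]        # MPa (N/mm^2)
    strain = [dl / gauge_mm for dl in elong_mm]     # dimensionless
    # The slope over the first few points approximates the elastic modulus.
    n = max(2, len(stress) // 4)
    E = (stress[n - 1] - stress[0]) / (strain[n - 1] - strain[0])
    return stress, strain, E

# Synthetic, idealized record: purely elastic response with E = 70 GPa.
area, gauge = 12.0, 80.0                            # cross-section mm^2, gauge length mm
elong = [0.00, 0.02, 0.04, 0.06, 0.08]              # elongation in mm
force = [0.0, 210.0, 420.0, 630.0, 840.0]           # force in N
stress, strain, E = tensile_parameters(force, elong, area, gauge)
```

A real record would continue into the plastic region, where further parameters (yield strength, hardening exponent) are extracted; the stress/strain conversion stays the same.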

3 Learning Processes in Remote and Virtual Experimentation


Introducing experimentation exercises with remote or virtual equipment into
educational processes differs from instruction in classical hands-on labs. Whereas in
hands-on labs a scientific assistant normally guides or supervises the experimentation
process (and, in parts, the learning process too), the essence of virtual or remote lab
learning is a non-guided and non-supervised process. This process can be seen in the
following stages:
(1) Orientation
(2) Preparation
(3) Performing the experiment
(4) Reporting the experimental results.
Before performing any type of experiment, preparation is needed [4]. Classic hands-on
lab preparation is often based on a scriptum or some kind of document that has to be
read by the students before coming to the lab. Such a scriptum contains the theoretical
background and the methodology used, as well as technical characteristics and, not
least, the task that should be performed. The students have to get familiar with the
content and be prepared to be tested on the experimental content. In virtual or remote
labs, things are a bit different. As the whole experience is meant to be highly
independent, all aspects of the process must work in an intuitive and helpful way,
without lowering the necessary effort for the student's performance. One of the main
differences is the feedback on the student's preparation. In a classic hands-on lab, this
is 'assessed' by a supervisor during a short interview, a discussion, or the observation
of the physical preparation of the experiment. Whereas these aspects fit the hands-on
lab, the lack of a supervisor in a remote or virtual setting leads to new challenges [5].
Here the two main challenges are the examination of the necessary preparation and the
option of giving feedback on the process of flexibly setting up an experiment. The
following approaches deal with these challenges. As the remote laboratories are
developed independently at the two locations, the students' preparation will be
explained separately.

3.1 Preparation for VRL (Bochum)

While offering remote learning resources, the question arises whether a scriptum is still
the best way of preparing students for a remote experiment. The balance between a
challenging task and a guided experience is crucial for the whole remote learning
process. Therefore, the preparation phase has to be rethought. A setup for performing
experiments in the field of process technology can contain several apparatus and
instruments. For gaining experience in setting up an experimental plant, it must be
possible to change between different designs. In a process technology experiment, a
pressure drop should be measured in different flow states. If this task is performed in a
classic hands-on lab, the students may be able to choose the necessary equipment like
pumps, pipes, and pressure gauges. Offering a similar experiment in a remote
environment leads to the problem that the plant has to be put together in advance and
stand by until it is accessed [6]. In this simple case, the students would not be able to
work out their own setup, possibly make mistakes, and learn about the physical
relations between the separate parts of the setup. To remove this lack of experience in
a remote scenario and to allow the students to reflect their knowledge about the
upcoming task, a virtual workbench for developing a virtual process scheme was
considered helpful.
Process schemes contain a lot of information about a technical setup. There are
several disciplines and different layers that can lead to one main scheme, which
describes the whole setup. As the users of the ELLI remote lab pool are engineering
students, they are familiar with process schemes from different lectures or trainings.
Even if they have not created a scheme on their own, they know the symbols, the
connection types, and how to read flow schemes [5]. Based on the remote scenario and
the assumption that a student is already familiar with the task of measuring the
pressure drop of a fluid, the virtual process scheme is used to reflect the student's state
of understanding and preparation.
The virtual process scheme is built as a virtual experiment development environment
(see Fig. 2), like the interactive content objects described in [7]. A number of devices
and pieces of equipment can be chosen by their schematic symbols. The amount of
available equipment is larger than required by the assigned task. First, the students
have to identify the devices necessary for their respective task. In the case of a flow
testing rig, a pump, pipes, and pressure gauges are needed, as well as a device to cause
the pressure drop needed to observe different states of operation. If a schematic
symbol is chosen to be used in the virtual process scheme, it can be placed anywhere
on the virtual workbench. At least two symbols need to be placed on the virtual
workbench before a connection can be created. Each symbol can accept four
connections, but only one on each side. A connection is created by choosing the type
of connection, choosing its starting point, and selecting its end point. During the
process of creating a connection, the symbols placed on the workbench indicate their
ability to accept or decline a connection on one of the four sides with green or red
dots, respectively (see Fig. 3).

Fig. 2. A virtual workbench for developing process schemes, with the repository area at the
bottom and the connection control area on the right side.

Fig. 3. Symbols on the virtual workbench during the connection creation process.
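The one-connection-per-side rule behind the green/red dots can be sketched as a small data structure. The class and function names below are illustrative assumptions, not the actual ELLI implementation (which runs in the browser).

```python
# Sketch of the workbench's connection rule: each symbol has four sides and
# can accept at most one connection per side (shown as green/red dots in the UI).

SIDES = ("top", "right", "bottom", "left")

class Symbol:
    def __init__(self, name):
        self.name = name
        self.ports = dict.fromkeys(SIDES, None)   # side -> connected symbol

    def can_accept(self, side):
        # "Green dot": the side exists and is still free.
        return side in self.ports and self.ports[side] is None

def connect(a, side_a, b, side_b):
    """Create a connection if both chosen sides are still free."""
    if a.can_accept(side_a) and b.can_accept(side_b):
        a.ports[side_a] = b
        b.ports[side_b] = a
        return True
    return False                                   # "red dot": side occupied

pump, pipe = Symbol("pump"), Symbol("pipe")
ok = connect(pump, "right", pipe, "left")          # accepted
dup = connect(pump, "right", pipe, "top")          # declined: side already used
```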

When a student needs assistance or has finished the setup of a virtual process scheme,
a consistency test runs and checks the created flow scheme. For a virtual process
scheme of a flow testing rig, there should be at least one suitable pump, pressure
gauges, and a regulation valve connected in one loop. Open connections or missing
equipment are recognized and can be displayed to the student with a hint about how to
complete the setup. A flow scheme containing all necessary equipment with correct
connections is reviewed and reported as complete.
This consistency test can be adjusted in its complexity by adding more
information to each available symbol. Parameters like flow direction, generated
pressure, pressure drop, or process fluid parameters can be reviewed in the consistency
test. The more information is respected, the more complex the review process
becomes. However, the accuracy of the feedback can be enhanced and individualized
with this enlarged information [5]. The results of such a consistency test can be used
to grant the student access to the real remote lab control or to advise them to review
parts of the experiment's documentation [4, 8].
The virtual process scheme is created using an HTML5 framework called Phaser
( This framework is more commonly used to develop computer games
for web or mobile applications; therefore, its functionality was highly useful for
developing the virtual process scheme. As the code works well on mobile devices, the
virtual process scheme can easily be adapted to mobile use for even more flexibility.
The explicit example of preparing the remote learning experiment on the
measurement of the pressure drop at a flow testing rig can easily be adapted to other
experiments. The idea of the virtual process scheme works in every discipline that
uses schemes or drawings to show interaction and connectivity. Examples of use can
be electrical circuit drawings or drawings of mechanical balances of forces. The
virtual process puzzle eliminates some of the drawbacks of remote experiments in the
field of independent experiment development and of reflection on the state of the
student's preparation. While eliminating some challenges, it also creates new ones,
especially with the consistency test and its usage to create meaningful feedback.

3.2 Preparation for VRL (Dortmund)

The remote lab of TU Dortmund University was developed on the basis of a classical
hands-on lab. As all these achievements build on this existing lab, its preparation
procedure will be explained in the following in order to show how preparation for
non-remote labs has worked so far. In this lab, the students had to determine material
parameters for different materials (steel or aluminum) with a uniaxial tensile test. For
this, the students were divided into groups of four students each. Each group was
supported by a research assistant. The lab experience was divided into four steps. In
the first step, "Preparation", a script (up to 20 pages) was given to the students. This
script provided a brief repetition of the basic facts and the theoretical background in
material characterization. With this in mind, the students were able to conduct the
experiment and understand its context as well as its application. The second step,
"Experiment", started with a short oral assessment in the form of a discussion. During
this discussion, the students' knowledge of the basic facts was tested. In addition, the
students were questioned about the safety concepts of the machines used, for safety
reasons. After this oral exam, the students were introduced to the machine and the
software used. With this information, the students conducted their experiments
basically on their own, if needed with the help of a student assistant. During the
experimentation, they tested different materials in different rolling directions to
determine material parameters. Afterwards, the data was stored on a USB stick. In the
next step, "Analysis and Interpretation", the students determined the material
parameters on their own using their personal devices, such as a PC or laptop. The
calculated material parameters and their interpretation were the basis for the next task,
a lab report. This written report consisted of up to ten pages and an appendix with, for
example, different plots. The report had to be handed in to the supervisor three weeks
after the lab session. The last step, "Examination – Presentation", started with a check
of the lab report by the supervisor. In a short presentation, the students presented the
main output of the experiments to the supervisor and a second examiner. After a final
discussion, the lab was over and the final grades were announced to the students.
The explained procedure could not be adopted one-to-one in the remote lab context. In particular, the face-to-face contact between the students and the supervisor is missing in the remote context or has to be organized differently. Therefore, new procedures were developed: on the one hand a purely online scenario, and on the other hand a combination of online and offline preparation.
As indicated above, the remote lab has been used in different educational settings so far. The above-explained type of preparation was put into practice in the context of classical on-campus training courses. The combination of online and offline preparation is divided into several steps. In this case, the use of the remote lab is shown in the lecture. The task
166 D. Kruse et al.

of this lab is the determination of material parameters. These material parameters are needed to conduct an FEM simulation in the next step. Therefore, a first run of the experiment using the remote lab is conducted in a lecture or exercise. The lecturer controls the experiment during the lecture in front of the audience. The students can ask questions and discuss their needs in interaction with the lecturer. In a second step, the students need to book a time slot to conduct the experiment using the ilab server. In order to help the students and make a smooth start with the remote lab possible, an online video explains the most important steps. This video is available without any registration on the ilab server. With this information and help, the students can conduct their experiments using the remote lab. After conducting the experiments, the data can be downloaded and the material parameters can be calculated by the students on their own.
Another course context in which the remote lab plays a crucial role is a completely online course for international students, taken in advance of their stay in Germany for the master study program [9]. Part of this course is to conduct online experiments in internationally mixed student groups using the universal testing machine in the remote lab for a tensile test. As the students come from all over the world, one of the challenges is that their knowledge and competence in experimentation theory and practice may differ significantly. Whereas for some students independently performed experimentation processes may be normal and well trained, this is not at all the case for others. Some students may even be introduced to experimentation equipment for the first time in their lives. Nevertheless, for the experimentation with the remote lab at TU Dortmund, it is important to bring the students to an adequate level of competence in experimentation and material characterization. The authors decided to make use of these differences and build heterogeneous, internationally mixed student groups for the important preparation phase. Within this phase, the students did not receive a written script with all the important information; instead, they were asked to do their own individual research on material characterization, based on guiding questions. Figure 4 shows pictures given to the students as a starting point for their research.

Fig. 4. Pictures used to guide students in their research process for material characterization

Taking the pictures shown in Fig. 4 as a starting point, the students are asked to answer the following questions:

1. In the first picture, you see the universal testing machine used at the IUL.
1.1. What are important parts?
1.2. How does such a machine work?
1.3. What is the theoretical background of the tensile test?
1.4. What is it used for?
2. The following pictures show stress strain diagrams.
2.1. What do they show?
2.2. What is the difference between the two diagrams?
2.3. How are they worked out?
2.4. What are important areas?
2.5. Which material properties can be gained through the connected data and how?
Using this approach for experimental preparation, students can, on the one hand, directly develop knowledge about the respective experimentation process themselves. On the other hand, they see and learn where they may have important gaps in knowledge, especially in comparison to other students. Furthermore, and this may be the most important aspect, they can learn from each other. As they have very different educational backgrounds, they recognize, while answering these questions in their respective groups, how far their personal concepts of experimentation differ from those of the others. With each other's help, the students can leverage their individual knowledge about tensile testing and finally reach the common level needed for successful experimentation. To make sure that all students really are on the same level, they have to present their research results in the following course meeting, and the most important aspects are discussed again in the whole group. Observing the students during the following experimentation process and assessing their results, it becomes clear that they are well prepared for the experimentation by going through the procedure explained above. Especially during the discussion of the experiment's results, they benefit from their research in advance of the experimentation. Furthermore, they show good abilities to connect their results with the explanations given in the literature.

4 Actual and Anticipated Outcomes

Using remote learning processes in higher engineering education allows a flexible and individual learning process. Due to the boundary conditions of the physically pre-set setup, a creative discovery of the scientific concepts behind the experimentation process is limited. With different approaches for student activation and preparation, such as virtual process schemes (VPS), static remote laboratory setups can be used in scenarios that give a more flexible experience. In this way, students are asked to take more personal responsibility for their research, their personal learning process and the knowledge gained. They can prove it with several creations in the VPS, getting feedback on their ideas from the system. In a next step, such approaches can even be used to organize access to the laboratory environment based on the students' performance during the preparation process. For example, access to the remote lab could be granted only to those students who received adequate reflection/feedback on the tasks before the experimentation.

Even though there is existing research on the usage of preparation activities, there is still work to be done. Since the ELLI project started its second five-year runtime in 2016, the presented approaches will be put into practice and evaluated within the next two years. Hence, research results are expected on the question of how different students react to different preparation activities and to what extent different kinds of such activities are more or less suitable for different types of remote labs.

5 Summary

With a combination of different preparation activities, the experience of pre-set remote experiments can be improved in the area of individual, flexible and/or research-based learning. Tools like virtual process schemes allow flexible usage and individual feedback on the student's process of learning and understanding. The absence of a procedural manual for the experiment with all information ready to use triggers a scientific way of approaching the necessary information through research and self-learning. The paper presented different remote laboratories at the ELLI universities and explained the different preparation activities, their respective degrees of implementation and first evaluation results about their success.


References

1. Frerich, S., Kruse, D., Petermann, M., Kilzer, A.: Virtual labs and remote labs: practical
experience for everyone. In: Proceedings of IEEE Global Engineering Education Conference
(EDUCON), pp. 312–314 (2014)
2. Terkowsky, C., Jahnke, I., Pleul, C., May, D., Jungmann, T., Tekkaya, A.E.: Pe-TEX@Work:
designing CSCL@Work for online engineering education. In: Goggins, S.P., Jahnke, I., Wulf,
V. (eds.) Computer-Supported Collaborative Learning at the Workplace - CSCL@Work,
Computer-Supported Collaborative Learning Series, vol. 14, pp. 269–292. Springer, New York
(2013). ISBN 978-1-4614-1739-2
3. Tekkaya, A.E.: Metal forming. In: Grote, K.-H., Antonsson, E.K. (eds.) Handbook of
Mechanical Engineering, Chap. 7.2, pp. 554–606. Springer, Heidelberg (2009)
4. Bochicchio, M.A., Longo, A.: The importance of being curricular: an experience in integrating
online laboratories in National Curricula for High Schools. In: Proceedings of 11th
International Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 450–
456 (2014)
5. Graven, O.H., Samuelsen, D.A.H.: Remote laboratories with automated support for learning.
In: Proceedings of 10th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 1–5 (2013)
6. Kruse, D., Frerich, S., Petermann, M., Ortelt, T.R., Tekkaya, A.E.: Remote labs in ELLI: lab
experience for every student with two different approaches. In: Proceedings of IEEE Global
Engineering Education Conference (EDUCON), pp. 469–475 (2016)
7. Wuttke, H.D., Hamann, M., Henke, K.: Integration of remote and virtual laboratories in the
educational process. In: Proceedings of 12th International Conference on Remote Engineering
and Virtual Instrumentation (REV), pp. 157–162 (2015)

8. Dias, F., Matutino, P.M., Barata, M.: Virtual laboratory for educational environments. In:
Proceedings of 11th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 191–194 (2014)
9. May, D., Tekkaya, A.E.: Using transnational online learning experiences for building
international student working groups and developing intercultural competences. In:
Proceedings of American Society for Engineering Education’s 123rd Annual Conference and
Exposition “Jazzed about Engineering Education”, 26th–29th June 2016, New Orleans,
Louisiana, USA (2016). doi:10.18260/p.27171
Collecting Experience Data from Remotely
Hosted Learning Applications

Félix J. García Clemente1(B), Luis de la Torre2, Sebastián Dormido2,
Christophe Salzmann3, and Denis Gillet3

1 Department of Computer Engineering and Technology,
University of Murcia, Murcia, Spain
2 Department of Informatics and Automatics, Computer Science School,
UNED, Madrid, Spain
3 Institute of Electrical Engineering,
Swiss Federal Institute of Technology Lausanne (EPFL),
Lausanne, Switzerland

Abstract. The ability to integrate multiple learning applications from
different organizations allows sharing resources and reducing costs in the
deployment of learning systems. In this sense, Learning Tools Interoperability
(LTI) is currently the leading technology for integrating learning
applications with platforms like Learning Management Systems (LMS).
On the other hand, the integration of learning applications also benefits
from data collection, which allows learning systems to implement Learning
Analytics (LA) processes. Tin Can API is a specification for learning
technology that makes this possible. Both learning technologies, LTI and
Tin Can API, are supported by today's LMSs, either natively or through
plugins. However, there is no seamless integration between these two
technologies that provides learning systems with experience data from
remotely hosted learning applications. Our proposal defines a learning
system architecture ready to apply advanced LA techniques on experience
data collected from remotely hosted learning applications through a
seamless integration between LTI and Tin Can API. In order to validate
our proposal, we have implemented an LRS proxy plug-in in Moodle that
stores learning records in a SCORM Cloud LRS service, and a basic online
lab based on Easy JavaScript Simulations (EjsS). Moreover, we have
tested our implementation using resources located in three European
universities.

Keywords: Learning Management System · Learning Tools Interoperability · Experience API · Learning Analytics

1 Introduction
Nowadays, organizations, companies and universities are collaborating in the
deployment and integration of learning applications. These applications range

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_17
Collecting Experience Data from Remotely Hosted Learning Applications 171

from simple tools like interactive assessment applications to others for domain-specific
learning environments like remote laboratories. Thus, students commonly
access learning tools hosted at other universities or organizations, and rarely
use applications actually deployed on their own university's servers. The ability
to integrate multiple learning applications from different organizations allows
sharing resources and reducing costs in the deployment of learning systems. In
this sense, Learning Tools Interoperability (LTI) [7] is currently the leading
technology for integrating learning applications with platforms like Learning
Management Systems (LMS), portals, learning object repositories, and other
educational environments, including Massive Open Online Course (MOOC)
platforms.
On the other hand, the integration of learning applications also requires data
collection as well as tool interoperability. Data collection allows learning systems
to implement Learning Analytics (LA) processes. LA is useful to measure,
analyze and report on learners in order to optimize their learning. For example,
the interactions and steps followed by learners in a remote lab could be used
to analyze the learning experience. Regarding data collection, Tin Can API
(also known as Experience API or xAPI) [10] is a specification for learning
technology that makes this possible. This API captures data in a consistent
format about learners' activities and enables dynamic tracking of activities from
any learning system. In addition, Tin Can API uses a Learning Record Store
(LRS), a data store system that serves as a repository for learning records.
Both learning technologies, LTI and Tin Can API, are supported by today's
LMSs, either natively or through plugins. Several works have previously used
Tin Can API in order to apply LA, for example SmartKlass [8], a multi-platform
solution that enables data tracking through a dashboard and that can
be embedded in Moodle or any other LMS. Other works have used LTI to obtain
interoperability between LMSs and remote tools; for example, [11] shows how
to develop an external tool for e-assessment. We can also find specific solutions
that provide ad-hoc integration of both technologies; among them, [1]
describes an e-learning architecture with analytic capabilities aimed at training
Unmanned Autonomous Vehicle (UAV) operators. However, there is no seamless
integration between these technologies that provides learning systems with
experience data from remotely hosted learning applications.
In this sense, our proposal defines a learning system architecture ready to
apply advanced LA techniques on experience data collected from remotely hosted
learning applications through a seamless integration between LTI and Tin Can
API. The key outcomes of our proposal are:

– Collecting learning experience data from remotely hosted learning applications
using well-known standard learning technologies.
– Seamless integration of the learning technologies Learning Tools Interoperability
(LTI) and Tin Can API in a Learning Management System (LMS).
– Deployment of the proposed architecture with a remote lab example in Moodle,
using a SCORM Cloud LRS service for storing the experience data, and
Easy JavaScript Simulations (EjsS) to build the remote lab.
172 F.J. Garcı́a Clemente et al.

Following this introductory Sect. 1 on the objective and structure of this paper, Sect. 2
presents the motivating example, which is used throughout the paper to introduce
the concepts related to our proposal. In Sect. 3, we describe our proposal
and how to achieve a seamless integration of the learning technologies. Based
on this, Sect. 4 shows an implementation of our proposal. Subsequently, the
fifth section discusses a specific deployment, which shows the integration using
resources located in three European universities. Finally, conclusions and future
work are drawn in Sect. 6.

2 LMS Interoperability and Experience Data

LTI and Tin Can API technologies are widely used to incorporate advanced
functionality into e-learning systems. The LTI standard aims to deliver a single
framework for integrating any LMS (which takes the role of the so-called Tool
Consumer) with any learning application (the Tool) remotely hosted by another LMS
or learning system (called the Tool Provider). The nature of the relationship
established between a Tool Consumer and a Tool Provider is that the Tool Provider
delegates responsibility for the authentication and authorization of users to the
Tool Consumer. The Tool Consumer provides the Tool Provider with data
about the user, the user's current context and the user's role within that context.
This data is secured by the OAuth protocol, so that the Tool Consumer's
messages can be trusted as authentic. The Tin Can API standard provides a REST/JSON
web service that allows software clients to read and write learning experience
data in the form of statement objects. In their simplest form, statements
read like "I did this", or more generally actor - verb - object. Learning
experiences are recorded in a Learning Record Store (LRS) that can exist within
an LMS or on its own.
In order to illustrate how our solution integrates both technologies, we present
a use case composed of two LMSs, where one takes the role of the Tool Provider

Fig. 1. Use case where the LTI and Tin Can API technologies are used.

and the other acts as the Tool Consumer, as shown in Fig. 1. The Tool Provider
shares learning applications ranging from simple JavaScript-based physics simulations
to complex remote laboratories. The Tool Consumer uses LTI services to provide
local learners with access to the remote tools. Learners use their learning space
through a web browser, and their interactions are stored into an LRS via Tin
Can API.
The learner's LMS can store learning experience data in the LRS, like the
times at which the learner logged in and out, the time spent connected, or a
session count. The data collected by the LMS could be analyzed by LA software
to report useful information in order to assess and evaluate the learning
experience. However, this data alone is of no use if we want to analyze the
learning experience in depth. Other learning data is absolutely necessary, such
as the learner's interactions with the learning applications: mouse and keyboard
events, button clicks or changes in input elements. Advanced LA software based
on data mining and data analysis can process this kind of interaction for
classification and/or clustering. For example, LA software could automatically group
learners and identify those who find more difficulty in interacting or solving a
task defined in the learning application.
Tin Can API provides a mechanism to collect learners' interactions in learning
applications, but it is not currently supported by LMSs in this way. LMSs
usually use Tin Can API to store data related to the learner's experience extracted
from their own database. Therefore, LMSs lack a mechanism to collect experience
data from learning applications. Moreover, the collection of learning experience
data becomes more complex when learning applications are remotely hosted.
In particular, proper authentication and rights management mechanisms between
the LMS, the tools, and the LRS are missing.

3 Seamless Integration via LRS Proxy

Our solution allows the learner's LMS to store its collected data as well as the
learner's interactions by using an LRS proxy. We define an LRS proxy in charge
of receiving the learner's interactions, translating them to Tin Can statements,
and sending those statements to the LRS. In this sense, the Tool must
include the parameters required to connect to the LRS proxy in order to send the
learner's interactions. These parameters are included in the LTI configuration
of the Tool Consumer as custom parameters. The Tool Provider provides Tools
with these LTI parameters, and the Tool can then access the LRS proxy without
any additional authentication process. The LRS stores all learners' interactions,
so that any LA software can use its learning records to analyze the learning
experience.
Considering the previous use case, Fig. 2 shows the proposed architecture,
where the reader can see the integration between the LTI launch process and the
storage of the experience data via the LRS proxy. We introduce below the key
aspects of the seamless integration between LTI and Tin Can API.

Fig. 2. Proposal for seamless integration between LTI and Tin Can API.

3.1 Custom Parameters into LTI Configuration

The LTI link requires a manual configuration process that consists of the
exchange of OAuth credentials. Teachers or course managers are in charge of
the LTI configuration, and so they can decide when a tool is included or shared
in the LMS. When a tool is included in a course, a learner can launch it. The
LMS internal process consists of a Basic LTI Launch Request, where the Tool
Consumer provides the learner's browser with all LTI parameters (OAuth parameters,
context information, user identification and other learning information),
and the browser then uses them to get access to the tool delivered by the Tool
Provider. When the Tool is loaded in the learner's browser, it can connect to a
remote lab if necessary. In this case, the tool must include the access credentials
required to get camera images and interact with actuators and sensors.
The LTI parameters below are taken from Basic LTI sample launch data.

user_id = 288816824
resource_link_id = 18551-bb669-e1e416
resource_link_title = System Activity
context_id = 456434513
launch_presentation_document_target = iframe
launch_presentation_return_url =
lis_person_name_full = "Felix J. Garcia"
lis_person_contact_email_primary =
lti_message_type = basic-lti-launch-request
lti_version = LTI-1p0
tool_consumer_instance_guid =
tool_consumer_instance_name = UNILABS
tool_consumer_instance_description = UNILABS (LMS Moodle)
tool_consumer_instance_url =
oauth_consumer_key = 12345
oauth_signature = QWgJfKpJNDrpncgO9oXxJb8vHiE=
oauth_signature_method = HMAC-SHA1

The parameter user_id uniquely identifies the user, while the lis_ parameters
contain information about the user account that is performing the LTI launch
request. The specific meaning of the content of these fields is defined by Learning
Information Services (LIS) [6]. The launch_presentation parameters describe
the kind of browser window/frame in which the Tool Consumer has launched the
Tool. The tool_consumer_instance fields give details of the Tool Consumer, and
the oauth_ fields are produced by the signing process. The oauth_consumer_key
parameter identifies which Tool Consumer is sending the message, allowing the Tool
Provider to look up the appropriate secret for validation.
In addition to the standard LTI parameters, the creator of an LTI link can
add custom key/value parameters, which are included with the launch of the
LTI link. When there are custom parameters, each one is included in the POST
data when a Basic LTI launch is performed. Creators of LTI links should limit
their parameter names to lower case and use no punctuation other than
underscores.
Our solution proposes to include a set of key/value pairs in the optional
custom section in the LMS that originally authored the link (i.e. the Tool Consumer).
These custom parameters define the Tin Can connection between the Tool and
the LRS proxy. For example, the following LTI custom parameters complete the
previous Basic LTI sample launch data.

custom_TinCan_base_endpoint:
custom_TinCan_activity_id:
custom_TinCan_verbs: changed, clicked, moved

The field custom_TinCan_base_endpoint contains the LRS proxy endpoint
service, while custom_TinCan_activity_id identifies the Tin Can object and
custom_TinCan_verbs defines the verbs that must be used in the Tin Can
statements. These fields are required. Note that the Tin Can actor is identified
unequivocally by the parameter lis_person_contact_email_primary.
These parameters are sent back to the external tool when the tool is launched.
If the LTI link is imported and then exported, the custom parameters should
be maintained across the import/export process unless the intent is to redefine
the link.
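As a sketch, a Tool could extract these custom parameters from the launch data it receives along the following lines. This is a minimal JavaScript illustration, not the paper's actual client code; the launch values shown are invented, and a real client would also validate that the required fields are present.

```javascript
// Extract the Tin Can connection settings from LTI launch parameters,
// following the custom fields proposed in the text. Illustrative only.
function tinCanConfigFromLaunch(launch) {
  return {
    endpoint: launch['custom_TinCan_base_endpoint'],
    activityId: launch['custom_TinCan_activity_id'],
    // custom_TinCan_verbs is a comma-separated list, e.g. "changed, clicked, moved".
    verbs: launch['custom_TinCan_verbs'].split(',').map((v) => v.trim()),
    // The Tin Can actor is identified by the learner's e-mail address.
    actorMbox: 'mailto:' + launch['lis_person_contact_email_primary'],
  };
}

// Invented sample launch data for illustration.
const cfg = tinCanConfigFromLaunch({
  custom_TinCan_base_endpoint: 'https://lms.example.org/lrsproxy',
  custom_TinCan_activity_id: 'https://tools.example.org/ActivitySystem',
  custom_TinCan_verbs: 'changed, clicked, moved',
  lis_person_contact_email_primary: 'felix@example.org',
});
console.log(cfg.verbs); // [ 'changed', 'clicked', 'moved' ]
```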

3.2 Seamless Access to LRS Proxy

Tools, as considered here, are web applications that run within a browser.
Therefore, a learner only needs to use a browser to authenticate against his/her

local LMS (typically with a username and password) and then get access to the
learning space, where he/she can use Tools directly without an additional
authentication process in other remote LMSs. This LTI process, based on the OAuth
protocol and enabling single sign-on (SSO), allows users to enter their credentials
once to gain access to multiple systems.
In the same way, our solution proposes that the learner gains access to the LRS
proxy using an SSO mechanism, thus avoiding a new authentication process.
Specifically, when the Tool is launched, it is presented in a type of browser
window/frame (identified by the LTI field launch_presentation_document_target)
embedded in the learner's learning space. That allows the Tool to access the LRS
proxy service located in the learner's LMS using the current session.

3.3 Collecting Experience Data

The following Tin Can statement is an example of a learner interaction caught
by an external application and sent to the LRS proxy. This statement means
"the user updated the value of the element processes to 10 in the
activity ActivitySystem".

{
  "actor": {
    "mbox": ""
  },
  "verb": {
    "id": "",
    "display": {"en-US": "changed"}
  },
  "object": {
    "id": ""
  },
  "result": {
    "extensions": {
      "": "processes",
      "": "10"
    }
  }
}

The actor object can have two properties, "name" and "mbox", but only
"mbox" uniquely identifies the user. Verbs in Tin Can are URIs and should be
paired with a short display string. Typically, the object will be a Tool, and the
result will include the extensions fields in order to provide a complete description
of the learner interaction.
Additionally, our solution proposes that the allowed verbs are set by the LTI
custom field custom_TinCan_verbs. Therefore, the external application should
only send statements with these verbs. In this sense, the application must be
aware of the possible verbs that can be requested. Considering that the learning
application is running in a web browser, we propose that valid verbs be
associated with HTML events. For example, the onchange event is related to the
changed verb. In addition, an event is triggered by actions inside an HTML element
and may even include relevant action values. These event elements
are included in the statements using the extensions fields, as shown in the
previous example.
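A minimal sketch of this event-to-verb mapping and statement construction might look as follows. The verb and extension URIs are placeholders, not the actual URIs used by the implementation, and the element names are invented.

```javascript
// Map HTML events to Tin Can verbs and build an actor-verb-object
// statement for an interaction, as described in the text. Illustrative only.
const EVENT_TO_VERB = {
  onchange: 'changed',
  onclick: 'clicked',
  onmousemove: 'moved',
};

function buildStatement(actorMbox, event, activityId, element, value) {
  const verb = EVENT_TO_VERB[event];
  return {
    actor: { mbox: actorMbox },
    verb: {
      id: 'http://example.org/verbs/' + verb, // placeholder verb URI
      display: { 'en-US': verb },
    },
    object: { id: activityId },
    result: {
      // The extensions carry the HTML element involved and its new value.
      extensions: {
        'http://example.org/ext/element': element, // placeholder extension URIs
        'http://example.org/ext/value': value,
      },
    },
  };
}

// "The user updated the value of the element processes to 10."
const st = buildStatement('mailto:felix@example.org', 'onchange',
  'https://tools.example.org/ActivitySystem', 'processes', '10');
console.log(st.verb.display['en-US']); // changed
```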

4 Implementation

In order to validate our proposal, we have implemented an LRS proxy plug-in in
Moodle [12] that stores learning records in a SCORM Cloud LRS service [9],
and an LRS proxy JavaScript client based on Easy JavaScript Simulations (EjsS) [4].

4.1 LRS Proxy Moodle Plug-In

Plugins enable the addition of new features and functionality to Moodle, such
as new activities, new quiz question types, new reports, integration with other
systems and many more. Specifically, the LRS proxy is a web service in a local
Moodle installation. The following description declares the service, including the name that
identifies the plugin, the web service functions and internal properties.

$services = array(
    'LRS Proxy' => array(
        'shortname' => 'lrsproxy',
        'functions' => array('lrsproxy_echo_text', 'lrsproxy_store_statement',
            'lrsproxy_store_statements', 'lrsproxy_retrieve_statement',
            'lrsproxy_fetch_statements', 'lrsproxy_store_activity_state',
            'lrsproxy_retrieve_activity_state', 'lrsproxy_fetch_activity_states',
            'lrsproxy_delete_activity_state', 'lrsproxy_clear_activity_states'),
        'restrictedusers' => 1,
        'enabled' => 0
    )
);

The function lrsproxy_echo_text is only for testing purposes. The remaining
functions are divided into two groups. One is related to the functions for storing,
retrieving and fetching statements. These functions are used by Tools and might
also be used by other applications that can manage Tin Can statements. The other
group is for storing, retrieving and fetching states. These functions might be used
by Tools that want to save arbitrary documents in the context of a particular
learner and a particular Tool, for example a snapshot of the learner's experience.

Regarding the internal implementation, the plugin was deployed using TinCanPHP [10],
which provides a PHP library for implementing the Tin Can API.
Moreover, this library includes examples showing how to use the Tin Can endpoint
services available to a SCORM Cloud account.

4.2 LRS Proxy JavaScript Client

In order to use the LRS proxy, Tools must include a client that provides functions
to send Tin Can statements, the capability to listen to user events and build
statements, and the means to parse LTI fields for extracting the LTI link parameters.
Regarding the internal implementation, the client was deployed using
TinCanJS [10], which provides a JavaScript library for implementing the Tin Can
API. In addition, the client was integrated into the EjsS library in order to catch
HTML events and then create the Tin Can statements. The following code shows
how the move events are caught when the moved verb is set in the LTI link.

model.addLRSListeners = function(verbs) {
  if (verbs.indexOf('moved') > -1) {
    document.addEventListener('mousemove', model.sendMovedInteraction);
    document.addEventListener('touchmove', model.sendMovedInteraction);
  }
};

However, a user who creates a simulation or a remote laboratory with EjsS does not need to worry about how the Tin Can statements are sent or how the LTI fields are captured, since the EjsS library does this automatically, in a way that is transparent to the author.

5 Tests and Discussion

In order to validate our proposal, we have deployed a basic remote laboratory.

Moreover, we have tested our implementation using resources located in three
European universities, as shown in Fig. 3.
The main elements of our scenario are distributed as follows: the local learning system is located in the UNILabs Moodle server at UNED, the remote learning system in a different Moodle server at UMU, and the remote applications at EPFL. The implementation of this basic remote laboratory was an application that gets the one-minute load average of a computer and its number of processes in the runnable state, and allows turning CPU cores off/on as well as adding/removing processes.
Collecting Experience Data from Remotely Hosted Learning Applications 179

Fig. 3. Testing scenario with a basic online laboratory.

Figure 4 presents the user interface of the application deployed with EjsS; it shows real-time graphics of the system activity, two form inputs with the number of CPU cores online and the number of processes running, as well as buttons to increase or decrease both input values.

Fig. 4. Graphics interface for the application.

In relation to this remote laboratory, note that the load average is a measure of system activity, calculated by the operating system and expressed as a fractional number. In order to ensure adequate performance, the load average should ideally be less than the number of CPU cores in the system. Learners can change the number of processes or online CPU cores to visualize how the load average evolves.

The Moodle configuration for the LTI link is shown in Sect. 3.1, and an example of a Tin Can statement generated by this application is shown in Sect. 3.3. SCORM Cloud provides a simple interface for configuring the LRS endpoint service and a statement viewer. However, it could be replaced by another LRS, for example Learning Locker [5].
Other existing remote laboratories can be included in the architecture if the LRS proxy client is integrated into the application, i.e. if the application processes the custom LTI fields and sends the Tin Can API statements to the LRS proxy. Although this functionality is implemented in the EjsS library, it could be extracted and used independently.
Moreover, an LRS could be shared by several Learning Management Systems, which could then even share LA tools in the future. In this way, organizations can advance their goals of sharing resources and reducing the cost of deploying learning systems.
Finally, while our implementation uses Moodle, the same elements could be deployed with other LMSs, for example Open edX [2] or Graasp [3]. In fact, since our proposal is based on standard technologies, integration between different LMSs is supported.

6 Conclusions and Future Directions

Current learning technologies permit the deployment and sharing of learning applications in different learning systems, but it is necessary to find a correct way to integrate all these technologies in order to avoid complex architectures or confusing authentication processes. In this sense, our proposal shows how to achieve a seamless integration of the main learning technologies in a Learning Management System.
As future work, we plan to consider the deployment of learning analytics tools in order to get online and offline feedback. Specifically, we are working on tools based on data mining and data analysis that will provide teachers with just-in-time feedback.
References

1. Dodero, J.M., González-Conejero, E.J., Gutiérrez-Herrera, G., Peinado, S., Tocino,
J.T., Ruiz-Rube, I.: Trade-off between interoperability and data collection perfor-
mance when designing an architecture for learning analytics. Future Gener. Com-
put. Syst. 68, 31–37 (2017)
2. edX. Open edX: Open Courseware Development Platform.
Accessed 31 Oct 2016
3. EPFL React Group: Graasp project. Accessed 31 Oct 2016
4. Clemente, F.J.G., Esquembre, F.: EjsS: A JavaScript library and authoring tool
which makes computational-physics education simpler. In: Poster Presented at the
XXVI IUPAP Conference on Computational Physics (CCP), Boston, USA (2014)
5. HT2 Labs: Learning locker. Accessed 31 Oct 2016

6. IMS Global Learning Consortium: IMS global learning information services best practice and implementation guide. Accessed 31 Oct 2016
7. IMS Global Learning Consortium: Learning tools interoperability. https://www. Accessed 31 Oct 2016
8. Learning Analytics Technologies for Education: KlassData.
Accessed 31 Oct 2016
9. Rustici Software: SCORM cloud. Accessed 31 Oct 2016
10. Rustici Software: Tin Can API. Accessed 31 Oct 2016
11. Sierra, A.J., Martı́n-Rodrı́guez, A., Ariza, T., Muñoz-Calle, J., Fernández-Jiménez,
J.J.: LTI for interoperating e-Assessment tools with LMS. In: Methodologies and
Intelligent Systems for Technology Enhanced Learning, 6th International Confer-
ence, pp. 173–181. Springer, Switzerland (2016)
12. UNED Labs: Moodle LRS proxy.
lrsproxy. Accessed 31 Oct 2016
“Remote Wave Laboratory” with Embedded
Simulation – Real Environment
for Waves Mastering

Franz Schauer1,2, Michal Gerza1, Michal Krbecek1, and Miroslava Ozvoldova1,2

1 Faculty of Applied Informatics, Tomas Bata University in Zlin, 760 05 Zlin, Czech Republic
2 Faculty of Education, University of Trnava, 918 43 Trnava, Slovak Republic

Abstract. The paper describes a new remote experiment in REMLABNET, the "Remote Wave Laboratory", constructed on ISES (Internet School Experimental System). The remote experiment contributes to the understanding of the concepts of harmonic waves and their parameters (amplitude, frequency, period, and phase velocity), and of the dependence of the instantaneous phase on time and path covered. It also serves for the measurement and understanding of phase-sensitive interference and the superposition of parallel/perpendicular waves.

Keywords: ISES · Remote Wave Laboratory · Embedded multiparameter simulation · Wave phenomena · Parameters of waves · Interference

1 Introduction

Waves, their phase-sensitive interference, and their superposition are important phenomena that pose a major problem in the teaching of waves and optics, because they demand considerable imagination from students; the phenomena of interference and superposition of waves are consequently difficult to understand. The proposed "Remote Wave Laboratory" is aimed at real measurements of the phase and of the most frequent phase-sensitive wave superposition phenomena on real physical instrumentation with multiple uses and applications. The embedded real multiparameter simulation of the observed phenomena, introduced for the first time in our remote experiments, serves as a teaching tool for better understanding of the real measurements.

2 Purpose or Goal

The whole system of the remote experiment (RE) "Remote Wave Laboratory" is conceived to enable demonstration of the basic concepts of wave phenomena, such as:
– The concept of the basic parameters of harmonic waves: the amplitude, the frequency, the period, the initial phase, the phase velocity and the wavelength,

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_18
“Remote Wave Laboratory” with Embedded Simulation 183

– The concept of the instantaneous phase (corroborated with our electronic phase laboratory) as a function of the elapsed time and the path covered by the wave (in relation to the two periodicities of waves, in time and in space),
– The concept of phase-sensitive interference and the superposition of parallel/perpendicular waves.

3 Approach and Schematic Arrangement

3.1 Theory of Wave Laboratory
Let us suppose the signals detected by the two acoustic detectors are $u_1 = a\sin(\omega t)$ and $u_2 = a\sin(\omega t + \Delta\varphi)$.
• Parallel interference of both signals gives in general

$$u_1 + u_2 = a\sin(\omega t) + a\sin(\omega t + \Delta\varphi) = A\sin(\omega t + \Delta\phi), \qquad (1)$$

where the amplitude $A$ and the initial phase $\Delta\phi$ of the resulting signal are

$$A = a\sqrt{2\,(1 + \cos(\Delta\varphi))}, \qquad (2)$$

$$\tan(\Delta\phi) = \frac{\sin(\Delta\varphi)}{1 + \cos(\Delta\varphi)}. \qquad (3)$$

• Perpendicular superposition of the signals $x = a\sin(\omega t)$ and $y = a\sin(\omega t + \Delta\varphi)$ gives in general

$$x^2 - 2xy\cos(\Delta\varphi) + y^2 = a^2\sin^2(\Delta\varphi), \qquad (4)$$

which reduces in the particular cases $\Delta\varphi = 0$ and $\pi/2$ to a straight line and a circle, respectively; other values of $\Delta\varphi$ give an ellipse.
• Phase measurements: the phase shift $\Delta\varphi$ of the two signals (waves) can then be determined from the position of the ellipse, as shown in Fig. 1. Setting $x = 0$ in Eq. (4) gives

$$\Delta\varphi = \arcsin\!\left(\frac{y_0}{a}\right), \qquad (5)$$

where $y_0$ is the ordinate of the ellipse at $x = 0$.
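The relations above can be checked numerically; the short script below samples the parallel sum over one period and compares its maximum with Eq. (2), and evaluates both sides of the ellipse equation, Eq. (4), at an arbitrary instant.

```javascript
// Numerical check of Eqs. (2) and (4) for u1 = a*sin(wt), u2 = a*sin(wt + dphi).
const a = 1.0;
const dphi = Math.PI / 3;

// Eq. (2): predicted amplitude of the parallel interference signal.
const A = a * Math.sqrt(2 * (1 + Math.cos(dphi)));

// Sample the parallel sum u1 + u2 over one period and take its maximum.
let maxSum = 0;
for (let i = 0; i < 10000; i++) {
  const t = (2 * Math.PI * i) / 10000;
  const sum = a * Math.sin(t) + a * Math.sin(t + dphi);
  maxSum = Math.max(maxSum, Math.abs(sum));
}

// Eq. (4): the perpendicular superposition satisfies the ellipse equation
// x^2 - 2*x*y*cos(dphi) + y^2 = a^2 * sin^2(dphi) at every instant.
const wt = 0.7; // arbitrary instant
const x = a * Math.sin(wt);
const y = a * Math.sin(wt + dphi);
const lhs = x * x - 2 * x * y * Math.cos(dphi) + y * y;
const rhs = a * a * Math.sin(dphi) ** 2;
// maxSum ≈ A and lhs ≈ rhs confirm Eqs. (2) and (4)
```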

3.2 Students’ Results in Computer Oriented Hands-On Laboratory

A computer-oriented hands-on laboratory exercise on acoustic waves has for a considerable time been part of the students' laboratory. Its arrangement is similar to that in Fig. 3, but with the sound detector positioned manually. The students' results are shown in Fig. 2, where (a) depicts the dependence of the phase shift Δφ between the two detector signals on the wave path difference Δx for wavelength λ = 35.3 cm, and (b)–(d) show the superpositions of the two wave signals for phase shifts Δφ = 0 rad (b), Δφ = π/2 rad (c) and Δφ = π rad (d). The upper panel shows the signals of both the
184 F. Schauer et al.

Fig. 1. Scheme for determining the phase shift Δφ of two waves using Eqs. (4) and (5)

Fig. 2. Examples of students' work on the hands-on experiment: (a) dependence of the phase shift Δφ on the wave path difference Δx; (b)–(d) superposition of two waves for phase shifts Δφ = 0 rad (b), Δφ = π/2 rad (c) and Δφ = π rad (d), all for the wavelength λ = 35.3 cm. The upper panel shows the signals of both waves, the middle panel the phase-sensitive superposition (Lissajous figures, for perpendicular waves) and the bottom panel the phase-sensitive interference signal (for parallel waves)


Fig. 3. Schematic arrangement (upper panel) and the real RE "Remote Wave Laboratory" (lower panel), with the loudspeaker as the acoustic wave source, two acoustic detectors 1 and 2, and the driving motor for moving detector 2, which produces the phase shift Δφ between the signals corresponding to the detector distance Δx

sound waves, the middle panel the phase-sensitive superposition (Lissajous figures, for perpendicular waves) and the bottom panel the phase-sensitive interference signal (for parallel waves).

3.3 Arrangement of Remote “Remote Wave Laboratory”

The arrangement of the remote experiment (RE) "Remote Wave Laboratory" is shown in Fig. 3, in both schematic and real experimental form. The acoustic wave source generates a planar wave, which is detected by two detectors; one of them is movable by the motor drive in a controlled way, producing the phase shift Δφ between the two coherent signals, corresponding to the detector distance Δx. Both signals are phase-sensitively added/superimposed in parallel/perpendicular directions.
The ISES USB module with the controlling PC runs the remote experiment, serving both RE and embedded simulation (ES) control. The system is built on Internet School Experimental System (ISES) components [1]. The whole arrangement is placed on an optical bench and consists of a loudspeaker as the wave source and two miniature microphones as the signal detectors, one of them movable by the step drive. Both signals are displayed, together with their phase-sensitive interference signal (for parallel, linearly polarized progressive waves) and the phase-sensitive superposition Lissajous figures (for perpendicular, linearly polarized progressive waves), with the corresponding data outputs for data processing (see Fig. 2). The .psc controlling program and the web page were built using the ISES environment Easy Remote ISES (ER ISES) for compiling the RE control programs [1].

Fig. 4. Example view of the RE web page "Remote Wave Laboratory": measured data (left) and simulation of the observed phenomenon (right); from the upper graph down: both signals, perpendicular superposition, interference. The position of the movable detector is visible in the live stream

3.4 Embedded Simulation of the Wave Laboratory

Figure 4 shows the web page of the RE "Remote Wave Laboratory" (from Fig. 3) with the measured data output (left) and the output of the embedded simulation (right).
As part of the solution for embedded simulations in our ISES REs, we used the mathematical solver built into the RE Measureserver and its .psc file. The solver handles a wide range of arithmetic operations and solves differential equations [4]. An example of its use for the response of an RLC circuit to a voltage perturbation is shown in Fig. 5 [2]; here it was used for simple plotting of calculated quantities according to Eqs. (1)–(4).
The Measureserver unit is a significant software part of the ISES RE concept. It is the processing and communicating server located between the physical hardware and the connected clients. The Measureserver core is designed as an advanced finite-state machine that sets up and processes the logical instructions carrying out prescribed activities. Its functioning is based on the control program that comes from the .psc file loaded into the Measureserver before its startup.

Fig. 5. The general mathematical unit for ISES remote experiments enabling both arithmetical
operations and differential equations solutions

When the client starts the RE, the Measureserver begins communication with the ISES hardware. The RE is then ready to perform all the required measurements according to the web page instructions given by the client. The Measureserver obtains experimental data from the ISES modules (meters, sensors and probes) and transports it back to the client's web page for analysis [3].
The embedded simulation works in a similar way, replacing the ISES module with the mathematical solver and providing data for graphical comparison with the measured data. A difficult problem was the synchronization of the measured and simulated data into one time-dependent graphical representation, needed to study the influence of the model parameters on the resulting signals.

4 Conclusions

The remote environment "Remote Wave Laboratory" provides the following knowledge about waves:
– To study the phase of the wave as a function of the covered distance (with respect to the reference signal, and its linearity),
– To examine the parameters of the wave: the phase velocity and the wavelength in a medium, the amplitude, the frequency and the period of the wave,
– To examine the concept of the coherence of two acoustic waves,
– To show the phase-sensitive interference of two parallel waves and find the conditions for its extremes,
– To show the phase-sensitive superposition of two perpendicular waves and to find the phase shift and amplitude relative to the reference wave,
– To find the integer quotient of the frequencies of an unknown wave.

Acknowledgement. The support of the project of the Swiss National Science Foundation (SNSF) "SCOPES", No. IZ74Z0_160454 is highly appreciated. The support of the Internal Agency Grant of the Tomas Bata University in Zlin No. IGA/FAI/2016 for PhD students is also acknowledged.
References

1. Ozvoldova, M., Schauer, F.: Remote laboratories in research-based education of real world. In: Schauer, F. (ed.), p. 157. Peter Lang International Academic Publisher, Frankfurt (2015). ISBN
2. Gerza, M., Schauer, F., Dostal, P.: Embedded simulations in real remote experiments for ISES e-Laboratory. In: EUROSIM 2016, Oulu, Finland, pp. 653–658. ISBN 978-1-5090-4119-0
3. Gerza, M., Schauer, F.: Intelligent processing of experimental data in ISES remote laboratory. Int. J. Online Eng., 58–63 (2016). ISSN 1861-2121
4. Inspiration of Prof. F. Esquembre in solver compiling is appreciated
Remote Laboratories: For Real Time Access to Experiment
Setups with Online Session Booking, Utilizing a Database
and Online Interface with Live Streaming

B. Kalyan Ram1(✉), S. Arun Kumar1, S. Prathap1, B. Mahesh2, and B. Mallikarjuna Sarma2

1 Electrono Solutions Pvt. Ltd., #513, Vinayaka Layout, Immadihalli Road, Whitefield, Bangalore 560066, India
2 Independent Consultants, Bangalore, India

Abstract. This paper discusses the physical implementation of lab experiments that are designed to be accessed from any web browser using the clientless remote desktop gateway Apache Guacamole with the support of the Remote Desktop Protocol. The system also facilitates live streaming of the experiments using the AXIS CGI API, online slot booking that lets students book their respective sessions, and an Apache Cassandra database for storing user details.
Here, we address all aspects related to the system architecture and infrastructure needed to establish a real-time remote access system for a given machine (in this case electric machines, though this could be extended to any machine). The system is being built to evaluate the feasibility of implementing a complete machine health monitoring system with remote monitoring and control capability, although the current implementation is aimed at letting students perform the experiments of the machines lab.

Keywords: Remote labs · Engineering laboratory experiments · Apache

guacamole · Remote desktop protocol · Live streaming · Axis cgi api · Online slot
booking · Apache Cassandra database

1 Introduction

Laboratory experiments are an integral part of engineering education. The main focus of this work is to provide access to these lab experiments over the internet using various integration tools. A remote laboratory (also known as an online laboratory or remote workbench) is the use of telecommunications to remotely conduct real (as opposed to virtual) experiments at the physical location of the operating technology, enabling students to use this technology from a separate geographical location. Supported by resources based on new information and communication technologies, it is now possible to remotely control a wide variety of real laboratories.

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_19
Remote Laboratories: For Real Time Access to Experiment Setups 191

2 Architecture of Guacamole

In the cloud computing environment there are various important issues, including standards, virtualization, resource management, information security, and so on. Among these, desktop computing in a virtualized environment has emerged as one of the most important in the past few years. Users no longer need a powerful, more-than-required machine of their own but instead share a remote powerful machine using a lightweight thin client. A thin client is a stateless desktop terminal that has no hard drive. All features typically found on a desktop PC, including applications, sensitive data, memory, etc., are kept on the server when using a thin client. These thin clients need not be dedicated hardware but can also take the form of ordinary PCs. Thin clients, software services, and backend hardware make up thin-client computing, a remote desktop computing model [1]. Guacamole is not a self-contained web application; it is made up of many parts. The web application is intended to be simple and minimal, with the majority of the grunt work performed by lower-level components. Users connect to a Guacamole server with their web browser. The Guacamole client, written in JavaScript, is served to users by a web server within the Guacamole server. Once loaded, this client connects back to the server over HTTP using the Guacamole protocol. The web application deployed to the Guacamole server reads the Guacamole protocol and forwards it to guacd, the native Guacamole proxy. This proxy actually interprets the contents of the Guacamole protocol, connecting to any number of remote desktop servers on behalf of the user [2] (Fig. 1).

Fig. 1. Guacamole architecture

2.1 Guacamole Protocol

The web application does not understand any remote desktop protocol at all. It does not contain support for VNC or RDP or any other protocol supported by the Guacamole stack. It actually only understands the Guacamole protocol, which is a protocol for remote display rendering and event transport. While a protocol with those properties would naturally have the same abilities as a remote desktop protocol, the design principles behind a remote desktop protocol and the Guacamole protocol are different: the Guacamole protocol is not intended to implement the features of a specific desktop environment. As a remote display and interaction protocol, Guacamole implements a superset of existing remote desktop protocols [1].
Adding support for a particular remote desktop protocol (like RDP) to Guacamole thus involves writing a middle layer which "translates" between the remote desktop protocol and the Guacamole protocol. Implementing such a translation is no different from implementing any native client, except that this particular implementation renders to a remote display rather than a local one.
2.2 guacd

• guacd is the heart of Guacamole which dynamically loads support for remote desktop
protocols (called “client plug-ins”) and connects them to remote desktops based on
instructions received from the web application.
• guacd is a daemon process which is installed along with Guacamole and runs in the
background, listening for TCP connections from the web application. guacd also does
not understand any specific remote desktop protocol, but rather implements just
enough of the Guacamole protocol to determine which protocol support needs to be
loaded and what arguments must be passed to it. Once a client plug-in is loaded, it
runs independently of guacd and has full control of the communication between itself
and the web application until the client plug-in terminates (Fig. 2).
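On the wire, each Guacamole protocol instruction is a comma-separated list of length-prefixed elements terminated by a semicolon, with the first element acting as the opcode (for example, the handshake instruction `6.select,3.vnc;`). A minimal parser sketch of this publicly documented format:

```javascript
// Minimal parser for a single Guacamole protocol instruction. Per the
// documented wire format, each element is "<length>.<value>", elements are
// comma-separated, and the instruction ends with ";".
function parseInstruction(raw) {
  const elements = [];
  let i = 0;
  while (i < raw.length && raw[i] !== ';') {
    const dot = raw.indexOf('.', i);                 // end of the length prefix
    const length = parseInt(raw.slice(i, dot), 10);  // element length in chars
    const value = raw.slice(dot + 1, dot + 1 + length);
    elements.push(value);
    i = dot + 1 + length;                            // now at ',' or ';'
    if (raw[i] === ',') i++;
  }
  return { opcode: elements[0], args: elements.slice(1) };
}

const inst = parseInstruction('6.select,3.vnc;');
// inst.opcode === 'select', inst.args[0] === 'vnc'
```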

Fig. 2. Guacamole server

2.3 Remote Desktop Gateway

A remote desktop gateway provides access to multiple operating environments from an HTML5-capable browser without the use of any plug-ins; in Guacamole, this role is played by the web application and guacd chain described above. Remote Desktop Protocol (RDP) provides remote login and desktop control capabilities that enable a client to completely control and access a remote server. The protocol was implemented by Microsoft Corporation based on the ITU-T T.120 family of protocols. The major advantage distinguishing RDP from other remote desktop schemes, such as the frame-buffer approach, is that the protocol preferably sends graphics device interface (GDI) information from the server instead of full bitmap images [3].

3 Remote Labs Implementation

The implementation of the remote lab involves designing a hardware infrastructure that supports the remote access feature through the technology infrastructure described here (Fig. 3).

Fig. 3. Remote labs architectural block diagram

This specific remote laboratory setup is made up of motor–generator setups, a PLC trainer setup and a process control trainer setup.

These setups are designed to be accessed remotely by an authorized user through a browser interface. Currently, the system has been tested with different browsers, namely Google Chrome, Microsoft Internet Explorer and Mozilla Firefox, and has been found to be compatible with them. The architecture is designed to support the most commonly used browsers.

4 Cassandra Database

Apache Cassandra is a highly scalable, high-performance distributed database designed

to handle large amounts of data across many commodity servers, providing high avail‐
ability with no single point of failure. It is a type of NoSQL database.
Cassandra has become popular because of its outstanding technical features. Given
below are some of the features of Cassandra:
– Elastic scalability - Cassandra is highly scalable; it allows adding more hardware to accommodate more customers and more data as required.
– Always-on architecture - Cassandra has no single point of failure and is continuously available for business-critical applications that cannot afford a failure.
– Fast linear-scale performance - Cassandra is linearly scalable, i.e., it increases your throughput as you increase the number of nodes in the cluster, and therefore maintains a quick response time.
– Flexible data storage - Cassandra accommodates all possible data formats, including structured, semi-structured, and unstructured, and can dynamically accommodate changes to your data structures as needed.
– Easy data distribution - Cassandra provides the flexibility to distribute data where you need it by replicating data across multiple data centers.
– Transaction support - Cassandra provides atomicity, isolation, and durability for individual operations, with tunable consistency.
– Fast writes - Cassandra was designed to run on cheap commodity hardware. It performs blazingly fast writes and can store hundreds of terabytes of data without sacrificing read efficiency.
The design goal of Cassandra is to handle big data workloads across multiple nodes without any single point of failure [4]. Cassandra has a peer-to-peer distributed system across its nodes, and data is distributed among all the nodes in a cluster.
– All the nodes in a cluster play the same role. Each node is independent and at the same time interconnected with the other nodes.
– Each node in a cluster can accept read and write requests, regardless of where the data is actually located in the cluster.
– When a node goes down, read/write requests can be served from other nodes in the cluster.
In Cassandra, one or more of the nodes in a cluster act as replicas for a given piece of data. If it is detected that some of the nodes responded with an out-of-date value, Cassandra will return the most recent value to the client. After returning the most recent

value, Cassandra performs a read repair in the background to update the stale values.
The following figure shows a schematic view of how Cassandra uses data replication
among the nodes in a cluster to ensure no single point of failure (Fig. 4).

Fig. 4. Structure of Cassandra database
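The read-repair behaviour described above can be sketched as follows; this is an illustration of the idea, not Cassandra's actual implementation, and the node names, values and timestamps are invented.

```javascript
// Sketch of the read-repair idea: given replica responses with write
// timestamps, return the most recent value and list the stale replicas
// that would be repaired in the background.
function readWithRepair(replicas) {
  // Pick the response with the newest timestamp.
  const newest = replicas.reduce((a, b) => (b.timestamp > a.timestamp ? b : a));
  // Every replica older than the newest holds a stale value.
  const stale = replicas
    .filter(r => r.timestamp < newest.timestamp)
    .map(r => r.node);
  return { value: newest.value, repair: stale };
}

const result = readWithRepair([
  { node: 'n1', value: 'alice@old.example', timestamp: 100 },
  { node: 'n2', value: 'alice@new.example', timestamp: 250 },
  { node: 'n3', value: 'alice@old.example', timestamp: 100 }
]);
// result.value is the most recent write; n1 and n3 would be repaired
```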

5 Single Sign-On Application

The first time that a user seeks access to an application, the Login Server:
– Authenticates the user by means of a user name and password
– Passes the client's identity to the various applications
– Marks the client as authenticated with an encrypted login cookie
In subsequent user logins, this login cookie provides the Login Server with the user's identity and indicates that authentication has already been performed. If there is no login cookie, the Login Server presents the user with a login challenge. To guard against sniffing, the Login Server can send the login cookie to the client browser over an encrypted SSL channel. The login cookie expires with the session, either at the end of a time interval specified by the administrator or when the user exits the browser. It is never written to disk. A partner application can expire its session through its own explicit logout.
1. Single Sign-On Application Programming Interface (API)
(a) The Single Sign-On API enables:
(i) Applications to communicate with the Login Server and to accept a user’s
identity as validated by the Login Server
(ii) Administrators to manage the application’s association to the Login Server
(b) There are two kinds of applications to which Single Sign-On provides access:
(i) Partner Applications
(ii) External Applications

2. Partner Applications
Partner applications are integrated with the Login Server. They contain a Single Sign-On API that enables them to accept a user's identity as validated by the Login Server.
3. External Applications
External applications are web-based applications that retain their own authentication logic. They do not delegate authentication to the Login Server and, as such, require a user name and password for access. Currently, these applications are limited to those which employ an HTML form for accepting the user name and password. The user name may be different from the SSO user name, and the Login Server provides the necessary mapping (Fig. 5).

Fig. 5. Single Sign-On

6 Port Forwarding

In computer networking, port forwarding or port mapping is an application of network

address translation (NAT) that redirects a communication request from
one address and port number combination to another while the packets are traversing a
network gateway, such as a router or firewall. This technique is most commonly used
to make services on a host residing on a protected or masqueraded (internal) network
available to hosts on the opposite side of the gateway (external network), by remapping
the destination IP address and port number of the communication to an internal host.
Port forwarding allows remote computers (for example, computers on the Internet) to
connect to a specific computer or service within a private local-area network (LAN). In
a typical residential network, nodes obtain Internet access through a DSL or cable
Remote Laboratories: For Real Time Access to Experiment Setups 197

modem connected to a router or network address translator (NAT/NAPT). Hosts on the

private network are connected to an Ethernet switch or communicate via a wireless LAN.
The NAT device’s external interface is configured with a public IP address. The
computers behind the router, on the other hand, are invisible to hosts on the Internet as
they each communicate only with a private IP address [6]. When configuring port
forwarding, the network administrator sets aside one port number on the gateway for
the exclusive use of communicating with a service in the private network, located on a
specific host. External hosts must know this port number and the address of the gateway
to communicate with the network-internal service. Often, the port numbers of well-
known Internet services, such as port number 80 for web services (HTTP), are used in
port forwarding, so that common Internet services may be implemented on hosts within
private networks.
Typical applications include the following:
– Running a public HTTP server within a private LAN
– Permitting Secure Shell access to a host on the private LAN from the Internet
– Permitting FTP access to a host on a private LAN from the Internet
– Running a publicly available game server within a private LAN
Usually only one of the private hosts can use a specific forwarded port at one time,
but configuration is sometimes possible to differentiate access by the originating host’s
source address.
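The remapping described above can be sketched as a simple table lookup on the gateway. This is a minimal illustration; the table entries, addresses and function names below are hypothetical.

```javascript
// Hypothetical forwarding table: each port reserved on the gateway maps to
// exactly one service on an internal host.
const forwardingTable = {
  80: { host: "192.168.1.10", port: 8080 }, // public HTTP server on the LAN
  22: { host: "192.168.1.11", port: 22 },   // SSH access to an internal host
};

// Rewrite the destination of an incoming packet, or return null if no rule
// reserves that port (the packet is then not forwarded).
function forward(packet) {
  const rule = forwardingTable[packet.dstPort];
  if (!rule) return null;
  return { ...packet, dstHost: rule.host, dstPort: rule.port };
}
```

For instance, a packet arriving at the gateway on port 80 would be rewritten to destination 192.168.1.10:8080, while a packet for an unreserved port is dropped.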

7 A Record

An A record maps a domain name to the IP address (IPv4) of the computer hosting the
domain. Simply put, an A record is used to find the IP address of a computer connected
to the Internet from a name. The A in A record stands for Address. Whenever you visit a web site, send an email, connect to Twitter or Facebook or do almost anything on the Internet, the address you enter is a series of words connected with dots. For example, to access any website you enter a URL; at the name server there is an A record that points to the corresponding IP address, which means that a request from your browser to that name is directed to the server with that IP address. A
Records are the simplest type of DNS records, yet one of the primary records used in
DNS servers [7]. You can actually do quite a bit more with A records, including using
multiple A records for the same domain in order to provide redundancy. Additionally,
multiple names could point to the same address, in which case each would have its own
A record pointing to that same IP address.
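Resolution against a zone with multiple A records can be sketched as follows. The names, addresses and round-robin rotation are illustrative assumptions, not a description of any particular DNS server.

```javascript
// Hypothetical zone data: one name may carry several A records (redundancy),
// and several names may point to the same address.
const zone = {
  "example.com":     ["198.51.100.1", "198.51.100.2"], // two A records
  "www.example.com": ["198.51.100.1"],                 // same address, own record
};

// Resolve a name to one IPv4 address, rotating through its A records so that
// successive lookups are spread over the listed hosts.
let next = 0;
function resolveA(name) {
  const records = zone[name];
  if (!records) return null; // no A record: the name does not resolve
  return records[next++ % records.length];
}
```

Repeated lookups of example.com alternate between the two addresses, which is how multiple A records provide simple redundancy and load spreading.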

8 Video Streaming API

198 B. Kalyan Ram et al.

The HTTP-based video interface provides the functionality for requesting single and multipart images and for getting and setting internal parameter values. The image and CGI requests are handled by the built-in web server. The mjpg/video.cgi is used to request a Motion JPEG video stream with specified arguments. The arguments can be specified explicitly, or a predefined stream profile can be used. Image settings saved in a stream profile can be overridden by specifying new settings after the stream profile argument [8].
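A request to the mjpg/video.cgi endpoint named above can be composed by appending arguments to the URL. The helper and the parameter names used here (streamprofile, fps) are illustrative assumptions, not taken from a specific camera's documented parameter list.

```javascript
// Build a Motion JPEG request URL for the camera's built-in web server.
// Parameter names below are illustrative, not a documented camera API.
function mjpegUrl(cameraHost, args) {
  const query = Object.entries(args)
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join("&");
  return `http://${cameraHost}/mjpg/video.cgi` + (query ? `?${query}` : "");
}

// Request a predefined stream profile, overriding its frame rate afterwards:
const url = mjpegUrl("camera.lab.example", { streamprofile: "quality", fps: 15 });
```

Arguments placed after the stream profile override the settings saved in the profile, matching the behavior described above.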

9 Tomcat Web Application Deployment

Deployment is the term used for the process of installing a web application (either a 3rd
party WAR or your own custom web application) into the Tomcat server. Web application deployment may be accomplished in a number of ways within the Tomcat server.
– Statically: the web application is set up before Tomcat is started
– Dynamically: by directly manipulating already deployed web applications (relying on the auto-deployment feature) or remotely by using the Tomcat Manager web application
The Tomcat Manager is a web application that can be used interactively (via HTML
GUI) or programmatically (via URL-based API) to deploy and manage web applica‐
tions. There are a number of ways to perform deployment that rely on the Manager web
application. Apache Tomcat provides tasks for the Apache Ant build tool. Apache Tomcat
Maven Plug-in project provides integration with Apache Maven. The desired environment should define a JAVA_HOME value pointing to your Java installation. Additionally, you should ensure that the Java javac compiler command can be run from the command shell that your operating system provides.
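The Manager's URL-based API mentioned above accepts deployment commands as plain HTTP requests. A sketch of composing such a request follows; the host, context path and WAR location are hypothetical examples.

```javascript
// Compose a Tomcat Manager text-API request to deploy a WAR remotely.
// Host, context path and WAR location below are hypothetical examples.
function managerDeployUrl(host, contextPath, warLocation) {
  return `http://${host}/manager/text/deploy` +
         `?path=${encodeURIComponent(contextPath)}` +
         `&war=${encodeURIComponent(warLocation)}`;
}

const deployUrl = managerDeployUrl(
  "localhost:8080", "/remotelab", "file:/tmp/remotelab.war"
);
// The request must be sent with Manager credentials (a user holding the
// manager-script role), e.g. via an HTTP client or a build-tool task.
```

The Ant tasks and Maven plug-in mentioned in the text wrap requests of this kind, so build scripts can deploy and undeploy applications without manual steps.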

10 Network Architecture

The network architecture consists of the ISP connection, firewall, load balancer, switch, server system and thin clients.

10.1 Firewall

A firewall is a network security system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, software, or a combination of both. Network firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

10.2 Load-Balancer
A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity
(concurrent users) and reliability of applications. They improve the overall performance
of applications by decreasing the burden on servers associated with managing and
maintaining application and network sessions, as well as by performing application-
specific tasks. Load balancers are generally grouped into two categories: Layer 4 and
Layer 7. Layer 4 load balancers act upon data found in network and transport layer
protocols (IP, TCP, FTP, UDP). Layer 7 load balancers distribute requests based upon
data found in application layer protocols such as HTTP. Requests are received by both
types of load balancers and they are distributed to a particular server based on a configured algorithm (Fig. 6).

Fig. 6. Network architecture

Some industry standard algorithms are:

– Round robin
– Weighted round robin
– Least connections
– Least response time
Layer 7 load balancers can further distribute requests based on application specific
data such as HTTP headers, cookies, or data within the application message itself, such
as the value of a specific parameter. Load balancers ensure reliability and availability
by monitoring the “health” of applications and only sending requests to servers and
applications that can respond in a timely manner.
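Two of the algorithms above can be sketched in a few lines. The server list and field names are hypothetical; this is an illustration of the selection logic only, not a production balancer.

```javascript
// Minimal sketches of two load-balancing algorithms (server list is made up).
const servers = [
  { name: "srv1", active: 0 }, // active = current connection count
  { name: "srv2", active: 0 },
];

// Round robin: hand out servers in strict rotation.
let rr = 0;
function roundRobin() {
  return servers[rr++ % servers.length];
}

// Least connections: pick the server with the fewest active sessions.
function leastConnections() {
  return servers.reduce((best, s) => (s.active < best.active ? s : best));
}
```

Weighted round robin extends the first sketch by repeating each server in the rotation in proportion to its weight; least response time replaces the connection count with a measured latency.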

10.3 Managed Switches

Switches are crucial network devices, so being able to manipulate them is sometimes important in dealing with information flow. Traffic may need to be controlled so that information is transmitted according to its level of importance, urgency and any operational requirements. This is the key reason for including managed switches alongside unmanaged switches. Whereas an unmanaged switch is sufficient to deal with normal networking, where traffic is managed solely by servers, a managed switch becomes useful when it becomes important to filter traffic more precisely.

10.4 Remote Lab Server

The server machine runs Windows Server 2012 and makes use of the Remote Desktop service to configure and host the software developed to control the hardware systems from the server machine.

10.5 Thin Clients

A thin client is a lightweight computer that is purpose-built for remote access to a server (typically cloud or desktop virtualization environments). It depends heavily on another computer (its server) to fulfill its computational roles. The specific roles assumed by the server may vary, from hosting a shared set of virtualized applications, a shared desktop stack or virtual desktop, to data processing and file storage on the client's or user's behalf. This is different from the desktop PC (fat client), which is a computer designed to take on these roles by itself.
Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server or server farm. The server-side infrastructure makes use of cloud computing software such as application virtualization, hosted shared desktop (HSD) or desktop virtualization (VDI). This combination forms what is known today as a cloud-based system where desktop resources are centralized into one or more data centers. The benefits of centralization are hardware resource optimization, reduced software maintenance, and improved security.

10.6 Heartbeat/Health Information System with SMS Alert

Without active monitoring, the status of the systems remains unknown to the system administrator. We therefore developed a service that sends packets to each system and waits for an acknowledgement that the packet was received, similar to a two-way handshake. If packets can no longer be sent from any of the systems, or none of the systems receives them, an SMS alert is sent to the system administrator's phone.
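A minimal sketch of such a heartbeat check follows. The probe and SMS-gateway functions are hypothetical hooks standing in for the real packet exchange and alerting service described above.

```javascript
// Sketch of the heartbeat check; probe() and sendSms() are hypothetical
// hooks for the real packet probe and SMS gateway.
function checkSystems(hosts, probe, sendSms) {
  const down = hosts.filter((host) => !probe(host)); // no acknowledgement
  if (down.length > 0) {
    sendSms(`No heartbeat acknowledgement from: ${down.join(", ")}`);
  }
  return down;
}
```

In the real service this check would run periodically (e.g. on a timer), with probe() performing the two-way handshake against each system.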

11 User Statistics of Remote Labs

Statistics is the study of numerical information, which is called data. People use statistics
as tools to understand information. Learning to understand statistics helps a person react
intelligently to statistical claims. Statistics are used in the fields of business, math,
economics, accounting, banking, government, astronomy, and the natural and social
sciences. Overall session statistics are presented in the admin portal, where the admin has the privilege to check the overall user sessions and how many sessions were booked and cancelled. The scheduler helps to book a slot at the required time, as per the user's needs, and the lab can be accessed in the particular time slot booked by the user (Fig. 7).

Fig. 7. Scheduler

Recently, many educational institutions have acknowledged the importance of

making laboratories available on-line, allowing their students to run experiments from
a remote computer. While usage of virtual laboratories scales well, remote experiments,
based on scarce and expensive rigs, i.e. physical resources, do not and typically can only
be used by one person or cooperating group at a time. It is therefore necessary to administer the access to rigs, where we distinguish between three different roles: content
providers, teachers and students [10]. A scheduler is a software product that allows an
enterprise to schedule and track computer batch tasks. These units of work include
running a security program or updating software [11]. A scheduler starts and handles
jobs automatically by manipulating a prepared job control language algorithm or through
communication with a human user.
Based on the scheduler design, the time at which a user starts to access the lab and the duration of the session are recorded in the Cassandra database. Using the scheduler we are also able to track the overall sessions booked, and from these data statistical graphs are plotted as shown below (Figs. 8 and 9).
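As a sketch, the portal's figures could be derived from the stored session records like this. The record fields (start, end, cancelled) are assumptions about the schema, not the actual Cassandra table layout.

```javascript
// Sketch: derive the portal's figures from session records such as those
// kept in the Cassandra database (the record fields used are assumptions).
function sessionStats(sessions) {
  const booked = sessions.length;
  const cancelled = sessions.filter((s) => s.cancelled).length;
  const usedMinutes = sessions
    .filter((s) => !s.cancelled)
    .reduce((sum, s) => sum + (s.end - s.start) / 60000, 0); // ms to minutes
  return { booked, cancelled, usedMinutes };
}
```

Aggregating these per week or per day yields the graphs shown in the admin portal.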

Fig. 8. Session portal

Fig. 9. Session statistics

The graphical representation in the admin portal consists of:

– new sessions per week
– average sessions per day
– cancelled sessions this week
System-based usage statistics are also recorded using the scheduler: the number of times each system has been accessed between a particular start date and time and a particular end date and time, as can be seen in the image below. These statistics can be very useful for user monitoring, and the system usage can be recorded as well.

12 Resource Allocation and Utilization

In computing, resource allocation is necessary for any application to be run on the

system. When the user opens any program it is counted as a process, and therefore requires the computer to allocate certain resources for it to be able to run. Such resources could include access to a section of the computer's memory, data in a device interface buffer, one or more files, or a required amount of processing power. A computer with a single processor can only perform one process at a time, regardless of the number of programs loaded by the user (or initiated on start-up). Computers using single processors
appear to be running multiple programs at once because the processor quickly alternates
between programs, processing what is needed in very small amounts of time. This
process is known as multitasking or time slicing. The time allocation is automatic,
however higher or lower priority may be given to certain processes, essentially giving
high priority programs more/bigger slices of the processor’s time. On a computer
with multiple processors different processes can be allocated to different processors so
that the computer can truly multitask. System resources should be allocated in such a way that the conflicts described above, which might affect the performance of the software, do not occur. Proper maintenance of each of the systems must also be ensured to provide appropriate uptime for the system (Fig. 10).

Fig. 10. Resource utilization

13 Conclusion

Remote labs are the natural choice for accessing physical laboratories online to enhance
the accessibility of both Software and Hardware infrastructure in Engineering colleges
[12]. In the context of India, the data shows that the utilization of Laboratory resources
is very low and the accessibility of laboratory resources to the students is sparse [13].
The topics presented in this paper address the technological architecture and the tools needed for the implementation of an effective Remote Lab infrastructure from the perspective of an OS-independent, browser-independent and application-independent solution.


References

1. Wang, S.-T., Chang, H.-Y.: Development of web-based remote desktop to provide adaptive user interfaces in cloud platform. World Acad. Sci. Eng. Technol. Int. J. Comput. Electr. Autom. Control Inf. Eng. 8(8), 1572–1577 (2014)
3. Tsai, C.-Y., Huang, W.-L.: Design and performance modeling of an efficient remote
collaboration system. Int. J. Grid Distrib. Comput. 8(4) (2015)
4. Cassandra.
5. SSO.
6. Port Forwarding.
7. Introduction to A-record.
8. VideoAPI.
9. Apache Tomcat.
10. Gallardo, A., Richter, T., Debicki, P., et al.: A rig booking system for on-line laboratories.
In: IEEE EDUCON Education Engineering– Learning Environments and Ecosystems in
Engineering Education Session T1A, p. 6 (2011)
11. Scheduler.
12. Kalyan Ram, B., Arun Kumar, S., Mallikarjuna Sarma, B., Bhaskar, M., Chetan Kulkarni,
S.: Remote software laboratories: facilitating access to engineering softwares online. In: 13th
International Conference on Remote Engineering and Virtual Instrumentation (REV), p. 394
13. Kalyan Ram, B., Hegde, S.R., Pruthvi, P., Hiremath, P.S., Jackson, D., Arun Kumar, S.: A
distinctive approach to enhance the utility of laboratories in Indian academia. In: 12th
International Conference on Remote Engineering and Virtual Instrumentation (REV), p. 235
Web Experimentation on Virtual and Remote Laboratories

Daniel Galan1(B), Ruben Heradio2, Luis de la Torre1, Sebastián Dormido1,
and Francisco Esquembre3
1 Departamento de Informática y Automática, Facultad de Informática, UNED, Madrid, Spain
2 Departamento de Ingeniería de Software y Sistemas Informáticos, Computer Science School, UNED, Madrid, Spain
3 Department of Computer Engineering and Technology, University of Murcia, Murcia, Spain

Abstract. Laboratory experimentation is essential in any educational

field. Existing software allows two options for performing experiments:
(1) Interacting with the graphic user interface (it is intuitive and close
to reality, but it has certain constraints that cannot be easily solved), or
(2) scripting algorithms (it allows more complex instructions, however,
users have to handle a programming language). This paper presents the
definition and implementation of a generic experimentation language for
conducting automatic experiments on existing online laboratories. The
main objective is to use an existing online lab, created independently,
as a tool in which users can perform tailored experiments. To achieve
it, the authors present the Experiment Application. Not only does it unify the two conceptions of performing experiments; it also allows the user to define algorithms for interactive laboratories in a simple way, without the disadvantages of traditional programming languages. It is composed of Blockly, to define and design the experiments, and Google Charts, to
analyze and visualize the experiment results. This tool offers benefits to
students, teachers and, even, lab designers. For the moment, it can be
used with any existing lab or simulation created with the authoring tool
Easy Java(script) Simulations. Since there are hundreds of labs created
with this tool, the potential applicability of the tool is considerable. To
illustrate its utility, a very well-known system is used: the water tank.

Keywords: Experimentation language · Experiments · Virtual laboratories · Remote laboratories · Easy Java(script) Simulations · JavaScript
1 Introduction
© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_20
206 D. Galan et al.

Students need to understand the theoretical and practical fundamental concepts in order to achieve a quality education in any field; hence, experimentation in traditional laboratories is essential [5]. The high costs associated with equipment,
space, and maintenance staff, impose certain constraints on resources. Virtual
and Remote Laboratories (VRLs) try to overcome these limitations [4,14]. Different empirical studies [3,19] have shown that both VRLs and traditional laboratories can obtain similar learning outcomes. Furthermore, VRLs provide interesting additional advantages: they support experimentation about unobservable
phenomena and avoid health risks, such as radioactivity, chemical reactions, or
electricity [6,9].
A laboratory is meant to offer experimentation possibilities. Experimentation
can be defined as the process of extracting data from a system by exerting it, not
only through its inputs, but also through the model parameters. Traditionally,
users of VRLs were expected to perform experiments by scripting algorithms in
a certain simulation language or by interacting with the controls and buttons of
the applications graphical user interface (GUI).
Most of the modern modeling or simulation tools already include script-
ing facilities that allow users to script certain types of experiments [12,15,16].
Among them can be found ACSL [2], EcosimPro [1], and Dymola [8]. Advanced
Continuous System Language (ACSL) was one of the first commercially available
modeling and simulation tools designed for simulating continuous systems. ACSL
includes a programming language that supports creating experiments. Dymola
also supports a script facility that makes it possible to load model libraries, set parameters, set start values, simulate and plot variables by executing scripts.
However, the major drawback is that if these tools are used to create a labora-
tory with educational purposes, final users (students mainly) will have to know
how the laboratory was implemented and handle fluently a specific programming
language just to perform any experiment.
Due to these disadvantages, most of the VRLs for educational purposes are geared towards performing experiments by interacting with the GUI. For these labs, visualization and interactivity are features of special importance [11,13]. The use of images or animations is highly recommended in order to help users understand the system under study more easily. Current developments in interactivity allow users to visualize the response of the system to any external or internal change [10,18]. These features, rich visual contents and the possibility
of an instantaneous visualization of the system response make VRLs a human-
friendly tool to learn, helping users to achieve practical experience.
Despite all these improvements, there are certain limitations that need to be
solved. Consider, for example, a VL with a PI control of the level of water in
a tank. A typical process for an experiment in which several PIs are compared
could be:

1. Set initial conditions.

2. Let the system evolve until the exact moment when the level reaches the
initial set point with a 5% tolerance.
3. Determine the time elapsed in step 2.
4. Repeat steps 1 through 3 one hundred times with different sets of PI parameters.
5. Perform an analysis of the results thus obtained.

This set of actions cannot be executed with the accuracy needed, or in reasonable time, by just interacting with the GUI. For example, pausing the simulation at an exact moment is practically impossible. Other repetitive tasks, such as taking tens of measurements to perform an analysis of the results, are tedious and provide no educational value, so it is preferable not to ask for them.
Alternatively, it would be preferable to code the experiment using a flexible,
intuitive and user-friendly experimentation language, then run it automatically
and finally visualize the results or the plots. In other words, the solution would be to join the two conceptions of how to perform experiments in a lab (interaction and script programming).
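Scripted in this way, the PI comparison above might look like the following sketch. The one-line model used in the usage example is a stand-in for the real tank simulation, and all names are illustrative.

```javascript
// Sketch of the tank experiment as a script: for each PI parameter set,
// return the time for the level to reach the set point within 5% tolerance.
// stepModel() stands in for the real simulation step; names are illustrative.
function settlingTimes(piSets, stepModel, setPoint, dt) {
  return piSets.map((pi) => {
    let level = 0, t = 0;
    // let the system evolve until the level is inside the tolerance band
    while (Math.abs(level - setPoint) > 0.05 * setPoint) {
      level = stepModel(level, setPoint, pi, dt);
      t += dt;
    }
    return t; // time elapsed in step 2 of the procedure above
  });
}
```

Running this for one hundred parameter sets and analyzing the returned times automates steps 1 through 5 of the procedure.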
The main goal of this work is to enrich existing VRLs with an application
that enables the creation and execution of automated experiments. To achieve
this objective, a new Application Programming Interface (API), together with a set of functions to which VRLs should conform to provide the desired experimentation capabilities, has been designed. On the basis of the general specifications obtained from the most commonly used simulation languages, the authors have added some new requirements to achieve a universal, full-fledged specification that provides
more general and flexible features.
In order to test the viability of the proposed experimentation application,
authors’ implementation uses JavaScript labs developed with the modeling tool
Easy Java(script) Simulations (EjsS) [11]. EjsS is a software tool that helps the user with the creation of interactive simulations in Java or JavaScript. EjsS has
been designed to be used by scientists without special programming skills, and
has proven to simplify the creation of simulations for scientific and engineering
purposes. An excellent proof of the EjsS potential is the ComPADRE repository
[7], which hosts free online resource collections, supporting students and teachers.
Among these resources, users can find more than 500 applications created with
EjsS. These labs are enriched with the capability to execute experiments, which,
in the presented approach, are scripts coded with Blockly, [17]. Blockly is an
easy and intuitive graphical programming language.
Despite the huge advantages and great utilities offered by EjsS, there was no way to use them to allow users to create simulation experiments. This limitation is not restricted to EjsS; PhET simulations [20], also available to download for free, present the same problem.
The paper is structured as follows. Section 2 presents the Experiment Appli-
cation and its benefits. Section 3 discusses the implementation of the language
and the blocks needed to represent experiments. Section 4 shows an example
that uses the experimentation language in practice. Finally, Sect. 5 discusses the
results and describes further work.

2 The Experiment Application

Four elements compose the Experiment Application (ExApp):

1. Blockly Editor to design the experiment.

2. Google Charts visualization to analyze the experiment results.
3. The API to share information between the VRL and the experiment.
4. The experimentation language.

The first two elements (Blockly and Google Charts), which comprise the application GUI (see Fig. 1), are explained in this section. The other two, the API
and the experimentation language are explained in Sect. 3. Notice that the lab
is not part of the application. If the lab, whether a virtual lab, a remote lab, a hybrid lab or a simulation, implements the API proposed by the authors, the ExApp can be used.
In the first place, Blockly is the selected tool for the design of the experiments.
It is a free and open source library that adds a visual code editor to web and
Android apps. The Blockly editor uses graphical puzzle-like blocks to represent
concepts like variables, logic expressions, loops, and any element of a traditional
Fig. 1. ExApp GUI (Blockly Code and Google Charts) with a virtual lab modeling a bouncing ball

programming language. It allows users to apply programming principles without having to worry about syntax or the laboratory structure. Blockly is used in lots of learning applications, such as: Blockly Games (a set of educational games that teach programming concepts), MIT's App Inventor (to create applications for
Android), (to teach introductory programming to millions of students
in their Hour of Code program), Wonder Workshop (to control their Dot and
Dash educational robots), the Open Roberta project (to program Lego Mind-
storms EV3 robots), or ScratchyCAD (a web based parametric 3D modeling tool
which allows users to create 3D objects). Using Blockly to create experiments
for VRLs rather than other programming languages is a valuable asset from the
experience of the authors. As VRLs can be used by any person, with or without
programming skills, Blockly is the easiest way to start creating algorithms to
conduct experiments. Furthermore, this code editor offers interesting features
that favor web use, while maintaining the power of traditional languages (implemented with JavaScript, minimal type checking supported, easy to extend with custom blocks, localized into 50+ languages, ...).
The data analysis is provided by Google Charts, [21]. This free and open
source library is used to visualize data on a website. Google Charts provide a
large number of ready-to-use chart types. It is able to represent from simple
line charts to complex hierarchical tree maps. It is highly customizable and
supports dynamic data and controls to create interactive dashboards. It also
offers functions to import and export data to other formats. Like Blockly, it is a JavaScript library, so its incorporation into an online tool is simple and clean.

2.1 Benefits from Using ExApp

The beneficiaries of ExApp, from an educational role perspective, can be divided
into three groups:

1. Lab designers. They are in charge of creating the model, the view, deciding
which variables are going to be visualized in charts and adding some interac-
tive elements to control the execution of the lab by changing some variables
or internal functions. If the designer uses a tool that implements the API
proposed in this paper (EjsS, for example), he/she will not need to change a
single thing in the lab implementation in order to use ExApp. Furthermore,
the designer could focus only on the model definition and the view; charts or any interactive elements are no longer needed to control the lab. Since ExApp
has access to every variable, the final user can decide the way to work with
the lab and the data to show in the charts. This means that the time needed
to create a lab is reduced and the experiences proposed to the students are
not limited by the design.
2. Teachers. They have to define the lab experiences for the students. If the
lab is open and not restricted by the designer's intentions, the teacher will have plenty of possibilities to propose different kinds of experiments to the students: from simple algorithms to discover the important variables of a system, to creating from scratch a controller for the level control of a water tank. Deploying ExApp and the lab on a web page is as simple as preparing
an HTML with the two elements. Authors’ next step is to include ExApp
as a Moodle plugin. In this way, the lab, the ExApp, the experiment files
and the results would be managed by Moodle. The correction of these types of interactive experiments using Blockly is as easy as running the student's file and evaluating the results obtained. Regarding the evaluation, teachers may give value to whether the correct result is obtained, as well as to how the student reached that solution. Teachers have the possibility to analyze the experiments' structure, to study the algorithms used and to run the students' experiments as many times as needed with just one click. An additional advantage is that the time needed by teachers to explain how to use the tool is extremely low compared with other simulation tools that allow the creation of experiment scripts. Even more, Blockly and other similar tools such as Scratch are currently being used in elementary schools. This means near-future users will not need any extra explanation about how to use it, because students will be familiar with these tools.
3. Students. They are the final users of the lab and ExApp. Currently, Blockly
is the first step to start learning programming skills, so even students with no
programming knowledge will find ExApp an easy tool to code their scripts.
Blockly offers visualization features, such as highlighting the blocks being executed at a certain time, so it is very easy to follow the execution flow of the experiment and correct possible mistakes. For the same lab, students can face
different assignments depending on their skills, which promotes, among others, imagination to solve the assignments, learning interest, critical thinking, being challenged and inquiry-based learning. Also, by scripting the experiment, students avoid tedious or repetitive tasks that lack any educational value. They are able to exchange, compare and confront experiments with teachers or other students. Visualizing, collecting and analyzing results is easier
thanks to Google Charts.

3 Implementation
To achieve the objective of controlling every aspect of a VRL, an interface between ExApp and the VRL is needed. Such an API should then contain the following elements:

1. Elements to initialize and configure ExApp.

2. Elements to access VRLs’ variables.
3. Elements to specify algorithms.
4. Elements to control the execution of the VRL.
5. Elements to analyze the results.

These elements, how they conform the experimentation language and the
way they are implemented in ExApp, are described in the following subsections.

3.1 Elements to Initialize and Configure ExApp

An initialization process is needed to configure ExApp correctly and to link it to a lab. This means that ExApp has to receive the object that contains the variables and lab functionality (in EjsS labs, the model variable). Once ExApp and the lab are linked, all variables from the lab system are classified by type and prepared for their use in the code. Optionally, an XML file can be configured to show more or fewer Blockly blocks, in order to create anything from the most simple to the most complex algorithm. By default, the XML is configured with all the possible blocks.

3.2 Elements to Access VRLs’ Variables

The lab has to implement two functions to set and get the variables
of the model. EjsS labs, for example, implement these functions using a
JSON Object, model.userUnserialize({variable,value}) to set and the function
model.userSerialize() to get the values of the variables.
The experimentation language implements two blocks using these two functions (see Fig. 2). The one at the top shows a message with the value of the selected variable (t), and the one at the bottom sets the value of the statement linked to it.
In this case, it assigns a value of 3 to the variable g. Model variables appear in
the variable chooser of the block automatically, as seen in Fig. 2.

Fig. 2. Setting and getting variables from the lab
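The two access functions named above can be sketched against a toy model. The variables t and g follow the figure; the object layout and everything else here are illustrative assumptions, not the actual EjsS implementation.

```javascript
// Toy stand-in for an EjsS model exposing the two access functions
// named in the text; the variables t and g are just examples.
const model = {
  vars: { t: 0, g: 9.8 },
  userSerialize() { return { ...this.vars }; },             // get variable values
  userUnserialize(obj) { Object.assign(this.vars, obj); },  // set variable values
};

// What the two blocks of Fig. 2 amount to:
const t = model.userSerialize().t;  // show a message with the value of t
model.userUnserialize({ g: 3 });    // assign the value 3 to the variable g
```

Because ExApp only relies on these two functions, any lab that implements them exposes all of its variables to the experimentation language.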

3.3 Elements to Specify Algorithms

Experiments usually require specific algorithms that use variables from the lab,
but also additional functions and variables defined by users, so the API should
allow it. Thanks to the JavaScript features this is easy to implement.
To do this, the experimentation language provides different blocks to create
standard algorithmic constructions to allow users to write complex algorithms,
if required. Figure 3 shows declaration of a new function in the code, how to call
it and a few blocks to create different types of statements.

Fig. 3. Defining and calling a function using Blockly

3.4 Elements to Control the Execution of the VRL

The API should implement different ways to control the lab execution. If the lab's evolution depends on time, instructions to start, pause or stop the lab are necessary. The In every step do block can be used to execute code in every step of the simulation. More complex functions are also needed, such as events (do something when a given condition is met); for example, "run the simulation until the level of the tank is greater than 10" or "run the simulation and increase the set point by 50% when t = 10". The lab should implement the functions model.addEvent(conditionCode, actionCode) and model.addFixedRel(Code) to allow these types of statements.
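The two hooks can be illustrated with a small sketch. This is not the real EjsS engine: the real API passes the condition and action as code, whereas plain functions are used here for brevity, and all other names are invented.

```javascript
// Illustrative sketch (not the real EjsS engine) of the step and event hooks
// named above. In the real API the condition and action are passed as code;
// plain functions are used here for brevity, and all other names are invented.
function makeLab() {
  const lab = { t: 0, paused: false, _stepCode: [], _events: [] };
  lab.addFixedRel = code => lab._stepCode.push(code);          // "in every step do"
  lab.addEvent = (cond, action) => lab._events.push({ cond, action });
  lab.step = function (dt) {
    if (lab.paused) return;
    lab.t += dt;                                               // evolve the model
    lab._stepCode.forEach(code => code(lab));                  // run step code
    lab._events.forEach(ev => { if (ev.cond(lab)) ev.action(lab); });
  };
  return lab;
}

// Reproduce the script of Fig. 4: run a step action, pause when t reaches 3.
const lab = makeLab();
const log = [];
lab.addFixedRel(l => log.push(l.t));                 // e.g. print a variable
lab.addEvent(l => 3 - l.t <= 0, l => { l.paused = true; });
for (let i = 0; i < 100; i++) lab.step(0.5);         // the lab pauses at t = 3
```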
Figure 4 shows how the experimentation language implements the functions to add code to every step and to add events to the lab. First, the lab is reset and the code print variable z is added to the step. After that, an event is added: its condition is 3 minus the variable t from the lab, and its action pauses the execution of the VRL. Finally, the lab is started; when the variable t equals 3, the lab pauses.

Fig. 4. Controlling the lab execution using Blockly
Web Experimentation on Virtual and Remote Laboratories 213

3.5 Elements to Analyze the Results

The API should provide a function to visually compare output data from lab variables in the form of charts, graphs, etc. For instance, users may be interested in comparing plots of the time response of a controller with different tuning parameters. Google Charts is the tool used to visualize data. The data are tables with the values of the selected variable in one column and the time variable of the model in another.
The experimentation language provides three blocks to implement this functionality. Figure 5 shows the three of them (left part of the image) and the chart obtained (right part). The first block declares which model variables are going to be recorded, the sample period, and the function names. It is possible to declare as many variables as needed. Once the recording variables have been declared, the start recording and stop recording blocks are used to define the time intervals within which those variables are recorded. Once the experiment starts, the chart visualizes the selected variables.
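The recording machinery behind the three blocks can be sketched as follows: between the start and stop marks, a recorder samples a model variable at the declared period, producing [time, value] rows in the array-of-rows shape that google.visualization.arrayToDataTable() accepts. All names here are illustrative assumptions, not the paper's actual implementation.

```javascript
// Sketch of the recording machinery behind the three blocks: between start
// and stop, a recorder samples a model variable at the declared period,
// producing [time, value] rows suitable for a Google Charts data table.
// All names are illustrative assumptions.
function makeRecorder(samplePeriod) {
  const rows = [['t', 'value']];           // header row for the chart
  let recording = false;
  let lastSample = -Infinity;
  return {
    start() { recording = true; },         // "start recording" block
    stop() { recording = false; },         // "stop recording" block
    sample(t, value) {                     // called from every simulation step
      if (recording && t - lastSample >= samplePeriod) {
        rows.push([t, value]);
        lastSample = t;
      }
    },
    rows: () => rows
  };
}

const rec = makeRecorder(1.0);             // declare: record every 1 s
rec.start();
for (let t = 0; t <= 5; t += 0.5) rec.sample(t, t * t);
rec.stop();
// rec.rows() now holds the header plus one sample per second
```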

Fig. 5. Analyzing data with Blockly

4 Example of Use
This section presents several examples to show the usefulness of ExApp and the advantages of its use described throughout the paper. Each example contains a brief description of the experiment, the code of the experiment, its results, and the advantages of working with the experiment editor. The experiments have been developed for the water tank system VL, whose general features and functionality are detailed below.

4.1 The Water Tank System

The water tank has two valves simulating the input flow (Qin) and the output flow (Qout). The mathematical model of the plant is shown in Eq. 1 (Fig. 6).

Fig. 6. Simulation of the water tank

dh/dt = (Qin − Qout) / A (1)
where h represents the current tank water level and A the cross section of the
tank. The input flow, Qin, and the output flow, Qout, are given by Eqs. 2 and 3

Qin = K1 ∗ a1 (2)

Qout = K2 ∗ a2 ∗ √(2 ∗ g ∗ h) (3)

where K1 is the input valve flow constant and a1 the input valve aperture; K2 is the output valve flow constant and a2 the output valve aperture; g is the gravitational acceleration and h the current tank water level.
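Equations 1–3 can be integrated with a simple forward-Euler scheme. This is a sketch with invented parameter values; only the model structure comes from the paper.

```javascript
// Forward-Euler integration of the tank model (Eqs. 1-3). All parameter
// values are invented for illustration; only the equations come from the paper.
function simulateTank({ K1, a1, K2, a2, A, g = 9.8, h0 = 0, dt = 0.01, tEnd = 10 }) {
  let h = h0;
  for (let t = 0; t < tEnd; t += dt) {
    const Qin = K1 * a1;                           // Eq. 2
    const Qout = K2 * a2 * Math.sqrt(2 * g * h);   // Eq. 3
    h += dt * (Qin - Qout) / A;                    // Eq. 1, one Euler step
    h = Math.max(h, 0);                            // the level cannot be negative
  }
  return h;
}

// With Qin > Qout(0), the level rises toward its equilibrium (~1.28 here):
const level = simulateTank({ K1: 1, a1: 0.5, K2: 1, a2: 0.1, A: 1 });
```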

4.2 The Virtual Lab

The virtual laboratory was developed using Easy Java(script) Simulations (EjsS). EjsS simulations are created by specifying a model to be run by the EjsS engine and by building a view to visualize a graphical representation of the modeled system and to interact with it. The Department of Computer Science and Automatic Control of UNED commonly uses EjsS simulations as virtual or remote laboratories.
The main intention of these examples is to show the power and usefulness of ExApp; for this reason, the virtual lab is kept as simple as possible. The GUI only shows the tank, the in and out pipes, and the level of water in the tank. Notice that

there are no interactive controls, plots or variable indicators. The lab implements Eqs. 1, 2 and 3; consequently, the parameters and variables of these equations are the only ones implemented in the lab.

4.3 Example 1: Obtaining the Equilibrium Point of the System

Obtaining the equilibrium point of the system is a basic experiment which can
be proposed to the lab users. Given a system f such that:
ẋ(t) = f(x(t)) (4)
A particular state xe is called an equilibrium point if
f(xe) = 0 (5)
Applying this to the water tank system, Eq. 1 must equal 0, which implies that the input flow has to be equal to the output flow. The equilibrium point is determined by the level of water at that moment. ExApp offers an easy way to visualize and obtain values from the system variables. Figure 7 shows an experiment that obtains the equilibrium point by checking, with an event, whether the input flow (Qin) is equal to the output flow (Qout). When this occurs, the virtual lab is stopped and the value of the water level is displayed. Two charts are shown: the one on the left shows the evolution of the water level over time, and the one on the right the evolution of Qin and Qout over time.
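The level that the event detects can also be derived analytically from Eqs. 1–3, which gives a useful cross-check of the simulated result. The parameter values below are invented for illustration.

```javascript
// Setting dh/dt = 0 in Eq. 1 gives Qin = Qout; substituting Eqs. 2 and 3,
// K1*a1 = K2*a2*sqrt(2*g*he), so he = (K1*a1 / (K2*a2))^2 / (2*g).
// The parameter values are invented for illustration.
function equilibriumLevel(K1, a1, K2, a2, g = 9.8) {
  return Math.pow((K1 * a1) / (K2 * a2), 2) / (2 * g);
}

const he = equilibriumLevel(1, 0.5, 1, 0.1);        // ~1.276
// At this level the two flows balance exactly:
const Qin = 1 * 0.5;                                // Eq. 2
const Qout = 1 * 0.1 * Math.sqrt(2 * 9.8 * he);     // Eq. 3
```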

Fig. 7. Example 1: obtaining the equilibrium point

4.4 Example 2: Creating a Level Controller for the Water Tank

A typical experiment proposed to lab users is to study and compare different controllers for the water tank system. For that purpose, without ExApp, the designer has to implement each controller in the virtual lab, which means that the learning experience is limited to observing the behavior of those controllers. However, using ExApp, any user can design their own controller and experiment with it, even in a lab without controller implementations, as is the case here.
To illustrate this, a simple Proportional-Integral (PI) controller is implemented using the experimentation blocks. The PI algorithm computes and transmits an output signal, U, every sample time, T, to the final control element (the inlet flow in this case). The U computed by the PI algorithm is influenced

Fig. 8. Example 2: creating a PI controller

by the controller tuning parameters and the controller error, e(t). PI controllers have two tuning parameters to adjust, K and Ti, and provide a balance of complexity and capability that makes them by far the most widely used algorithm in process control applications. The PI controller implemented for this experiment has the form shown in Eq. 6.

U = K e(t) + (K/Ti) ∫ e(t) dt (6)
Figure 8 shows the experiment script and the charts obtained at the end of the experiment. The experiment is divided into five parts for better readability. The first one creates and defines the charts. The initialization part sets the initial values to prepare the lab for the experiment. The controller implementation part is the main script of the experiment: by using the "In every step do" block, the input flow is calculated using Eq. 6. Because of the limitations of the valve, conditions are added to prevent negative input flows and to cap the flow at a maximum value. Events are used to change the set-point value at 50 s and to finish the experiment at 100 s. Once everything is defined, the experiment is executed. The chart on the left shows the level and the set-point over time, and the one on the right shows the input flow and the output flow over time.
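The control loop just described can be sketched in discrete time. The gains, valve limits, sample time and plant parameters below are invented examples; only the control structure of Eq. 6 and the valve clamping come from the paper.

```javascript
// Discrete-time sketch of the PI law of Eq. 6 with the valve limits described
// above. Gains, limits, sample time and plant parameters are invented examples.
function makePI(K, Ti, T, uMin = 0, uMax = 2) {
  let integral = 0;
  return function control(setPoint, level) {
    const e = setPoint - level;              // controller error e(t)
    integral += e * T;                       // accumulate the integral of e
    let u = K * e + (K / Ti) * integral;     // Eq. 6, discretized
    if (u < uMin) u = uMin;                  // the valve cannot give negative flow
    if (u > uMax) u = uMax;                  // ... nor exceed its maximum
    return u;
  };
}

// Closed loop with the tank of Eqs. 1-3 (A = 1, K2*a2 = 0.1, g = 9.8),
// mirroring the "In every step do" block of the experiment:
const pi = makePI(2, 5, 0.1);
let h = 0;
for (let t = 0; t < 100; t += 0.1) {
  const Qin = pi(1.0, h);                    // level set-point of 1.0
  h += 0.1 * (Qin - 0.1 * Math.sqrt(2 * 9.8 * h));
  h = Math.max(h, 0);                        // the level cannot be negative
}
// h has now settled close to the set-point
```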

5 Conclusion
To expand the experimental activities that can be carried out in virtual and remote laboratories, a new web application and its corresponding implementation have been presented in this paper. Current alternative approaches to automate experiments require interacting with, and consequently using, the same code that implements the laboratory, which implies using the language in which the lab is written. In contrast, with the authors' approach it is possible to access and modify all the laboratory variables, create algorithms and functions, and control the execution of the experiment. Additionally, users can execute the experiment step by step or run the whole script with a modifiable interval of time between code sentences. The developed web application will be implemented as a Moodle plugin in the near future. From the authors' point of view, this type of plugin can change the way of performing experiments, creating new experiences in Learning Management Systems (LMS).
To illustrate the potential and ease of use of the language, an example has
been described in detail: the water tank system.
The authors are in the process of testing the initial design by creating different types of experiments of practical use in teaching Automatic Control and Physics. Initial results show that this implementation is both simple and flexible, giving users a great deal of control over the running simulation. The combination of JavaScript and Blockly has been crucial in making the proposed implementation very natural. The way EjsS implements the VRLs allows external applications to easily access all their variables without any modification to the lab applications already developed. In a more general context, the authors believe the API proposed in this work can effortlessly be adapted to different lab implementations or to any future standard protocol.

Acknowledgments. This work has been funded by the Spanish Ministry of Econ-
omy and Competitiveness under the projects EUIN2015-62577, DPI-2013-44776-R and

References
1. EA Internacional: EcosimPro website
2. MGA Software: ACSL Reference Manual, Version 11 (1995)
3. Brinson, J.R.: Learning outcome achievement in non-traditional (virtual and
remote) versus traditional (hands-on) laboratories: a review of the empirical
research. Comput. Educ. 87, 218–237 (2015)
4. Brodersen, A.J., Bourne, J.R.: Virtual engineering laboratories. J. Eng. Educ. 83,
279–285 (1994)
5. Cellier, F.E., Greifeneder, J.: Continuous System Modeling. Springer, New York
6. Chiu, J.L., DeJaegher, C.J., Chao, J.: The effects of augmented virtual science
laboratories on middle school students’ understanding of gas properties. Comput.
Educ. 85, 59–73 (2015)
7. Christian, W., Esquembre, F., Barbato, L.: Open source physics. Science
334(6059), 1077–1078 (2011)
8. Mattsson, S.E., Brück, D., Elmqvist, H., Olsson, H.: Dymola for multi-engineering modeling and simulation. In: Proceedings of the 2nd International Modelica Conference (2002)
9. de Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science
and engineering education. Science 340, 305–308 (2013)
10. Dormido, S., Dormido-Canto, S., Dormido, R., Sánchez, J., Duro, N.: The role of
interactivity in control learning. Int. J. Eng. Educ. 21(6), 1122 (2005)
11. Esquembre, F.: Adding interactivity to existing Simulink models using Easy Java
simulations. Comput. Phys. Commun. 156, 199–204 (2004)
12. Feisel, L., Peterson, G.D.: A colloquy on learning objectives for engineering edu-
cational laboratories. In: ASEE Annual Conference and Exposition, Montreal,
Ontario, Canada (2002)
13. Heck, B.S.: Future directions in control education [guest editorial]. IEEE Control
Syst. 19(5), 36–37 (1999)
14. Heradio, R., de la Torre, L., Galan, D., Cabrerizo, F.J., Herrera-Viedma, E.,
Dormido, S.: Virtual and remote labs in education: a bibliometric analysis. Com-
put. Educ. 98, 14–38 (2016)
15. Law, A.M., Kelton, W.D.: Simulation Modeling and Analysis, 2nd edn.
McGrawHill, New York (1991)
16. Law, A.M., Kelton, W.D.: Simulation Modeling and Analysis. McGrawHill,
New York (2001)
17. Marron, A., Weiss, G., Wiener, G.: A decentralized approach for programming
interactive applications with javascript and blockly. In: Proceedings of the 2nd
Edition on Programming Systems, Languages and Applications Based on Actors,
Agents, and Decentralized Control Abstractions, pp. 59–70. ACM (2012)

18. Sánchez, J., Morilla, F., Dormido, S., Aranda, J., Ruipérez, P.: Virtual and remote
control labs using Java: a qualitative approach. IEEE Control Syst. 22(2), 8–20
19. Sun, K.T., Lin, Y.C., Yu, C.J.: A study on learning effect among different learning
styles in a web-based lab of science for elementary school students. Comput. Educ.
50(4), 1411–1422 (2008)
20. Wieman, C.E., Adams, W.K., Perkins, K.K.: PhET: simulations that enhance
learning. Science 322(5902), 682–683 (2008)
21. Zhu, Y.: Introducing Google chart tools and google maps API in data visualization
courses. IEEE Comput. Graph. Appl. 32(6), 6 (2012)
How to Leverage Reflection in Case of Inquiry Learning? The Study of Awareness Tools in the Context of Virtual and Remote Laboratories

Rémi Venant, Philippe Vidal, and Julien Broisin(B)

Institut de Recherche en Informatique de Toulouse,

Université Toulouse III Paul Sabatier,
118 route de Narbonne, 31062 Toulouse Cedex 04, France

Abstract. In this paper we design a set of awareness and reflection tools aiming at engaging learners in a deep learning process during a practical activity carried out through a virtual and remote laboratory. These tools include: (i) a social awareness tool revealing to learners their current and general levels of performance, but also enabling the comparison between their own and their peers' performance; (ii) a reflection-on-action tool, implemented as timelines, allowing learners to deeply analyze both their own completed work and the tasks achieved by peers; (iii) a reflection-in-action tool acting as a live video player to let users easily see what others are doing. An experiment involving 80 students was conducted in an authentic learning setting about operating system administration; the participants rated the system only slightly higher than traditional computational environments when it comes to leveraging reflection and critical thinking, even though they rated its usability as good.

Keywords: Virtual and remote laboratory · Computer science · Awareness tool · Reflection

1 Introduction
In the context of inquiry learning that leads to knowledge building and deep learning [11], Virtual and Remote Laboratories (VRL) gain more and more interest from the research community, as the Go-Lab European project, which involved more than fifteen partners, demonstrates. However, research in this area mainly focuses on technical and technological issues instead of emphasizing the pedagogical expectations to enhance learning. Yet, some research conducted around remotely controlled track-based robots [17] showed that, among other benefits, reflection and metacognition could emerge [21].
On the other hand, during the last decade, a significant number of researchers
studied how awareness tools could be used to promote reflection. A wide variety

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6 21

of ideas and initiatives emerged, from dashboards exposing various statistical data about the usage of the learning environment by learners [14] to visual reports about physiological data to foster learners' self-understanding [12].
Our previous work introduced Lab4CE, a remote laboratory for computer education. The main objective of this environment is to provide learners and educators with remote collaborative practical sessions in computer science. It supplies them with a set of tools and services aiming at improving pedagogical capabilities while hiding the technical complexity of the whole system. In this paper, we design new awareness and reflection tools to investigate the following research question: how can the design of both individual and group awareness tools leverage reflective thinking and peer support during a practical activity?
To tackle the above research question, our methodology consists in (i) designing and integrating a set of awareness and reflection tools into our existing remote lab, (ii) setting up an experiment with the enhanced Lab4CE environment in an authentic learning context, and (iii) analyzing the experiment's results. These three steps constitute the remainder of the paper. They are preceded by a brief presentation of our remote lab and followed by conclusions.

2 Lab4CE: A Remote Laboratory for Computer Education


Our previous research on computer-supported practical learning introduced Lab4CE [5], a remote laboratory for computer education. In this section, we focus on its main learning features and expose the learning analytics capabilities that form the basis for the awareness tools presented later in this article.

2.1 Educational Features

The Lab4CE environment relies on virtualization technologies to offer users virtual machines hosted by a cloud manager (i.e., OpenStack). It exposes a set of scaffolding tools and services to support various educational processes. Learners and tutors are provided with a rich learning interface, illustrated in Fig. 1, integrating the following artifacts: (i) a web Terminal gives control of the remote virtual resources required to complete the practical activity; (ii) a social presence tool provides awareness of the individuals working on the same practical activity; (iii) an instant messaging system enables the exchange of text messages between users; and (iv) an invitation mechanism allows users to initiate collaborative sessions and to work on the same virtual resources; learners working on the same machine can observe each other's Terminal thanks to a streaming system. Finally, the Lab4CE environment includes a learning analytics framework in which all interactions between users and the system are recorded.


Fig. 1. The Lab4CE learning interface

2.2 Learning Analytics Features

The Lab4CE learning analytics framework is detailed in [29]. Basically, it manages data about interactions between users, between users and remote resources, and between users and the Lab4CE learning interface.
We adopted the "I did this" paradigm suggested by ADL [8] to represent and store the tracking data. The data model we designed allows us to express, as JSON-formatted xAPI statements, interactions occurring between users and the whole set of artefacts integrated into the Lab4CE GUI. For instance, interactions such as "the user rvenant accessed his laboratory on 11 November 2016, within the activity Introducing Shell", or "the user rvenant executed the command rm -v on his resource Host01 during the activity Introducing Shell" can be easily represented by the data model.
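The second example statement might be serialized as the following JSON-formatted xAPI statement. The actor/verb/object/context structure follows the xAPI specification, but every IRI below is an invented placeholder; only the user, command, resource and activity names come from the text.

```javascript
// A possible JSON-formatted xAPI statement for the second example above.
// All IRIs and the exact vocabulary are illustrative assumptions.
const statement = {
  actor: { objectType: 'Agent', name: 'rvenant' },
  verb: {
    id: 'http://example.org/verbs/executed',        // placeholder IRI
    display: { 'en-US': 'executed' }
  },
  object: {
    id: 'http://example.org/resources/Host01',      // placeholder IRI
    definition: {
      name: { 'en-US': 'Host01' },
      extensions: { 'http://example.org/ext/command': 'rm -v' }
    }
  },
  context: {
    contextActivities: {
      parent: [{ id: 'http://example.org/activities/introducing-shell' }]
    }
  },
  timestamp: '2016-11-11T00:00:00Z'
};

const json = JSON.stringify(statement);   // what would be stored in the LRS
```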
Traces are stored within a learning record store (LRS) so they can be easily reused either by learning dashboards and awareness tools for visualization purposes, or by other components to compute indicators. Within our framework, an enriching engine is able to generate, starting from the datastore, valuable information that makes sense from the educational point of view. To enrich a trace with valuable indicators, this component relies on an inference model composed of a solver and a set of rules, where the solver applies the rules that specify how a given indicator must be inferred.

On the basis of this framework, the next section introduces (self-)awareness tools aiming at initiating reflective learning within the Lab4CE environment.

3 The Awareness Tools

The visualization tools exposed here are based on instructions carried out by
learners, and aim at making learners aware of their learning experience.

3.1 The Social Comparison Tool

Theoretical Basis and Objectives. The analysis by learners of their own performance can be supported by self-awareness tools exposing to learners, on the basis of their learning paths within instructional units, various information about their level of knowledge. These learning analytics tools build dashboards to give feedback about students' overall results [24], their global level of performance [1], their strengths and weaknesses [15], or about precise concepts through computer-based assessments [23]. These tools all evaluate learners' performance by addressing the acquisition of theoretical concepts and knowledge. However, in the context of practical activities, such evaluation techniques become inappropriate as they do not evaluate how learners are able to reuse and apply their theoretical knowledge when faced with a concrete and practical situation (i.e., their level of practice).
In addition, recent research shows that learners should also become engaged in a social analysis process to enhance their reflection [30]. Comparative tools are designed to make each learner's performance identifiable, and therefore to allow individuals to compare their own and their partners' activity. Such tools consist of social comparison feedback that allows group members to see how they are performing compared to their partners [22]. These social awareness tools present awareness information in various ways and give students the feeling of being connected with and supported by their peers [19].

Design and Learning Scenario. Evaluating learners' level of practice implies evaluating the interactions between users and the learning artifacts of the Lab4CE environment. In the present study, we focus on the evaluation of interactions between users and remote resources, since this type of interaction is highly representative of the learners' level of practice. In particular, we address the syntactic facet so as to identify whether a command carried out by a learner has been successfully executed on the target resource. The technical rightness indicator is evaluated as right (respectively wrong) if the command has (respectively has not) been properly executed on the resource; the value of the indicator is then set to 1 (respectively 0). In a Shell Terminal, the output of a command can be used to detect the success or failure of its execution; the implementation details are given in the next section.
The social comparison tool we designed thus reuses the technical rightness
indicator to reflect to users their level of practice. Previous research showed that

visualization tools dealing with such data must require very little attention to be understood and beneficial for learners [27]. We adopted a simple color code (i.e., green if the indicator is set to 1, red if it is set to 0) to represent learners' performance as progress bars. The tool distinguishes the learners' level of practice
during the session within the system (i.e., since they logged in the system - see
progress bar My current session in Fig. 2), and their level of practice taking into
account the whole set of actions they carried out since they started working
on the given practical activity (i.e., not only the current session, but also all
previous sessions related to the given activity - see progress bar My practical
activity in Fig. 2). This tool also comprises a progress bar to reflect the level of
practice of the whole group of learners enrolled in the practical activity (i.e., all
the sessions of all users - see progress bar All participants in Fig. 2). Each time a
command is executed by a learner, the progress bars are automatically updated
with a coloured item (see next section). Finally, the social presence tool (see
Sect. 2.1) exposing the users currently working on the same practical activity
has been enhanced: the session level of practice of each user is displayed using a
smaller progress bar (see bottom right corner of Fig. 1).

Fig. 2. The social comparison tool exposing learners’ performance

Through the current and general progress bars, learners become aware of the progression of their level of practice in a given activity; they are also able to compare their current level with their average level. In conjunction with the group progress bar, learners can position themselves in relation to peers and become more engaged in learning tasks [18]. In addition, the progress bars of the social presence tool allow learners to identify peers who perform better, and thus to get support from them using the other awareness tools (see further).
Let us note that the indicator on which the social comparison tool stands, i.e., the technical rightness, is not specific to computer science. In most STEM disciplines, such an indicator may be captured: a given instruction is executed (respectively not executed) by a piece of equipment if it is (respectively is not) technically/semantically well formulated.

Implementation. To infer the technical rightness indicator, our approach consisted in identifying the various error messages that may occur within a Shell Terminal when a technically wrong command or program is executed. According to our findings, we specified four rules to detect an error: R1 reveals errors arising when the argument(s) and/or option(s) of a command are incorrect; R2 triggers on the error occurring when a command entered by the user does not exist; R3 and R4 indicate whether the manual of a command that does not exist has been invoked. Finally, the indicator is processed according to a mathematical predicate based on these rules, which sets the indicator to 1 if no error was detected for a given command, and to 0 otherwise.
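The four rules and the predicate can be sketched as follows. The paper does not give the exact messages it matches on, so typical Shell error strings are assumed here.

```javascript
// Hedged sketch of the four error-detection rules. The paper does not give
// the exact messages matched, so typical Shell error strings are assumed.
const rules = [
  out => /invalid option|usage:/i.test(out),   // R1: wrong argument(s)/option(s)
  out => /command not found/.test(out),        // R2: the command does not exist
  out => /No manual entry for/.test(out),      // R3: man on a nonexistent command
  out => /nothing appropriate/i.test(out)      // R4: another manual-lookup failure
];

// Technical rightness: 1 if no rule detected an error, 0 otherwise.
const technicalRightness = output =>
  rules.some(rule => rule(output)) ? 0 : 1;
```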
Once this indicator is inferred by the enriching engine, the enriched trace is stored in the LRS (see Sect. 2.2). The social comparison tool then adopts the publish-subscribe messaging pattern to retrieve and deliver this information. The server side of the Lab4CE system produces messages composed of a timestamp-technical rightness pair as soon as a new trace is stored in the LRS, and publishes these messages to various topics; the progress bars act as subscribers to these topics. The current and general progress bars are updated in near real time (i.e., just after a user executes a command), whereas the group artifact is updated only on an hourly basis.
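The flow from the server to the progress bars can be sketched with a minimal in-memory bus. The topic naming and the bus itself are invented; only the publish-subscribe pattern and the message content come from the text.

```javascript
// Minimal in-memory publish-subscribe sketch of how the progress bars receive
// (timestamp, technical rightness) messages. Topic names and the bus itself
// are invented; only the pattern and message content come from the text.
function makeBus() {
  const topics = {};
  return {
    subscribe(topic, cb) { (topics[topic] = topics[topic] || []).push(cb); },
    publish(topic, msg) { (topics[topic] || []).forEach(cb => cb(msg)); }
  };
}

const bus = makeBus();
const myCurrentSessionBar = [];               // colours appended to the bar
bus.subscribe('user/rvenant/session',
  m => myCurrentSessionBar.push(m.rightness === 1 ? 'green' : 'red'));

// Server side: publish as soon as a new enriched trace reaches the LRS.
bus.publish('user/rvenant/session', { timestamp: 1, rightness: 1 });
bus.publish('user/rvenant/session', { timestamp: 2, rightness: 0 });
```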
The social comparison tool allows learners to self-analyze their levels of performance, as well as those of their peers, but the visualization approach we adopted prevents them from deeply analyzing their own and their peers' actions. While also exposing performance, the tool presented below thus provides details about the actions carried out by users on resources.

3.2 The Reflection-on-Action Tool

Theoretical Basis and Objectives. According to [4], reflection is a complex process consisting of returning to experiences, re-evaluating the experiences, and learning from the (re)evaluation process in order to adapt future behaviour. This model makes learners self-aware of their learning progress, and capable of taking appropriate decisions to improve their learning [10]. It is also in line with the research conducted by [30], who found that analyzing and making judgements about what has been learned and how learning took place are involved in the reflective process. These tasks can only be achieved by learners themselves, but their engagement in reflection can be initiated and fostered by technology in the context of online learning through reflection-on-action tools [30].
Reflection-on-action can be defined as the analysis of a process after the actions are completed [10], or as "standing outside yourself and analyzing your performance" [16]. [9] recommends various strategies to engage learners in reflection-on-action, such as imitation by learners of performance especially modeled for them, or the replay of students' activities and performance by teachers. Since some approaches consider that reflective thinking implies something other than one's own thinking [30], the tool presented here acts at both the individual and social levels, and aims at supporting reflection-on-action by offering users the opportunity to return to what they and their peers have learned, and how.

Design and Learning Scenario. The tool features visualization and analysis
of detailed information about interactions between users and remote resources.
Users are able to consult the commands they carried out during a particular
session of work, or since the beginning of a given practical activity. The tool has
been designed to let users easily drill down into deeper and fine-grained analysis
of their work, but also to let them discover how peers have solved a given issue.
Figure 3 shows the graphical user interface of this tool: the top of the interface
exposes a form to allow users to refine the information they want to visualize,
whereas the main panel exposes the selected data. To facilitate the projection
of the information, the filtering features include the possibility to select a given
user, a particular session of work and, if applicable, one or several resources
used during the selected session. The actions matching the selected criteria are then presented to users as timelines. Each node of a timeline represents a command, and is coloured according to its technical rightness. In addition, the details of a command can be visualized by hovering the mouse over the matching node; in that case, the date the command was carried out, the action and the output are displayed in the area shown in Fig. 3.
This reflection-on-action tool allows users to browse the history of the actions
they carried out, and thus brings learners into a reflective learning situation
where they can analyze their practical work sessions in detail. In addition,

Fig. 3. The reflection-on-action tool

learners can easily focus, thanks to the colour-coded artifact, on the difficulties they experienced. Also, combined with the social presence tool, learners are able to easily seek immediate help from peers by analyzing the commands executed by users currently performing well in the system.

Implementation. The reflection-on-action tool stands on the traditional client-server architecture. Based on the configuration of the data to analyze, the tool builds the matching query and sends a request to the Lab4CE server side. The set of commands comprised in the response, encoded using the JSON format, is then parsed to display green or red nodes according to the value of the technical rightness indicator.
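The client-side step just described can be sketched as follows: the server's JSON response, a list of executed commands, is parsed and mapped to timeline nodes whose colour encodes the technical rightness indicator. The field names are assumptions for illustration.

```javascript
// Sketch of the client-side parsing described above: the JSON response is a
// list of commands, each mapped to a timeline node whose colour encodes the
// technical rightness indicator. Field names are assumptions.
function toTimelineNodes(jsonResponse) {
  return JSON.parse(jsonResponse).map(cmd => ({
    command: cmd.command,
    date: cmd.date,
    output: cmd.output,                      // shown when hovering the node
    color: cmd.rightness === 1 ? 'green' : 'red'
  }));
}

const nodes = toTimelineNodes(JSON.stringify([
  { command: 'ls -l', date: '2016-11-11T10:00:00Z', output: 'total 0', rightness: 1 },
  { command: 'rm -z', date: '2016-11-11T10:01:00Z', output: 'rm: invalid option', rightness: 0 }
]));
```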

3.3 The Reflection-in-Action Tool

Theoretical Basis and Objectives. In contrast with reflection-on-action, which occurs retrospectively [20], reflection-in-action occurs in real time [26]. This concept was originally introduced by [25]: when a practitioner fails, (s)he analyzes his or her prior understandings and then "carries out an experiment which serves to generate both a new understanding of the phenomenon and a change in the situation" [25] (p. 68). Skilled practitioners often reflect in action while performing [16]. [13] successfully experimented with a test-driven development approach to make computer science students move toward reflection-in-action. In our context, users can reflect in action thanks to (web) Terminals: they can scroll up and down the Terminal window to analyse what they have just done, then run a new command and investigate the changes, if any.
However, as stated earlier, research suggests that collaboration, and more especially interaction with peers, supports reflection in a more sustainable way [3]. The objective of the tool presented below is to strengthen reflection-in-action through peer support by letting users be aware of what others are doing. When students face difficulty, uncertainty or novelty, we intend to let them know how their peers complete tasks. Even if synchronous communication systems might contribute to this process, users also need a live view of both the actions being carried out by peers and the remote resources being operated, to correlate both pieces of information and make proper judgements and/or decisions.

Design and Learning Scenario. The reflection-in-action tool we designed is illustrated in Fig. 4, and acts as a Terminal player where interactions occurring between users and remote resources during a session, through the web Terminal, can be watched as a video stream: both inputs from users and outputs returned by resources are displayed character by character. The tool features play, pause, resume and stop capabilities, while the filtering capabilities of the reflection-on-action tool are also available: users can replay any session of any user to visualize what happened within the web Terminal. When playing the current session stream of a given user, one can become aware, in near real time, of what that user is doing on the resources involved in the practical activity.
228 Venant et al.

During a face-to-face computer education practical session, learners are used
to looking at the screens of their partners in order to get the exact syntax of
source code or to find food for reflection. Our awareness tool aims to
reproduce this process in a remote setting. In Fig. 4, the user connected to
the system is watching the current session of the learner jbroisin. Since the
stream of data played by the tool is updated just after an action is executed
by a user through the web Terminal, the user is provided with a near-live view
of what jbroisin is doing on the remote resource, and of how it reacts to
instructions. Moreover, combined with the tools presented before, the
reflection-in-action tool leverages peer support: learners can easily identify
peers performing well, and then look at their web Terminals to investigate how
they are solving the issues of the practical activity.

Fig. 4. The reflection-in-action tool

Implementation. This awareness tool implements both the publish-subscribe
messaging pattern and the client-server architecture, depending on the
practical session to process: the former is used for current sessions (i.e.,
live video streams), whereas the latter is dedicated to completed sessions.
When a live session is requested, the matching topic is dynamically created on
the Lab4CE server side, and messages are published as soon as commands are
carried out by the user being observed. The process used to retrieve a
completed session has been described in Sect. 3.2: a query is sent to the
server side, and then the results are parsed and interpreted by the tool.
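This dual design can be illustrated with a minimal in-process stand-in for the messaging layer; the class, method and session names below are illustrative, not Lab4CE's actual API.

```python
from collections import defaultdict

class SessionBroker:
    """Minimal stand-in for the messaging layer: live sessions use
    publish-subscribe topics, completed sessions are fetched with a
    request/response query (names are illustrative)."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> callbacks
        self._completed = {}                    # session id -> command log

    def subscribe(self, session_id, callback):
        # The topic is created on demand, mirroring the dynamic topic
        # creation on the Lab4CE server side.
        self._subscribers[session_id].append(callback)

    def publish(self, session_id, command):
        # Called as soon as the observed user runs a command.
        for cb in self._subscribers[session_id]:
            cb(command)

    def archive(self, session_id, log):
        self._completed[session_id] = log

    def query(self, session_id):
        # Client-server path, used for completed sessions.
        return self._completed.get(session_id, [])

broker = SessionBroker()
seen = []
broker.subscribe("jbroisin-42", seen.append)   # hypothetical session id
broker.publish("jbroisin-42", "ls -l")         # live stream delivery
broker.archive("jbroisin-42", ["ls -l"])       # session completed
```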
How to Leverage Reflection in Case of Inquiry Learning? 229

The three tools presented in this section have been designed, implemented and
integrated into the existing Lab4CE environment. An experimentation based on
the enhanced system was then set up; the design, results and analysis of this
study are presented below.

4 Experimentation

The experimentation presented here investigates the impact of the awareness
and reflection tools designed in the previous sections on students’ perception
of learning during a practical activity, according to the following five
scales: relevance, reflection, interactivity, peer support and interpretation.
Our objective was to compare students’ perception of learning while using two
different environments: the enhanced Lab4CE system and the traditional
computers usually available to students to perform practical activities.

4.1 Design and Protocol

The experiment took place in the Computer Science Institute of Technology
(CSIT), University of Toulouse (France), and involved 80 first-year students
(with a gender distribution of 9 women and 71 men, which reflects the
distribution of CSIT students) enrolled in a learning unit about the Linux
operating system and Shell programming. The experimentation was conducted over
three face-to-face practical sessions that lasted 90 min each. These sessions
were all related to Shell programming: students had to test Shell commands in
their Terminal, and then write Shell scripts to build interactive programs.
Students also had to submit two reports: one about the first session, and the
other about the second and third sessions (the work assigned to students
required two practical sessions to be completed). These reports had to be
posted on a Moodle server four days after the matching session, so that
students could work during weekends and have extra time to complete their
tasks.
Two groups of students were randomly created. One group of students (i.e.,
the control group: N = 48, 6 women, 42 men, mean age = 18.8) had access, as
usual, to the Debian-based computers of the institution to carry out the practical
activities. The other group (i.e., the Lab4CE group: N = 32, 3 women, 29 men,
mean age = 18.6) was provided with the enhanced Lab4CE environment; each
student had access to a Debian-based virtual machine during each practical
session, and their interactions with the remote lab were recorded into the LRS.
Two different teachers gave a live demo of the Lab4CE features to the Lab4CE
group during the first 10 min of the first session.
At the end of the last practical session, both groups of students were asked
to fill in the Constructivist On-Line Learning Environment Survey (COLLES).
This questionnaire [28] includes twenty-four items using a five-point Likert
scale (i.e., almost never, seldom, sometimes, often, almost always) to measure
students’ perception of their learning experience. The COLLES was originally
designed to compare learners’ preferred experience (i.e., what they expect from the
learning unit) with their actual experience (i.e., what they did receive from
the learning unit). In our experimentation, the actual experience of learners
in both groups was compared: the control group evaluated the Linux computers,
whereas the Lab4CE group evaluated our system. In addition, the System
Usability Scale (SUS), recognized as a quick and reliable tool to measure how
users perceive the usability of a system [6], was delivered to students.

4.2 Results and Analysis

COLLES. Among the Lab4CE group, 22 students completed the questionnaire,
while 36 learners of the control group answered the survey. The whisker plot in
Fig. 5 shows the distribution of answers for five of the six scales evaluated
through the COLLES and also exposes, for each of them, the class mean scores
and the first and third quartiles of each group of users.

Fig. 5. COLLES survey summary

The first scale (i.e., relevance) expresses the learners’ interest in the
learning unit with regard to future professional practices. The Lab4CE group
evaluated the system with a slightly higher mean score and a higher
concentration of the score distribution. Since this category deals more with
the topic of the learning unit itself than with the supporting environment,
large differences were not expected.
The second scale relates to reflection and critical thinking. Even if the
traditional environment assessed by the control group does not provide any
awareness and/or reflection tools, the plots do not show a significant
difference between the two groups, only a slightly higher mean score and
median for the Lab4CE group. We make here the hypothesis that learners did not
realize they were engaged in the reflection process while consulting the
Lab4CE awareness tools. Indeed, according to the system usage statistics, a
mean of almost 42% of the students of the Lab4CE group used the
reflection-on-action tool to review each of their own sessions. On the other
hand, we think that students of the control group considered the reflection
processes occurring within the classroom as a whole, instead of considering
only the processes generated through the computer system.
Feedback from both groups is quite equivalent regarding the interaction scale,
which measures the extent of learners’ educative dialogue and exchange of
ideas. Here, results from the Lab4CE assessment were expected to be higher than
those returned by the control group, as Lab4CE provides a chat where students
can exchange instant text messages, and a total of 166 messages were posted
during the 3 sessions. In addition, almost 30% of the Lab4CE students worked
at least once with a peer using the collaborative feature (see Sect. 2.1).
Again, we think that students are not aware of being involved in an
interaction task when exchanging ideas with peers.
Results for the peer support scale are also quite similar for both groups, and
even slightly lower in the Lab4CE group. Besides our previous hypothesis, which
can explain such unexpected results (here again, 47% of the Lab4CE students
used the reflection-on-action tool), this scale reveals a potential
improvement for our platform. Learners significantly used the reflection tools
to analyze the work done by peers, but the system does not currently provide
learners with such awareness information. The peer support scale is about
learners’ feeling of how peers encourage their participation, or praise or
value their contributions. We believe that providing students with awareness
information about the analyses performed by peers on their work would increase
that perception.
The last scale evaluates how well messages exchanged between students, and
between students and tutors, make sense. Scores from the Lab4CE group are
characterized by a higher concentration of the distribution and a slightly
higher class mean. These results tend to confirm that providing students with
reflection tools helps them get a better comprehension of their interactions
with each other.
In addition to the statistics commented on in the previous paragraphs, an
interesting figure is the number of peer session analyses performed the day the
first report had to be submitted: almost 43% of the Lab4CE students analyzed
at least one session of a peer using the reflection-on-action tool. We assume
that these learners did not know how to achieve the objectives of the
practical work, and thus sought help in peers’ sessions: the mean performance
level of the users whose sessions were analyzed is 90 (for a highest score of
100).
Finally, the social comparison tool, which is hidden by default within the
user interface (see Fig. 1), was displayed by most users at each session, even
if this rate slightly decreases as the level of performance increases. This
finding is in line with research about social comparison tools: their impact on
cognitive and academic performance has been thoroughly examined, and the main
results showed that informing learners of their own performance relative to
others encourages learning efforts and increases task performance [18].

System Usability Scale. The SUS score was computed according to [7]. It was
62.4 for the control group, while a score of 73.6 was attributed to the Lab4CE
system. According to [2], the Linux-based computers were thus evaluated as
below acceptable in terms of usability, while Lab4CE qualified as good
regarding this criterion.
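For reference, the standard SUS scoring rule from [7] can be sketched as a straightforward Python transcription of Brooke's procedure:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert
    responses, following Brooke's original scoring rule: odd-numbered
    (positively worded) items contribute (score - 1), even-numbered
    (negatively worded) items contribute (5 - score), and the sum is
    multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# "Strongly agree" on positive items and "strongly disagree" on
# negative items yields the maximum score of 100.
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])
```

Group scores such as the 62.4 and 73.6 reported above are then obtained by averaging individual scores.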

5 Conclusions and Perspectives

We designed a set of awareness and reflection tools aimed at engaging learners
in the deep learning process. These tools have been successfully integrated
into the Lab4CE system, our existing remote laboratory environment dedicated to
computer education, before being evaluated in an authentic learning context.
The objectives of this experimentation were to evaluate, in a face-to-face
practical learning setting, students’ perception of learning when performing
tasks using the enhanced Lab4CE system, and to compare these measures with
their perception of learning when using traditional practical environments.
Even if the face-to-face setting might have had a negative impact on the
evaluation of the Lab4CE environment, students rated both environments at the
same levels of relevance, reflection and interpretation.
From this experimentation, we identified new awareness tools that might be
important to leverage reflection, such as a notification system alerting
learners that peers are analyzing their work, or dashboards highlighting
analyses of their work based on their performance. Finally, the analysis of the
experimentation results also emphasizes the low levels of interactivity and
peer support within our system. We will dive into these broader areas of
investigation through the design and integration of scaffolding tools and
services such as private message exchanges, recommendation of peers that may
bring support, or help seeking.

1. Arnold, K.E., Pistilli, M.D.: Course signals at Purdue: using learning analytics to
increase student success. In: Proceedings of the 2nd International Conference on
Learning Analytics and Knowledge, pp. 267–270. ACM (2012)
2. Bangor, A., Kortum, P., Miller, J.: Determining what individual SUS scores mean:
adding an adjective rating scale. J. Usability Stud. 4(3), 114–123 (2009)
3. Boud, D.: Situating academic development in professional work: using peer learn-
ing. Int. J. Acad. Dev. 4(1), 3–10 (1999)
4. Boud, D., Keogh, R., Walker, D.: Reflection: Turning Experience into Learning.
Routledge, New York (2013)
5. Broisin, J., Venant, R., Vidal, P.: Lab4CE: a remote laboratory for computer edu-
cation. Int. J. Artif. Intell. Educ. 25(4), 1–27 (2015)
6. Brooke, J.: SUS: a retrospective. J. Usability Stud. 8(2), 29–40 (2013)
7. Brooke, J., et al.: SUS-a quick and dirty usability scale. Usability Eval. Ind.
189(194), 4–7 (1996)
8. Advanced Distributed Learning (ADL) Co-Laboratories: Experience API. https:// Accessed 21 Nov
9. Collins, A., Brown, J.S.: The computer as a tool for learning through reflection.
In: Learning Issues for Intelligent Tutoring Systems, pp. 1–18. Springer, New York
10. Davis, D., Trevisan, M., Leiffer, P., McCormack, J., Beyerlein, S., Khan, M.J.,
Brackin, P.: Reflection and metacognition in engineering practice. In: Using Reflec-
tion and Metacognition to Improve Student Learning, pp. 78–103 (2013)
11. De Jong, T., Linn, M.C., Zacharia, Z.C.: Physical and virtual laboratories in science
and engineering education. Science 340(6130), 305–308 (2013)
12. Durall, E., Leinonen, T.: Feeler: supporting awareness and reflection about learn-
ing through EEG data. In: The 5th Workshop on Awareness and Reflection in
Technology Enhanced Learning, pp. 67–73 (2015)
13. Edwards, S.H.: Using software testing to move students from trial-and-error to
reflection-in-action. ACM SIGCSE Bull. 36(1), 26–30 (2004)
14. Govaerts, S., Verbert, K., Klerkx, J., Duval, E.: Visualizing activities for self-
reflection and awareness. In: International Conference on Web-Based Learning,
pp. 91–100. Springer, Heidelberg (2010)
15. Howlin, C., Lynch, D.: Learning and academic analytics in the realizeit system. In:
E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare,
and Higher Education, pp. 862–872 (2014)
16. Jonassen, D.H.: Instructional design theories and models: a new paradigm of
instructional theory. Des. Constr. Learn. Environ. 2, 215–239 (1999)
17. Kist, A.A., Maxwell, A., Gibbings, P., Fogarty, R., Midgley, W., Noble, K.: Engi-
neering for primary school children: learning with robots in a remote access lab-
oratory. In: The 39th SEFI Annual Conference: Global Engineering Recognition,
Sustainability and Mobility (2011)
18. Kollöffel, B., de Jong, T.: Can performance feedback during instruction boost
knowledge acquisition? Contrasting criterion-based and social comparison feed-
back. Interact. Learn. Environ. 24(7), 1–11 (2015)
19. Lowe, D., Murray, S., Lindsay, E., Liu, D.: Evolving remote laboratory architectures
to leverage emerging internet technologies. IEEE Trans. Learn. Technol. 2(4), 289–
294 (2009)
20. Matthew, C.T., Sternberg, R.J.: Developing experience-based (tacit) knowledge
through reflection. Learn. Individ. Differ. 19(4), 530–540 (2009)
21. Maxwell, A., Fogarty, R., Gibbings, P., Noble, K., Kist, A.A., Midgley, W.: Robot
RAL-ly international-promoting stem in elementary school across international
boundaries using remote access technology. In: The 10th International Conference
on Remote Engineering and Virtual Instrumentation, pp. 1–5. IEEE (2013)
22. Michinov, N., Primois, C.: Improving productivity and creativity in online groups
through social comparison process: new evidence for asynchronous electronic brain-
storming. Comput. Hum. Behav. 21(1), 11–28 (2005)
23. Miller, T.: Formative computer-based assessment in higher education: the effective-
ness of feedback in supporting student learning. Assess. Eval. High. Educ. 34(2),
181–192 (2009)
24. Prensky, M.: Khan academy. Educ. Technol. 51(5), 64 (2011)
25. Schön, D.A.: The Reflective Practitioner: How Professionals Think in Action. Basic
Books, New York (1983)
26. Seibert, K.W.: Reflection-in-action: tools for cultivating on-the-job learning condi-
tions. Org. Dyn. 27(3), 54–65 (2000)
27. Sweller, J.: Cognitive load theory, learning difficulty, and instructional design.
Learn. Instr. 4(4), 295–312 (1994)
28. Taylor, P., Maor, D.: Assessing the efficacy of online teaching with the construc-
tivist online learning environment survey. In: The 9th Annual Teaching Learning
Forum, p. 7 (2000)
29. Venant, R., Vidal, P., Broisin, J.: Evaluation of learner performance during prac-
tical activities: an experimentation in computer education. In: The 14th Interna-
tional Conference on Advanced Learning Technologies, ICALT, pp. 237–241. IEEE
30. Wilson, J., Jan, L.W.: Smart Thinking: Developing Reflection and Metacognition.
Curriculum Press, Carlton (2008)
Role of Wi-Fi Data Loggers in Remote Labs

Venkata Vivek Gowripeddi1(&), B. Kalyan Ram2, J. Pavan1,

C.R. Yamuna Devi1, and B. Sivakumar1
1 Dr. Ambedkar Institute of Technology, Bangalore 560056, KA, India
2 BITS-Pilani KK Birla Goa Campus, Goa 403726, India

Abstract. All data are important and useful, but what is more important is the
way these data are used. The Wi-Fi data logger is a major step towards making
use of data for the effective management of a remote lab. The purpose is to
build a real-time data logger with Wi-Fi capabilities to remotely monitor the
equipment status and environmental conditions inside a remote lab containing
high-end electrical and electronic machinery. This device should be adaptive,
flexible and easy to use, and should give deterministic results upon which
action can be taken.
The structure of the Wi-Fi data logger consists of two zones: (a) the
device-level hardware zone and (b) the server-level software zone.
(a) A microcontroller is connected to various sensors (temperature, humidity,
gas and motion sensors) and to the fault-testing lines of the equipment and
peripherals. The data, continuously obtained in real time, is pumped through
Wi-Fi over TCP/IP or UDP protocols to a server computer.
(b) A simple program running on the server computer receives the data from the
microcontroller through Wi-Fi and organizes it. This program also runs a
script which raises a warning in case of malfunctioning, and a possible
solution with step-wise instructions is displayed.
Key outcomes include: (a) seamless integration of the device with the
existing machinery, requiring minimal effort; (b) protection of components;
(c) over 40% reduction in the time required to detect and fix an issue,
achieved by the impeccable synchronous effort of device and software.
Thus, these Wi-Fi data loggers enhance the way remote labs operate by
taking care of safety issues and increasing the stability of the whole remote
lab architecture. This technology can pave the way for more complex remote lab
architectures, and the evolution of Wi-Fi data-logger technology will result
in the evolution of remote labs.

Keywords: Remote labs · Internet of Things (IoT) · Wireless monitoring ·
Real time · Safety · Revolutionary

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_22
236 V.V. Gowripeddi et al.

1 Introduction

Remote labs are undoubtedly the future of laboratory education, as they
provide opportunities for effective and holistic learning to students and
researchers with limited access to laboratories, giving them the extra
flexibility and time required to complete an experiment, make a breakthrough,
or produce a research paper [1]. Remote labs have been growing in number as
technology advances, with new labs being set up across all domains [2].
Remote labs serve as a bridge between virtual and real labs; they can be used
not only in the field of education, but also for performing measurement tasks
with real laboratory instruments [3].
The general architecture of a remote lab consists of an experiment set-up
inside a room; its structure includes a computer with hardware connected to
it, a webcam, a microphone, and feedback of experimental results back to the
computer [4]. More importantly, everything inside is connected to the outside
world through the Internet or an intranet, accessed through a web server, as
shown in Fig. 1.

Fig. 1. Architecture of remote labs

Data loggers are small devices with the capability to accumulate mass data
through acquisition cards and store it in memory before dumping it to a mass
storage device. Data loggers can be made more purposeful with the advancement
of technology, and their adaptive implementation can lead to the collection of
critical data of immense importance [5]. However, data loggers are generally
overlooked by most researchers due to their simplicity, and deemed by
industrialists to be an extra feature rather than a required feature. We wish
to change that by showing the important role that data loggers play in the
remote lab ecosystem, and the huge influence and boost they bring to it.
Role of Wi-Fi Data Loggers in Remote Labs Ecosystem 237

2 Approach

In this section, the build of the data logger introduced at the end of the
introduction is discussed in depth. Figure 2 shows the general architecture of
the data logger, with the two branches of its build: (a) the hardware level
and (b) the software level. Each of the two branches is dealt with in detail
in this section, and the whole approach to building a data logger and
implementing it in the remote lab ecosystem is presented.
Fig. 2. Wi-Fi data logger architecture

2.1 Preliminary Steps

Choosing the Laboratory. The key to choosing a laboratory is to analyze its
downtime and to check the feasibility of installing the data logger with
minimal cost and infrastructure changes. Our primary criteria for choosing a
laboratory are that it should have a sufficiently high downtime and a low
installation cost.
To this end, different laboratories were identified, and their downtimes (Down
Time Percentage = Total Downtime/Total Time) as well as approximate cost
factors (Cost of installation of data logger/Cost of remote lab set-up) were
computed. Based on these data, the best of the lot were chosen using the
Factor of Decision, a weighted sum of the DTP and the Cost Factor according to
their proportionality, as shown in Fig. 3.
The Factor of Decision is 25% when the downtime is 25% and the cost factor is
25%. The lower the cost and the higher the downtime, the higher the Factor of
Decision. So, for a cost-effective installation the factor should be greater
than 25%, and for a more effective purpose a threshold above 35% was chosen.
As shown in Fig. 3, five out of the eight laboratories have a factor above
35%, and these laboratories were the first choice for data logger
installation. This understanding gives a clear picture of the laboratories and
a head start, by identifying the ones on which the data logger can have the
most impact.
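The selection step can be sketched as follows. The paper does not give the exact weights of the Factor of Decision, so the formula below is an assumption, chosen only to be consistent with the stated anchor point (a factor of 25% when both DTP and Cost Factor are 25%, rising with downtime and falling with cost); the lab names and figures are made up.

```python
def factor_of_decision(dtp, cost_factor, w_down=1.0, w_cost=1.0):
    """Weighted combination of Down Time Percentage and Cost Factor.
    ASSUMED form (the paper does not give the weights): downtime raises
    the factor, cost lowers it, and FoD = 0.25 when both are 0.25."""
    return w_down * dtp - w_cost * cost_factor + 0.25

def pick_labs(labs, threshold=0.35):
    """Keep the labs whose Factor of Decision exceeds the 35% cut-off
    used in the paper (lab data below are illustrative)."""
    return [name for name, dtp, cf in labs
            if factor_of_decision(dtp, cf) > threshold]

labs = [
    ("Microcontroller Lab", 0.40, 0.10),   # FoD = 0.55 -> selected
    ("Measurement Lab",     0.20, 0.15),   # FoD = 0.30 -> rejected
]
chosen = pick_labs(labs)
```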

Fig. 3. Choosing the laboratory using the factor of decision

Identifying the Failures. Failures can be due to varied reasons, ranging from
simple overheating to equipment malfunction. In this section, some of the
failures across the whole remote lab ecosystem are listed:
1. Sudden variation in power can cause the system to fail.
2. Faulty machine lines can lead to malfunctioning.
3. Failure of the temperature maintenance system can cause severe damage to
components due to overheating or overcooling.
4. Increased humidity and water deposition might brick the system.
5. Poor maintenance of infrastructure is an important cause of damage.
6. Use of components or hardware products which are not rated sufficiently
high for parameters like current, voltage and temperature can cause burning
out of the components, resulting in a major failure.
7. Other failures can be attributed to rapid, violent, and unexpected changes
that can occur in the system.
8. Loose connections can also be an issue.
9. Mechanical stress between components can lead to damage of critical moving
parts.
10. Errors in software code can put the system in an infinite loop.
11. If proper security measures are not in place, the system can be misused.
12. Overflow of memory can hang the system.

Finding a Suitable Solution
1. Providing a constant power supply to enhance performance.
2. Constant monitoring of the fault lines for quick error detection.
3. Installing proper safety measures and cooling systems to avoid failure.
4. For better performance, the hardware components must be rated high on
parameters like voltage, humidity and temperature.
5. To understand unexpected failures, an error detection system must be
designed and controlled using feedback elements.
6. The quality of the solder materials must be good, and the solder joints
must be strong enough to avoid physical damage or broken connections.
7. Advanced temperature and humidity sensors should be used to give precise
data.
8. Software code should be developed with proper techniques.
9. Memory issues should be taken care of by clearing the buffer regularly.
10. Software should be maintained and updated regularly.

2.2 Hardware Configuration

Identifying the Suitable Hardware. It is important to choose hardware that
transcends most types of laboratories and laboratory equipment, so that the
only change required is in the implementation of the code and the connections
(Table 1).

Table 1. Hardware compatibility table

Laboratory type Type of microcontroller
Type A Type B Type C Type D
Microcontroller Lab 2 ✔ ✔
Electrical DC machines lab ✔ ✔ ✔
Measurement Lab 2 ✔ ✔ ✔
Process Trainer Kit (PTK) ✔ ✔

As can be seen in the table, Type B is adaptable to more labs than Type A, so
Type B is preferred over Type A. Similarly, Type C is preferred over Type D.
Even if the cost of Type B is slightly greater than that of Type A, it is
worth using as it saves on the cost of spares [6]. Figure 4 shows an example
of a Wi-Fi-based microcontroller.1

Fig. 4. Adafruit – Cortex M3 with Wi-Fi microcontroller

The mentioned microcontroller was used for the microcontroller lab; the
product contains a Cypress WICED™ chip and is sold by Adafruit Industries,
based in New York.

Choosing the Necessary Components. See Table 2.

Table 2. List of components used (basic idea).

Components list Range
Temperature sensor −80 °C to +70 °C
Humidity sensor 0 to 130 g/m3 Output: 0–13 mV
Infrared sensor 760 nm wavelength
Passive infrared sensor Up to 20 m
Voltage sensors 0–30 V
Current sensors 0.2–1.6 A & 2–10 A
Microcontroller 32 bit
Battery 12 V
Box case with heat sink Special PVC material (Resistant up to 150 °C)
Buck boost converter 12–24 V and 12–5 V
Mains supply 240 V, 10 A
SD card, hard disk 32–128 GB, 1 TB

Adding Components to the Board. This is a three-stage process in which the
components are first placed in a circuit on a breadboard to test their
working, as illustrated in Fig. 5. The components are then soldered manually
onto the PCB, and the board is made as shown.

Fig. 5. Adding components to board: (a) Testing the hardware design on breadboard
(b) Soldering the components onto a circuit (c) Design of PCB and production

Encapsulating and Enclosing the Hardware Platform. This step involves packing
the whole hardware side in a high-grade case, making the required openings for
inputs and outputs through the device. Figure 6(a) and (b) clearly describe
the packaging and its features.

Fig. 6. (a) Hardware packaged in an IP60 box (b) Openings through the case are well sealed.

2.3 Software Configuration

Choosing the Right Software for the Hardware Side as well as the Client Side.
This step involves choosing the software that is most suited to embedded
programming [7] and to the client-side software application [8].
Arduino is an open-source electronics platform based on easy-to-use hardware,
and LabVIEW is an integrated development environment designed specifically for
engineers and scientists building measurement and control systems.
Arduino was chosen for:
• Simplicity
• Strong hardware-software interaction
• Code at the embedded C level
• Open source, with a huge community for support
• Large database of libraries and binaries
LabVIEW was chosen for:
• Excellent design in the form of the front panel and block diagram
• Built-in libraries and tools
• Precision measurement reading
• High customizability

Programming the Hardware. The Arduino IDE was used to program the
microcontroller by embedding C code onto the device. Figure 7 illustrates how
the same device can be adapted to read different parameters, which makes the
device universal and adaptive.
Programming the Client Side. Client-side programming is done through LabVIEW,
where various loops and conditions are designed. All the conditions,
restrictions and boundary conditions are set in the LabVIEW block diagram,
with indicators and user output data on the front panel. Figure 8 illustrates
the LabVIEW programming logic on the block-diagram window of the LabVIEW
software.

Fig. 7. (a), (b) and (c) show how, with a few lines of modification in the
code, different parameters can be read.

Fig. 8. LabVIEW programming

2.4 Server Configuration

To facilitate remote monitoring, the LabVIEW front panel can be made into a
standalone program running on a server, accessed through a remote desktop
connection on an operator’s PC [9]. Two options are available: using the
existing remote lab server, or setting up an exclusive server for the data
logger.
Local Server. This is the simplest approach, where an already existing remote
lab server is used for running the monitoring VI. This option requires no
additional cost, but is not recommended, as a failure of the remote server
might lead to a failure of the whole system.
Exclusive Server. An exclusive server for data logger monitoring can be
installed to provide a more robust architecture, as a failure in the main
server will not affect monitoring; this is recommended for long-term
installations.
Auto-Alerting through SMS and Email. Using MQTT, conditions can be set such
that email and SMS notifications are sent to the concerned personnel [10], as
depicted in the following sections.
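A sketch of the alerting condition in Python: in the described system this check would be triggered from an MQTT message callback, and the threshold values, addresses and server host below are placeholders, not values from the paper.

```python
import smtplib
from email.message import EmailMessage

# ASSUMED limits; the paper does not specify the alert thresholds.
THRESHOLDS = {"temperature_c": 45.0, "humidity_gm3": 100.0}

def build_alert(reading):
    """Return an EmailMessage if any monitored value exceeds its
    threshold, otherwise None. Addresses are placeholders."""
    exceeded = {k: v for k, v in reading.items()
                if k in THRESHOLDS and v > THRESHOLDS[k]}
    if not exceeded:
        return None
    msg = EmailMessage()
    msg["Subject"] = "Remote lab alert: " + ", ".join(sorted(exceeded))
    msg["From"] = "datalogger@example.org"
    msg["To"] = "operator@example.org"
    msg.set_content(f"Out-of-range readings: {exceeded}")
    return msg

def send_alert(msg, host="localhost"):
    # Actual delivery; kept separate from build_alert so the decision
    # logic can be tested without an SMTP server.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)

alert = build_alert({"temperature_c": 52.0, "humidity_gm3": 60.0})
```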

3 Working
3.1 Different Cases of Operation
The following set of figures illustrates what the monitoring front panel looks
like in different cases of operation.
Figure 9 shows a typical remote lab monitoring screen where everything looks
okay: temperature and humidity are under control, the fault lines are off, and
a message indicating this is displayed.

Fig. 9. Normal state of operation

Figure 10 shows a warning state of operation, where the temperature is higher
than usual but not high enough to damage components; a message indicating this
is displayed for the operator, along with instructions, in simple terms, for
sorting out this error.

Fig. 10. Warning state of operation

Figure 11 illustrates the error state of operation, where the lines are faulty
and a red light indicates that the machine has stopped running. A message
indicating this is displayed, and a solution is provided for the operator.
Since this condition may require expertise, a notification using auto-alerting
is sent to the concerned personnel.

Fig. 11. Error state of operation

3.2 Different Labs in Operation

Figure 12 illustrates how parameters vary from lab to lab and how they can be
displayed on the monitoring screen. This particular example shows the graph of power
consumption along with other necessary parameters. A sudden spike in power
consumption [11] can be seen, depicted in real time, and a warning is displayed as
well.

Fig. 12. Monitoring different kinds of labs by displaying related information

3.3 Viewing the Recordings

In case of failure, logs are generated and can be viewed later on. The logs are stored
on local storage such as an SD card or hard disk, and can be retrieved once the system
is back online, through TCP/IP or physically [12]. This storage serves as a black box,
and the data can be analyzed to find the reason for the failure.

4 Outcomes

4.1 Seamless Integration

Figure 13 gives an overview of how the cost of a datalogger compares against the total
cost of a remote lab, and how the man-hours required to install a datalogger fare
against the man-hours required to build a remote lab. From the figure, which has a
logarithmic Y axis, it can be observed that the cost of a datalogger typically varies
between 8–14% of the total cost of a remote lab, and the man-hours required to set up
a datalogger are less than 1/15th of those for the remote lab. This supports the motto
of "Seamless Integration" [13]: with minimal cost and effort, a datalogger can be
added to most remote labs.

Fig. 13. How the cost of a datalogger and its installation fares against the total remote lab cost
for different kinds of labs.

4.2 Reduced Time to Detect a Failure and Rectify It

Since the laboratory is continuously monitored for various physical parameters and
fault lines, a warning or error is shown in case of a problem, as depicted earlier in
the Working section, and the time required to detect an error is drastically lower
than before. As illustrated in Fig. 14, for most labs it takes just one third of the
usual time to detect and fix a failure compared to operation without a datalogger.
This is arguably the most important feature of the whole architecture, as it
drastically improves the availability of the lab and helps in easy maintenance of the
infrastructure.

Fig. 14. Compares the time required to detect and correct a failure with a datalogger (in red)
vs. without a datalogger (in blue)

4.3 Increased Working Efficiency

As the time required to detect and correct a failure is minimized, the efficiency of
the system goes up. In Fig. 15 it can be seen that the downtime for most of the labs
is less than 4%, which translates to an efficiency of more than 96%, compared to
86–91% earlier. This is a significant result that supports the case for the datalogger.

4.4 Cost Efficiency

Finally, an important measure of the outcome for sustained use of dataloggers is the
savings due to the installation over a period ranging from one year to five years,
extending to ten years [14, 15]. According to the statistics, savings over five years
are generally more than the cost of the datalogger itself, as can be seen in Fig. 16.
Our estimates show that the breakeven point occurs two to three years from the time
of installation. This shows that the datalogger is a profitable venture from both a
qualitative and a quantitative perspective.
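The breakeven estimate above is plain arithmetic; a minimal sketch follows, with purely illustrative numbers that are not taken from the paper's data:

```python
# Hypothetical payback calculation: with an assumed datalogger cost and
# assumed yearly savings, the breakeven point is cost / yearly savings.

def breakeven_years(datalogger_cost, yearly_savings):
    return datalogger_cost / yearly_savings

def savings_over(years, datalogger_cost, yearly_savings):
    """Net savings after a given number of years."""
    return years * yearly_savings - datalogger_cost

# Illustrative numbers only (not from the paper):
cost, yearly = 500.0, 200.0
print(breakeven_years(cost, yearly))   # 2.5 years, within the 2-3 year estimate
print(savings_over(5, cost, yearly))   # 500.0 net savings after five years
```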

Fig. 15. Downtime Comparison with and without dataloggers for different labs

Fig. 16. Compares cost of data logger, yearly savings due to datalogger and savings made over
five years

5 Conclusion

This paper started with the importance of remote labs in the current context, and
the need for a Wi-Fi datalogger for the efficient functioning of remote labs was
established. The approach to building the datalogger was discussed from scratch:
the foundation of the datalogger, from choosing the lab to choosing the components
and hardware, was covered. The building of the datalogger in its hardware, software
and server aspects is illustrated in Sect. 2. The working of the datalogger, with live
screens from different labs and different states of operation, is shown in the Working
section. The datalogger was judged on the parameters of integration cost and effort,
time to detect and rectify a failure, efficiency and, finally, cost. The results clearly
demonstrate the effectiveness of the datalogger, and the importance of dataloggers
in the remote labs ecosystem is well established through this paper.

Acknowledgment. The authors wish to extend thanks to various universities and industries
across India and across the world for providing opportunities to test the datalogger
architecture and gather findings.

References

1. Auer, M.E.: Virtual lab versus remote lab. In: 20th World Conference on Open Learning and
Distance Education (2001)
2. Ram, B.K., Kumar, S.A., Sarma, B.M., Mahesh, B., Kulkarni, C.S.: Remote software
laboratories: facilitating access to engineering softwares online. In: 2016 13th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 409–413. IEEE,
February 2016
3. Pruthvi, P., Jackson, D., Hegde, S.R., Hiremath, P.S., Kumar, S.A.: A distinctive approach to
enhance the utility of laboratories in Indian academia. In: 2015 12th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 238–241. IEEE,
February 2015
4. Esche, S.K., Chassapis, C., Nazalewicz, J.W., Hromin, D.J.: An architecture for multi-user
remote laboratories, dynamics (with a typical class size of 20 students), 5, 6 (2003)
5. Outram, J.D., Outram, R.G.: Adaptive data logger. U.S. Patent No. 4,910,692, 20 March
1990
6. Yunlong, F., Fang, A., Li, N.: Cortex-M0 processor: an initial survey. Microcontrollers
Embed. Syst. 6, 33 (2010)
7. D’Ausilio, A.: Arduino: a low-cost multipurpose lab equipment. Behav. Res. Methods 44(2),
305–313 (2012)
8. Gontean, A., Szabó, R., Lie, I.: LabVIEW powered remote lab. In: 2009 15th International
Symposium for Design and Technology of Electronics Packages (SIITME). IEEE (2009)
9. Auer, M., Pester, A., Ursutiu, D., Samoila, C.: Distributed virtual and remote labs in
engineering. In: 2003 IEEE International Conference on Industrial Technology, vol. 2,
pp. 1208–1213. IEEE, December 2003
10. Aloni, E., Arev, A.: System and method for notification of an event. U.S. Patent
No. 6,965,917, 15 November 2005

11. Shnayder, V., Hempstead, M., Chen, B.R., Allen, G.W., Welsh, M.: Simulating the power
consumption of large-scale sensor network applications. In: Proceedings of the 2nd
International Conference on Embedded Networked Sensor Systems, pp. 188–200. ACM,
November 2004
12. Tinga, T.: Application of physical failure models to enable usage and load based
maintenance. Reliab. Eng. Syst. Saf. 95(10), 1061–1075 (2010)
13. Vuletid, M., Pozzi, L., Ienne, P.: Seamless hardware-software integration in reconfigurable
computing systems. IEEE Des. Test Comput. 22(2), 102–113 (2005)
14. Robinson, R.: Cost-effectiveness analysis. BMJ 307(6907), 793–795 (1993)
15. Tanner, M., Eckel, R., Senevirathne, I.: Enhanced low current, voltage, and power
dissipation measurements via Arduino Uno microcontroller with modified commercially
available sensors. APS March Meeting Abstracts (2016)
Flipping the Remote Lab with Low Cost Rapid
Prototyping Technologies

J. Chacón(B) , J. Saenz, L. de la Torre, and J. Sánchez

Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain

Abstract. This work proposes the idea of flipping the remote lab.
A flipped remote lab would consist of asking students to build a
remotely accessible experiment, so that teachers would test the lab in
order to evaluate it, instead of creating it themselves. Building a remote
lab is a multidisciplinary activity that involves using different skills and
promotes lifelong learning and creativity. Also, by assigning this task
to groups, students would also build up abilities such as teamwork,
communication and leadership. Because creating a remote lab is a
complex task, the idea is to use the experience acquired during many
years of development and use of virtual and remote labs for teaching
engineering and physics to simplify the process and make it manageable
for students. Given the current state of the technology, providing
students with some guidelines and reference designs should be enough
to make it feasible for them to develop a remote experiment.

Keywords: Remote labs · Flipped classroom · Low cost platforms

1 Introduction

The flipped classroom is an instructional strategy based on reversing the traditional
learning process. Students carry out research at home, are actively involved in the
construction and acquisition of knowledge, and also participate in the evaluation
of their learning. At the same time, it is widely accepted that solving today's
major challenges requires a multidisciplinary approach. Therefore, combining
the flipped classroom teaching paradigm with online control education labs can
be an interesting and formative experience for engineering students.
The purpose of this work is to propose the idea of flipping the remote lab.
A flipped remote lab would consist of asking students to build a remotely
accessible experiment, so that teachers would test the lab in order to evaluate
it, instead of creating it themselves. Building a remote lab is a multidisciplinary
activity that involves using different skills and promotes lifelong learning
and creativity. Also, by assigning this task to groups, students would
also build up abilities such as teamwork, communication and leadership. Because
creating a remote lab is a complex task, the idea is to use the experience acquired
during many years of development and use of virtual and remote labs for teaching
engineering and physics to simplify the process and make it manageable for
students. Given the current state of the technology, providing students with
some guidelines and reference designs should be enough to make it feasible for
them to develop a remote experiment.

c Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6 23
Recently, low cost single board computers such as the Raspberry Pi or Beaglebone
Black, and 3D printing technologies, which allow for rapid prototyping of
mechanical systems, have become pervasive. These tools provide an interesting
framework that can assist the creation of remote labs. The hardware framework
is complemented with reusable software components, a web-based architecture,
and standard communication protocols to reduce development costs and
effort. Based on this paradigm, an easily replicable remote lab architecture is
proposed, using 3D printed part designs that have been open source licensed to
allow free use and modification, software components that implement the
different subsystems of the lab, and elements that either can be gathered from
old electronic devices or are cheap and commonly used components.
Some examples can be found in the literature of the flipped classroom
adapted to blended learning [5,8] and to engineering subjects [7], as well as of
laboratories based on single-board low-cost platforms, both hands-on [3] and
remote [1,4,6].
Currently, a virtual and remote lab of an air flow levitation system has been
built using the proposed methodology. It consists of a small object that has to
be lifted using the air flow generated by a fan inside a cylinder. The position
of the levitating object is measured with an infrared distance sensor and is
used to control the rotation speed of the fan. The prototype will be incorporated
into a master's degree course in control engineering. Some of the benefits expected
from the experience are to provide students with a global insight into engineering
processes, increase their motivation to research different sensing technologies,
and promote their creativity.

2 Approach
Since the idea is to let students build their own systems, the remote lab design
has to be well thought out and structured. There are a few requirements the lab
should meet to be realizable:

– The design should be as low-cost as possible (about or under $100). Students
will have to construct the lab, so it is likely that they will wear out the
components more rapidly. Moreover, in case there are several students or working
groups, the components (and therefore the cost) will have to be duplicated.
– The design should be easily replicable, so that students will be able to build it.

If these requisites are met, not only will students be able to build the lab
as part of an assigned work, but even those who want to have their own
experimentation platform may afford to construct one on their own.
The lab should use open-source technologies, mainly to reduce costs in order
to meet the first requirement, but also because this approach encourages
acquiring knowledge by tinkering with the system design, proposing modifications
or enhancements, and so on. There are some other aspects that have been
considered in order to keep the costs of building the remote lab low. The first one is
to use materials that are easy to obtain and have a reasonable cost. For example,
the IR sensor and single-board computer are cheap and can be bought in
virtually any electronic components shop. Also, components should be reused
whenever possible; the fans can easily be gathered from an old PC or other
electronic devices. Taking advantage of the boom in rapid prototyping technologies
is also important: 3D printing greatly reduces the cost of mechanical prototyping,
and it is relatively easy to have access to a 3D printer, either at the
university, at a specialized shop, or through online services that print your designs.
The design of the laboratory can be decomposed into several tasks, some of
which have to be done by the educator, and others that have to be prepared to
be assigned to students. The tasks that correspond to educators are:

1. Design of the experience to be carried out.
2. Design and construction of the plant.
3. Design and construction of the server software.
4. Creation of the GUI.

2.1 Design of the Experience to Be Carried Out

It is the responsibility of the educator to think about what concepts should be
learned, what kind of system will be used for the laboratory, the physical variables
that will be measured, the elements that will interact with the environment, and so
on. Depending on the knowledge level of the students, it has to be decided which
tasks may be assigned to them.
For example, if students are good at writing software but lack knowledge of
electronics, it may be reasonable to give them the plant and let them create
only the software parts. Alternatively, the design and construction of the structural
parts can be assigned to students with mechanical engineering knowledge.

2.2 Design and Construction of the Plant

The design and construction of the plant is a thorough engineering process from
which students can benefit, acquiring a learn-by-doing understanding of how to
convert an idea or a concept into a practical solution.
In the next paragraphs, a (not exhaustive) review is provided of the hardware
platforms that are available to develop electronic systems. It is followed
by a review of some open source CAD tools to model structural components and
electronic circuits. Finally, the architecture followed by our previous designs is
discussed, which can be used as a reference (but not the only and
definitive solution) for future labs.

Hardware. Since the release of the first Raspberry Pi model, a number of single
board computers have appeared intending to fit developers' needs, which range
from small DIY projects such as home media centers or home automation appliances
to high performance research computing. Most of these boards are specifically
focused on the maker community, students and educators, so they are fully open-
source hardware (Fig. 1).

Fig. 1. Screenshots of two popular open-source CAD software tools, (a) FreeCAD and
(b) OpenSCAD

An interesting feature of these single-board computers is their ability to run
a complete OS. As an example, a Raspberry Pi can run several Linux distros
(Raspbian, Ubuntu, LibreELEC, etc.), Windows 10 IoT Core, or RISC OS,
immediately opening a universe of possible applications: it is easy to set up a web
server, enable remote connections through SSH or even graphical sessions, or
use many different programming languages to develop a project. Furthermore,
the integrated input/output capabilities through digital IO, interconnection
protocols (SPI, I2C, etc.), or AD converters make it easy (and affordable) to build
electronic systems, even for non-experts.

Software. CAD tools assist the designer in modelling the physical components
which will be part of the system, in our case the structural parts and the
electronic circuits. It is outside the scope of this work to discuss the pros and cons
of the many options available. However, it is worth mentioning at least some of
the most popular open-source alternatives that cover the lab's needs.
FreeCAD is an open-source 3D CAD software tool, very popular in the
3D printing community. It has many features: parametric design, multiplatform
support (it works on Linux, Windows and Mac), a fully customizable GUI, and native
support for Python scripting and extensions.
OpenSCAD is another popular tool, mostly used to design 3D printed parts.
Unlike FreeCAD, it uses a non-graphical modelling approach based on a specific
description language, so the creation process is more similar to traditional
programming. One of the advantages of this approach is the flexibility to
parameterize designs.

The electronic circuits and the PCBs have been created with the software
KiCad, a multiplatform and open-source tool that has the support of CERN,
which has made important contributions to it as part of the Open Hardware
Initiative (OHI).
As in the case of 3D printing, there are many PCB manufacturers to which you
can send your circuit design and receive a PCB of professional quality at a
moderate cost or, following the maker paradigm, you can build your own circuit
with a CNC PCB milling machine or a chemical etching process.
At the end of this stage, the outcomes are the structural parts and electronic
circuits needed to construct the plant.

2.3 Design and Construction of the Server Software

The software in the target computer must implement several capabilities,
including: hardware interface, datalogging, communication, and control.

Hardware Interface. The purpose of the hardware interface is to read measurements
from the sensors and send values to the actuators. Though it is obviously very
platform dependent, it is good practice to use standard libraries and protocols.
For example, the Arduino API is widely used for its simplicity, and it has been
ported to other hardware, like the Beaglebone boards or the Raspberry Pi. The
functionality to be covered can usually be reduced to reading and writing digital
or analog inputs and outputs.
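This read/write functionality can be hidden behind a small platform-independent interface. The sketch below is a hedged illustration with a mock backend: the class and method names are our own, loosely following the Arduino naming, and a real lab would reimplement the four calls on top of actual GPIO bindings.

```python
# Minimal Arduino-like hardware interface. MockBoard stands in for real
# GPIO bindings; only these four methods would need a per-platform port.

class MockBoard:
    def __init__(self):
        self.pins = {}  # pin number -> last written value

    def digital_write(self, pin, value):
        self.pins[pin] = bool(value)

    def digital_read(self, pin):
        return self.pins.get(pin, False)

    def analog_write(self, pin, duty):
        # Clamp the PWM duty cycle to [0, 1] before "writing" it.
        self.pins[pin] = max(0.0, min(1.0, duty))

    def analog_read(self, pin):
        return self.pins.get(pin, 0.0)

board = MockBoard()
board.digital_write(13, True)   # e.g. a status LED
board.analog_write(3, 0.75)     # e.g. fan PWM duty cycle
print(board.digital_read(13), board.analog_read(3))
```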

Datalogging. Once the values have been acquired, they need to be stored so they
can be accessed whenever required. For that purpose there are many options,
but again it is recommended to use a standard solution. There are time series
database systems (TSDBs) specialized in time series management, such as
InfluxDB, Graphite, OpenTSDB or RRDtool.
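A dedicated TSDB would be the standard choice in production. To show the underlying idea of timestamped samples with range queries, here is a minimal stdlib sketch using SQLite; the schema, series name and function names are hypothetical.

```python
import sqlite3
import time

# Minimal time-series store in the spirit of a TSDB, using only sqlite3.
# A real deployment would use InfluxDB/OpenTSDB/RRDtool for retention
# and downsampling; this only illustrates the access pattern.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE samples (ts REAL, series TEXT, value REAL)")

def log(series, value, ts=None):
    db.execute("INSERT INTO samples VALUES (?, ?, ?)",
               (ts if ts is not None else time.time(), series, value))

def query(series, t0, t1):
    """Return (ts, value) pairs for one series in a time window."""
    return db.execute(
        "SELECT ts, value FROM samples WHERE series=? AND ts BETWEEN ? AND ? "
        "ORDER BY ts", (series, t0, t1)).fetchall()

for t, h in enumerate([12.0, 12.4, 13.1]):  # three height samples
    log("height_cm", h, ts=float(t))
print(query("height_cm", 0.5, 2.5))
```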

Communication. The server software, running on the target platform (the
single-board computer), must provide an API to interact with the system. The
Remote Interoperability Protocol (RIP) has been proposed to interconnect
engineering systems with user interfaces. It is a simple API based on the JSON-RPC
protocol, which is human-readable and can be easily integrated with JavaScript
applications, as it uses the JavaScript Object Notation.
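As a rough illustration of what a JSON-RPC style exchange looks like, the sketch below builds a request and dispatches it on a toy server. The method name "get" and the parameter shape are invented for the example; they are not the actual RIP API.

```python
import json

# JSON-RPC 2.0 style exchange, as used by RIP-like protocols. The method
# name and variable names here are illustrative, not the real RIP spec.

def make_request(req_id, method, params):
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def handle(raw, variables):
    """Tiny server-side dispatcher for a read-variables method."""
    req = json.loads(raw)
    if req["method"] == "get":
        result = {name: variables[name] for name in req["params"]}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "result": result})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "error": {"code": -32601,
                                 "message": "Method not found"}})

plant = {"position": 14.2, "fan_duty": 0.6}
reply = handle(make_request(1, "get", ["position"]), plant)
print(json.loads(reply)["result"])
```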

Control. The remote labs have a local controller implemented, which can be as
simple or as sophisticated as needed. In the case of a control engineering lab, it
must be a central part of the design, but even in other cases some safety measures
are always needed, to ensure that the system cannot be harmed by accident or by
a malicious user.
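Such safety measures often reduce to clamping actuator commands and failing safe on implausible sensor readings. A hedged sketch follows; the limits and names are hypothetical, not taken from any particular lab.

```python
# Sketch of a safety layer: every actuator command is clamped to a safe
# range, and an implausible sensor reading forces the actuator off.

FAN_MIN, FAN_MAX = 0.0, 1.0   # allowed fan duty cycle
POS_MIN, POS_MAX = 0.0, 50.0  # plausible sensor range (cm)

def safe_command(requested_duty, measured_position):
    if not (POS_MIN <= measured_position <= POS_MAX):
        return 0.0  # sensor fault: fail safe, stop the fan
    return max(FAN_MIN, min(FAN_MAX, requested_duty))

print(safe_command(1.7, 20.0))  # over-limit request is clamped to 1.0
print(safe_command(0.5, -3.0))  # implausible sensor reading -> shut off
```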

2.4 Creation of the GUI

The interface design should be intuitive and easy to use, and should include a
webcam visualization of the system and a way to control and monitor the plant.
The authors' labs are created with Easy Java/Javascript Simulations (EjsS), an open
source tool which offers an easy way to create simulations and remote labs with
a GUI for users with no programming skills. These interfaces can be made according
to the users' needs for interactivity and visualization.

3 Use Case: Air Levitator System

The following paragraphs give an insight into all the stages of the lab building,
from the 3D modelling and printing of the structural pieces to the electronics
and software setup, describing the air levitator system as a reference.
3.1 Air Levitator System

The air levitator system is composed of a cylinder in which a forced air flow
is used to keep a small object levitating at a desired position. The structure is
simple on purpose; there are only a few elements: a methacrylate tube with a
nozzle at one end, coupled with a blower fan. Both elements are supported by
an open and movable stand, which lets the air flow into the fan. The system has
been built using only the following components:
– A methacrylate tube.
– A small and light object.
– 3D printed parts.
– A single-board computer (Beaglebone).
– An infrared distance measuring sensor (PIR).
– A PC fan.
– Some discrete electronic components and a PCB.
– A webcam.

Printed Parts. Most structural elements have been printed on a Prusa Mendel
i3 3D printer, a very popular and affordable RepRap printer, available at the
authors' department. The 3D parts have been modeled with FreeCAD.

Electronics. The air levitator system is controlled by a single-board computer
running a GNU/Linux distribution. The Beaglebone provides general purpose
input/output (GPIO) pins to interconnect with external components. Since the
range of the voltage signal provided by the sensor (PIR) lies outside the one
admitted by the analog inputs of the board (0–1.8 V), it must be adapted before
being connected. Similarly, the actuators (fans) require voltages and currents
that cannot be directly handled by the board, so a signal conditioning circuit
has to be used.
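The level adaptation on the sensor side can be as simple as a resistive divider, V_out = V_in * R2 / (R1 + R2). The check below uses hypothetical resistor values and assumes a sensor peak of about 3.1 V; the actual circuit of the lab may differ.

```python
# Back-of-the-envelope check for the level adaptation: a resistive
# divider scales the sensor output into the board's 0-1.8 V ADC range.

def divider_out(v_in, r1, r2):
    return v_in * r2 / (r1 + r2)

R1, R2 = 10_000, 12_000  # hypothetical resistor pair (ohms)
v_max = divider_out(3.1, R1, R2)
print(round(v_max, 3), v_max <= 1.8)  # 1.691 True: inside the ADC range
```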

Software. The hardware interface task is accomplished using the bonescript
library, which basically mimics the Arduino API for the Beaglebone and the
GPIO pins of the board. There is a real-time loop implementing the time-critical
actions: read the sensors, update the controller, and write the outputs. Technically
it is not actually real-time, because this is currently not supported by the Node.js
bonescript library, but for the time scale of the system, which is sampled at a
100 ms rate, it performs correctly. In case hard real-time is needed, there are
other alternatives (such as C++) supported by the Beaglebone board.
The datalogging capabilities have been separated into a low priority task that
periodically dumps measurements and control actions to a database, so the data is
stored and can be accessed to perform off-line processing of past sessions.
The communication subsystem, which makes the server functionality accessible
from outside of the lab computer, implements the Remote Interoperability
Protocol (RIP, [2]), which provides a standard API to control and monitor the
hardware. This basically means that any RIP-enabled application can easily
interconnect with the server to read and modify variables and plant parameters,
so it is easy to decouple the GUI design from the rest of the system.
Finally, the control subsystem implements a PID controller whose parameters
can be modified and tuned. The control subsystem is prepared to be extended with
more sophisticated controllers without much development effort.
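A discrete PID of this kind, stepped at the lab's 100 ms sample period and saturated to the actuator range, could look like the following sketch. The gains are placeholders, not the tuned values of the actual lab.

```python
# Discrete PID controller stepped at the 100 ms sample period mentioned
# above, with the output saturated to the fan's admissible duty cycle.

class PID:
    def __init__(self, kp, ki, kd, dt=0.1, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = (self.kp * error + self.ki * self.integral
             + self.kd * derivative)
        # Saturate to the actuator range (part of the safety measures).
        return max(self.out_min, min(self.out_max, u))

pid = PID(kp=0.05, ki=0.01, kd=0.02)
print(pid.step(setpoint=20.0, measurement=14.0))  # saturates at 1.0
```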

The GUI. The interface design is clean and simple, sharing the same layout
with the virtual lab: there is a view of the system on the left, obtained
from the laboratory webcam (the equivalent of the 3D visualization in the
virtual lab), and some plots on the right showing the time evolution of the
variables of interest (the height of the lifted object measured by the IR sensor,
the setpoint, and the control signal sent to the fan). Finally, at the bottom there
is a control panel which allows modifying some system parameters, such as the
controller gains or the setpoint, and the connection buttons, which are analogous
to the simulation execution controls in the virtual lab. Figure 2 shows the remote
lab web interface, designed with EjsS and the RIP Model Element (an add-on)
which enhances EjsS with RIP interconnection capabilities.

Fig. 2. The remote lab.

4 Conclusions
In recent times, it is not unusual that students, even of first courses of engi-
neering, have at least basic knowledge of the mentioned development platforms,
and a good predisposition to use them. In spite of that, the popularity of 3D
printing technologies and the do-it-yourself (DIY) and the maker community
can be an attractive way of drawing the students’ attention. As an example of a
similar approach, some universities already proposed robotic competitions where
students are asked to solve some problems using basic construction kits. These
activities, which have demonstrated to benefit students’ development, are not
very different in nature compared to the one proposed in this work. Therefore,
it is expected to obtain great profit from the flipped remote lab.

Acknowledgements. This work was supported in part by the Spanish Ministry of
Economy and Competitiveness under projects DPI-2012-31303 and DPI2014-55932-

References

1. Balula, S., Henriques, R., Fortunato, J., Pereira, T., Borges, H., Amarante-Segundo,
G., Fernandes, H.: Distributed e-lab setup based on the Raspberry Pi: the hydrostatic
experiment case study. In: 2015 3rd Experiment@ International Conference,
pp. 282–285 (2015)
2. Chacón, J., Farias, G., Vargas, H., Visioli, A., Dormido, S.: Remote interoperability
protocol: a bridge between interactive interfaces and engineering systems. IFAC-
PapersOnLine 48(29), 247–252 (2015)
3. Krauss, R.: Combining Raspberry Pi and Arduino to form a low-cost, real-time
autonomous vehicle platform. In: 2016 American Control Conference (ACC), pp.
6628–6633, July 2016
4. Michels, L.B., Gruber, V., Schaeffer, L., Marcelino, R., da Silva, J.B., de Resende
Guerra, S.: Using remote experimentation for study on engineering concepts through
a didactic press. In: 2013 2nd Experiment@ International Conference,
pp. 209–211, September 2013
5. Shi, J., Yuan, S., Zou, Q.: From practice to experiment: Development and enlight-
enment of flipped classroom in China. In: 2016 International Symposium on Edu-
cational Technology (ISET), pp. 94–98, July 2016
6. Simão, J.P.S., Lima, J.P.C., Heck, C., Coelho, K., Carlos, L.M., Bilessimo, S.M.S.,
Silva, J.B.: A remote lab for teaching mechanics. In: 2016 13th International Con-
ference on Remote Engineering and Virtual Instrumentation (REV), pp. 176–182,
February 2016
7. Toner, N.L., King, G.B.: Restructuring an undergraduate mechatronic systems
curriculum around the flipped classroom, projects, LabVIEW, and the myRIO. In: 2016
American Control Conference (ACC), pp. 7308–7314, July 2016
8. Zhang, H., Meng, L., Han, X., Yuan, L., Wang, J.: Exploration and practice of
blended learning in HVAC course based on flipped classroom. In: 2016 International
Symposium on Educational Technology (ISET), pp. 84–88, July 2016
Remote Experimentation with Massively
Scalable Online Laboratories

Lars Thorben Neustock(B) , George K. Herring, and Lambertus Hesselink

Stanford University, Stanford, CA 94305, USA

Abstract. In this paper we present a solution for highly scalable online
laboratories at low cost. Massively Scalable Online Laboratories
(MSOL) is an online platform that enables the virtualization of real
experiments in a fashion that very closely mimics a physical experiment.
Moreover, it includes social features to enable peer-to-peer learning and
facilitates the creation of an online community. To add an experiment to
the MSOL platform, an existing setup is automatically turned into a data
set, accessible through database queries. In this way, MSOL provides an
effective and scalable solution to add an important element to current
online education systems at low cost. The MSOL platform might also
accompany scientific and engineering papers to add another channel for
disseminating qualitative and quantitative data.

Keywords: Online laboratory · Education · Experimentation · Scalability


1 Introduction

Massive Open Online Courses (MOOCs) have the potential to reach vast audiences
in both geographic and socioeconomic scope. Currently, many universities,
including Stanford and MIT, use online coursework to augment educational
programs for their students, provide professional programs for a fee, and offer video
lectures as MOOCs to the general public. These universities use online coursework
as an augmentation of physical classrooms in a flipped classroom approach,
where students study online education materials to enable increased interaction
between students and teachers. This enhances educational efficiency and depth of
learning. Moreover, some professional certificate programs, such as Udacity, focus
on online coursework to reach their students. All of these concepts rely on the
ability of online coursework to be a scalable and effective means of education [1].
Although current techniques of video streaming allow users to easily view
lectures online and, in some cases, talk to advisers or teaching assistants via video
call, there is currently no means of including experiments in an online
coursework environment. However, experiments, which normally take place in a
laboratory environment with severe time and cost restrictions, are a crucial part

c Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6 24

of education in science and technology. Experimentation provides an opportunity
for students to gain intuition about physical processes and encourages intrinsic
motivation. In the end, all major discoveries in natural science and medicine
were made via experimentation, and only through experimentation can a theory
be validated.
The MSOL concept aims at recording a fully digitized version of a physical
experiment, which can be displayed online, replicating the feeling of a physical
experiment in a laboratory. Our proposed concept, called Massively Scalable
Online Laboratories (MSOL for short), is based on the iLabs platform developed
at Stanford in 1996. This platform was, to the best of our knowledge, the
first Internet-controlled laboratory, and allowed students to control and observe
a physical experiment "in a box". It was tested at Stanford and around the
world through the "Optics for Kids" website supported by the Optical Society
of America. We were able to show that remote experimentation significantly
improved a student's ability to master new knowledge [2].
The new approach presented in this paper aims for the same goal by providing
a highly scalable version of online experiments, where the experiment is fully
digitized and thus easy to distribute over the internet.

2 Concept of Massively Scalable Online Laboratories

The proposed concept of MSOL is a highly scalable version of an online
laboratory, where students can easily access the contents of a lab and
collaborate with other students.
The philosopher Socrates regarded the “Purposeful Conversation” as
the best means of education: the best learning is achieved by direct
contact between people who share the aim of educating themselves. Based
on this concept, we created a platform for a shared laboratory experience, which
can reach broad audiences connected via meetings, online coursework, or social
media on a one-to-one basis. This tool will augment online education and can
easily be embedded in an online course. The high-level idea behind our approach
is split into two steps. The first step is to turn a physical experiment into a data
set by recording all of its possible states. Then, this laboratory experiment can
be represented as a MSOL on our platform to a user who can control the vir-
tual experiment, similar to the experience of controlling a real experiment over a
remote connection. In the setting of a MSOL, which is accessed by users through
a web-page, students can collaborate to explore a diverse set of experiments and
the configurations possible in each experiment. The visualization of the exper-
iment is designed to be interactive. Students can study how the experiment
behaves according to the inputs that they provide. This allows student driven
exploration as they observe how different control settings result in diverse exper-
imental results. The student observations then serve as the basis for purposeful
conversations between peers. By operating the virtual experiment, students learn
how to operate the equipment and observe the experimental results changing as
a result of their actions. Since the experiment has to be run only once to turn
it into a data set, and subsequently into a MSOL, the platform provides access
to an otherwise economically restricted, advanced laboratory experience.
Moreover, since the laboratory is provided over a web page, the barrier to entry is
low. It can be accessed from all over the world and has very low acquisition costs
for the students and educational institutes. Each additional student requires only
enough resources to respond to their web requests. Therefore, it can be used in
resource poor areas when laboratory equipment is unavailable, or it can aug-
ment existing remote or in-class education. Additionally, users will be able to
access experiments and instruments that would otherwise require extensive prior
training, or that are dangerous, very expensive, or unavailable.
Moreover, the barrier to creating a laboratory by turning an existing
experiment into a data set is very low as well. It only needs to be done once and
can easily be achieved by a simple computer program. Today, most experiments
are already run by a computer, which means that only one layer of automatic
sweeping through possible permutations needs to be added. The MSOL platform
provides the required automation tools and storage facilities.
To encourage team building and peer to peer learning, interactive social fea-
tures, as described in Sect. 3, are added to the website. This design, alongside the
low entrance barriers, allows the creation of a large online community composed
of small sub-communities that encourage “Purposeful Conversation”.

3 Implementation of MSOL
3.1 Turning an Experiment into a Data-Set
The first step in turning an existing experiment into a MSOL is to record it in
all possible stages and save the corresponding information, such as values from
sensors or images of the experiment. Most modern experiments are already con-
figured to be computer controlled. The computer controls allow for repeatability
and accuracy in a research setting. This computer control also allows a program
to iterate through all possible states of all controls automatically with only very
little extra effort. If the sensor data and associated images are recorded with
each state in an automatic sweep, then this data is all that is required to cre-
ate a virtual experiment compatible with the MSOL interface. The majority of
relevant experiments can be turned into data in this fashion.
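Such a sweep can be sketched in Python (the paper's diffraction rig was in fact recorded with a Python script); here `set_controls`, `read_sensors`, and `capture_image` are hypothetical hooks into the experiment's control software, and the control names and ranges are purely illustrative:

```python
import itertools

# Illustrative control ranges; a real experiment would declare its own.
CONTROLS = {
    "laser": ["red", "green"],
    "grating": [1, 2, 3],
    "light": ["on", "off"],
}

def sweep(set_controls, read_sensors, capture_image):
    """Drive the rig through every permutation of control states and
    record sensor values and an image for each state."""
    records = []
    names = list(CONTROLS)
    for values in itertools.product(*(CONTROLS[n] for n in names)):
        state = dict(zip(names, values))
        set_controls(state)                 # apply the state to the rig
        records.append({
            "state": state,
            "sensors": read_sensors(),      # e.g. photo-detector intensity
            "image": capture_image(state),  # e.g. saved image file name
        })
    return records
```

With stub functions in place of real hardware, the sweep yields one record per permutation (2 × 3 × 2 = 12 in this illustrative configuration).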
With decreasing storage costs and increased internet bandwidth, it is
reasonable to store more than 10^5 images per experiment, which provide the view
of the experiment at each permutation as if the observer were in the room. The
images can then be stored on a local hard drive, uploaded on a server, and simul-
taneously accessed by thousands of different users. Reviewing data just as a list
would be a very tedious task; therefore, the platform provides an interactive
interface that only shows relevant pictures and data, given the control states
that the user is interested in. This reduces the required bandwidth and increases
the scalability.
In general, before running the automation, the number of planned permutations
should be considered and evaluated for feasibility. Yet, in most
educational experiments merely several thousand input combinations will yield
an interesting result and thus, this constraint does not cause any reduction in
the capability to recreate the relevant portions of an experiment.
During or after recording the experiment, experiment data can be uploaded
to a MSOL server. This upload will contain a data file, which includes the number
of different controls, binary controls and indicators, as well as information about
their dimensionality and range. Subsequently, each permutation of the state of
the experiment will be encoded by the values of the controls and indicators.
With this state information, image data will be uploaded to a database. Thus,
the whole experiment will be available in a database which allows for low-latency
access.
Alongside this lab data, and in preparation for displaying it, more information
can be added to customize the laboratory experience. The names of the indica-
tors and controls can be chosen freely. In addition, the user can upload a short
summary about experimental operation, an abstract on the theory of the exper-
iment and more in-depth information, e.g. pictures, tables, exact experimental
parameters, and theory of operation.
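A minimal sketch of this idea, with invented field names (the actual MSOL upload format is not specified in the text): a manifest lists each control's range, and a canonical key derived from the control values addresses the matching image and indicator readings in the database:

```python
def state_key(state):
    """Encode a control state as a canonical string key; sorting the
    control names makes the key independent of insertion order."""
    return "|".join(f"{name}={state[name]}" for name in sorted(state))

# Illustrative manifest: control ranges plus indicator names.
manifest = {
    "controls": {"laser": ["red", "green"], "grating": [1, 2, 3]},
    "indicators": ["optical_intensity"],
}

database = {}  # state key -> stored image reference and indicator values

def store(state, image, indicators):
    """Record one permutation of the experiment."""
    database[state_key(state)] = {"image": image, "indicators": indicators}

def lookup(state):
    """Return the record for a control state, as the web interface would
    when the user changes a control."""
    return database[state_key(state)]
```

Because only the record addressed by the current control state is fetched, each user interaction costs a single small database read, which is what makes the approach scale.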

Sample Case: For the purposes of this paper, we chose to demonstrate the
functionality of the MSOL platform with a diffraction experiment. This can
be found on the current online version of the MSOL platform at http://www.
Diffraction at a grating is a fundamental concept in optics,
by which the wave nature of light can be explored. The diffraction experi-
ment used here includes two different lasers and three different grating spacings.
A photo-detector that can be moved along the diffraction pattern is utilized as
an indicator, displaying the varying optical intensity due to the diffracted laser
light. The uploaded experiment also contains a light switch. A picture of the
setup can be seen in Fig. 1. The recording was done with the help of a simple
Python script iterating over all permutations, recording 24,000 different data
points with pictures. This data is uploaded through an upload interface. This
diffraction experiment will function as an example throughout the rest of this paper.

Fig. 1. Experimental setup of the diffraction experiment: (a) Sketch of the setup
(b) Photo of the lab
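As a rough consistency check (the number of detector positions is an assumption on our part, not stated in the paper), the reported 24,000 data points match a full product over the controls if the photo-detector is stepped through 2,000 positions:

```python
from math import prod

# Control cardinalities for the diffraction experiment; the detector
# position count is a guess chosen to match the reported total.
cardinalities = {
    "laser": 2,            # two lasers
    "grating": 3,          # three grating spacings
    "light": 2,            # light switch on/off
    "detector_pos": 2000,  # assumed number of detector positions
}
total = prod(cardinalities.values())
print(total)  # 24000
```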

3.2 MSOL Platform

The MSOL platform is a web application which displays uploaded experiments
in an interactive way, providing tools for social cooperation. It is optimized for
laptops and desktop computers and provides all of the same functionality on
handheld devices. The visitor of the webpage, e.g. a student that wants to deepen
his/her knowledge about diffraction, will be greeted by a starting page, where
he/she can also read about the general idea of the MSOL platform and visit the
list of experiments (see Fig. 2). The visitor can then browse through the listed
experiments, trying out many different experiences in a short time. The core of
the MSOL platform is the display of the experiment. This is illustrated in Figs. 3
through 6. In Fig. 3, the general layout of the experiment display is shown. This
page gets created automatically given the data previously uploaded about the
experiment. Thus, the number of (binary) controls and indicators is experiment
dependent. The user can display the indicator/sensor values by hitting a button
which creates an overlay over the experiment images, which is the center of this
laboratory webpage. The values of the indicators and experiment images will
change depending on the users’ selection of the controls’ states. This update will
happen in real time and appear to the user quickly after the changed input.
Thus, this interface mimics the actual experiment very accurately. The user will
be able to engage with the experiment in a similar way as if he/she were in the
laboratory, especially since, in most cases, he/she would be operating a computer
here as well. The display of the actual labs contains several other features. Firstly,
as seen in Fig. 4, after starting the lab, an abstract with the most important
information will be displayed and the option of taking a tutorial which guides the
user through the interface is provided. This tutorial points to different buttons in
the interface and explains how to operate the MSOL platform. Also, additional

Fig. 2. List of experiments that a user can conduct, with the ability to search for a
particular topic, title, or author.

Fig. 3. Display of the laboratory experiment in the MSOL platform. The experiment
is visible along with the controls to provide input. The indicator overlay is visible.

Fig. 4. (a) Overlay at the beginning of the lab, which gives a first overview and basic
information (b) Part of the tutorial guiding through the functionalities of the platform

information (abstract, theory, and experimental details), will be displayed right

underneath the experiment images.
Secondly, and very important for the idea behind the MSOL platform, there
are various social features available, which can be accessed through another
overlay, shown using the functions button. The most important social feature is
the ability to create a meeting with other users. The meeting feature allows a
group of people to operate an experiment together. If one user changes a control
then the controls, indicators, and associated images change in the interfaces for
all members of the meeting. In this way, people can share their experience and
try to explore the physics underlying an experiment together. In our example,
one user could change the laser color from red to green and all other members
of the meeting would also see how the diffraction pattern would change with a
different laser wavelength. To create such a meeting, as displayed in Fig. 5, the
members have to agree, in advance, on a unique meeting name and all meeting
members must type the unique name in the meeting name field of the functions
overlay in the MSOL experiment. Other social features of the MSOL experiment
include the ability to share your opinion and emotions about the experiment
via social media. There is a button for sharing the link to the current setup
either via a URL or buttons for posting the experiment on Facebook or
Twitter. Additionally, the interface allows the user to comment on the lab using
his/her Facebook account, sharing excitement, giving suggestions, or asking
questions to a broad audience. In our example, users could ask about the physics
behind diffraction, share their findings, or simply express their excitement.

Fig. 5. The MSOL platform with visible functions overlay while creating a meeting to
collaborate on the same experiment with several users.
Thirdly, while conducting the experiment, the user is able to record the data
of the experimental stages he or she is going through in a personalized lab book,
if the record option is selected. While recording, the indicator data points are
automatically added to a text field in the lab book interface, accessible via the
functions overlay. This data can subsequently be downloaded as a .csv-file and
be used for creating plots. This is similar to how an actual experiment would be
used as well. The lab book interface is displayed in Fig. 6. Thus, for our example,
people will be able to record and compare the sensor data of optical intensity
for different diffraction gratings or laser wavelengths.

Fig. 6. Overlay with recorded data, showing indicator values for several control
settings.
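The lab book export can be sketched with the standard `csv` module; the column names here are illustrative, and the real MSOL file layout may differ:

```python
import csv
import io

def lab_book_csv(rows):
    """Serialize recorded data points to CSV text, one row per control
    state the user visited while recording (illustrative column names)."""
    fieldnames = ["laser", "grating", "detector_pos", "optical_intensity"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The resulting text can then be offered to the user as a .csv download for plotting.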
In summary, the MSOL platform, as described in these sections, is able to
accurately recreate the experience of the laboratory by providing an interactive
input and response system. In addition, social features enhance its usability in
an online learning environment.

4 Current State and Conclusion

The MSOL concept presented in this paper will behave like an actual experiment
while offering social features that would enable learning via “Purposeful Con-
versations.” The social features of this approach are noteworthy, since social
engagement improves learning. This approach does not aim to replace any of
those standard approaches in science; instead, it is intended to augment the
usage of both. It is similar to an actual experiment in behavior, showing effects
that cannot be recreated in a simulation. The data points contain noise, and
this randomness provides a feeling close to an actual experiment.
Additionally, the presented approach is only based on retrieving small bits of
information from a database after turning a lab into a dataset. This makes it
scalable and easy to integrate in an online learning environment. To determine
the impact MSOLs can have on education, user testing with universities, MOOC
platforms, and educational programs is planned. We encourage interested
participants to contact us to add experiments to the MSOL platform.
If this technology reaches its full potential, it will become the new standard
in providing experiments for online education. Crucial for this aim is a high
awareness of this new platform among students and educators, a very realistic
display of the experiment, and an easy way to upload new experiments to have
continuously updated content. MSOL can provide these features.

Acknowledgements. G.K. Herring wishes to thank the Stanford Graduate
Fellowship for their support. We also thank Stanford University for partial
funding of this work.

References
1. Dalgarno, B., et al.: Effectiveness of a virtual laboratory as a preparatory resource
for distance education chemistry students. Comput. Educ. 53, 853–865 (2009)
2. Hesselink, L., et al.: Stanford cyber lab: internet assisted laboratories. Int. J. Dis-
tance Educ. Technol. 1(1), 22–39 (2003). Chang, S.-K., Shih, T.K. (eds.), Idea Group
Object Detection Resource Usage Within a Remote
Real-Time Video Stream

Mark Smith, Ananda Maiti (✉), Andrew D. Maxwell, and Alexander A. Kist

School of Mechanical and Electrical Engineering, USQ, Toowoomba, Australia


Abstract. The growth in remote education through technologies such as Remote
Access Laboratories has progressed to a stage where automated interpretations
of visual scenes within a video stream are necessary to provide enhanced learning
experiences. Augmented Reality tools are under development to expand the
current reach and immersion of remote laboratories. Network capabilities
between the experiment host and the client can affect the level of these enhance‐
ments. Augmented Reality relies on sensory engagement, which is critically
linked to the synchronization between the real-time scenes and the computer-
generated enhancements. This work highlights the problems of incorporating
Augmented Reality into Remote Access Laboratories, and the methods to
improve the level of user immersion.

Keywords: Remote laboratories · Augmented reality · Computer network · Data


1 Introduction

Remote Access Laboratories (RALs) provide a service whereby experimental rigs, key
hardware or software can be accessed and operated over a network remotely (Benetazzo
et al. 2000). Remote access provides the ability to deliver training and practical expe‐
rience to a larger cohort of students due to the increased availability of equipment. Most
RAL systems supply a live video stream of the equipment under control and include a
user interface to initiate tests and receive the results.
Augmented Reality (AR) shows a real-world environment with additional, computer
generated information. This allows a user of a service to experience a live action event,
but have the event enhanced through computer generated interactive sensory feedback
(Milgram and Kishino 1994). Users are generally provided sensory information of the
event which might not otherwise be directly viewable and thus extend the range of
information presented.
To date, Augmented Reality and Remote Access Laboratories have not been well
integrated. Incorporating AR into RALs has the potential to improve the practical expe‐
rience by supplying a rich sense of interactive control and immersion in the environment.
Combining these, however, introduces additional complexity and concern to current
RAL systems.

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_25

Remote Access Laboratories are generally affected by network-induced delays (Kist
et al. 2014), where the primary source of bandwidth consumption is typically via the
live video stream of the remote experimental rig. Integration of multifaceted systems
such as AR into RAL environments can therefore exacerbate these issues resulting in
potential rapid consumption of available ICT resources.
RALs provide an important resource to schools and universities (Fisher and Jensen
1980). Schools can utilize RALs at a fraction of the cost of purchasing and upkeeping
didactic resources. Universities serve a diverse range of students, across different
time-zones and demographics, requiring access to resources at any time (Gustavsson 2003).
Applying AR processes to the RAL environment provides an extra level of interaction
with the equipment, allowing for an improved user involvement and immersion into the
test environment (Azuma 1997).
An important aspect of implementing AR features is Computer Vision (CV). This
usually requires extensive data processing and is therefore resource intensive.
Improving the realism and immersion of RALs through AR enhancements comes at the
cost of these resources. CV models, used in security and surveillance fields to
detect and track objects, tend to need training or off-line processing of
recordings because of the limitations of the technology (Fisher and Jensen 1980).
Applying object detection and tracking methods to the equipment within the live
video stream of the experimental rig increases complexity. It may consume
resources to the point that any expected advantage AR provides ends up
nullified. This leads to the key question of what are the additional ICT
resources that are needed for the inclusion of AR services into a RAL environment. This
paper investigates this question.
This work outlines typical resources utilized by common CV models. It describes
an implementation of those models to measure and ascertain the impact each model has
on ICT resources. Consumption of the host computer’s memory and processors is
reported, along with the time taken to process each video frame. These results can then
be used to ascertain the minimum set of resources required for combined AR and RAL
configurations. Additionally, these figures can also be applied to other models as a
baseline of the known resource consumption underlying other processes.
This paper is structured as follows. Section 2 provides a brief overview of the current
RAL and AR works, focusing on any overlapping areas. Issues pertaining to AR
resources are addressed in Sect. 3, where special computer vision aspects are measured
and explained. Section 4 highlights methods to improve user immersion within the AR
RAL environment when bandwidth limitations exist. Section 5 concludes this paper.

2 Current Research

The field combining AR within the RAL environment is relatively new. Augmented
Reality is a component of the Virtual Continuum. This is a sliding scale representing
full reality at one extreme, and a completely virtual environment on the other extreme.
Virtual test rigs and experiments have been used in engineering education and the engi‐
neering industry for more than twenty years. With the recent advances in computing
availability and network capacity, these original activities bear little resemblance to the
current RAL systems (Overstreet and Tzes 1999). Virtual rigs provided graphical
representations of instruments which were controlled over a proprietary data bus (Fisher
and Jensen 1980). Overstreet and Tzes (1999) produced a client/server configuration
which promoted a rapid uptake of web-based RAL configurations. Virtualized
equipment has dominated the field. More recently, infrastructure costs and capabilities
have caught up with expectations.
Expanding bandwidth now provides the infrastructure required for live video feeds
for remote systems (Stauffer and Grimson 2000). The inclusion
of video streams into RAL systems has also provided the impetus to the field to expand
into other non-science and engineering fields. Diverse schools and faculties are utilizing
RAL to enhance their pedagogical outcomes. Disciplines as diverse as Nursing (Maiti
et al. 2016) and surveying have benefited from practical remote control of technical
equipment.
RAL systems currently depend on live video streams for the user to observe the
operation of the equipment; however, interaction with the equipment is limited. Famil‐
iarization with technical equipment is somewhat restricted without the senses being
engaged with the functionality (Ester et al. 1996). Early mixed reality systems have
utilized fully virtualized instrumentation (Maiti et al. 2013). These mixed reality systems
were developed completely in-house, utilizing local resources. Support equipment
consisted of computer hardware and applications to simulate the environment and
provide users with virtualized objects. Virtual Reality systems have not lived up to the
hype, mostly due to the lack of ICT capability and capacity. Some users of full virtual
environments also consider the experience unsettling (Fig. 1).
In recent years, AR has undertaken extensive growth in all aspects of computing.
This includes a wide variety of mobile devices. Mobile AR (Azuma 1997; Maiti et al.
2013; Fazli et al. 2009; Ester et al. 1996) systems have helped to promote the technology
through a series of convenient applications such as the addition to Google’s StreetView
called StreetLearn (Wagner and Schmalstieg 2003). Applications such as StreetLearn,
demonstrate the technology, helping to further promote research and development.
The majority of AR operations are performed for the visual sense. As the field has
progressed, additional senses, such as tactile feedback, were incorporated. As
such, immersion into the augmented environment has become easier to implement.
Azuma (1997) reported works of some of the first see-through head-mounted devices,
capable of viewing the current environment, overlaid with computer generated objects.
Augmented Reality has also expanded into education, helping users to visualize 3D
objects in real-time (Maiti et al. 2016). Using desktop or handheld devices, a magic-lens
effect is achieved where coded images within books, or on cards are detected, interpreted,
and rendered into complete 3D representations of topical items.
Remote Access Laboratories and Virtual Reality originally overlapped in engi‐
neering education and engineering industry fields. Remote instrument virtualization
provided a means to operate electronic test equipment over local networks. By the
1990s, simple AR started to appear (Milgram and Kishino 1994) as a result of improved
computing resources. This form of AR, in experimentation, only supported basic sensory
data. Sensor data was displayed on virtual instrumentation while watching videos of
the experiment.
Current AR systems in RALs have limited abilities. Very few works combine the
two technologies. Combined systems focus on visual enhancements, with some works
on the other senses. Works cover some practical implementations such as taxonomies
between hands-on and remote experimentation (Maiti et al. 2013), and more computer/
electronic test-bed (Fazli et al. 2009) systems. Many works have demonstrated the tech‐
nology through elaborate configurations. Engaging students with the technology has
produced systems such as AR racing car games (Grimson et al. 1998) and 3D
modelling systems, all promoting the technology’s capabilities.
Hence visual methods are typically used with AR, which rely on Computer Vision.
These CV models, commonly used in industry to capture objects within video scenes,
are resource intensive to the extent that they can force significant portions of the processing
to occur off-line. Understanding the resource requirements for both AR and RAL struc‐
tures is necessary to develop effective sensory feedback systems suitable for implemen‐
tation. Basic estimates about RAL system resource limitations exist (Kist et al. 2014),
but little work has investigated the impact of the two technologies in combination.

Fig. 1. Virtual continuum. Full reality on the left and a full virtual environment on the right.

3 AR Resources

The majority of current AR works focuses on the visual sense, while CV techniques for
object detection and tracking are employed to understand the scene in the live video
stream. This section will explain the various CV models which can be used with AR
systems to detect and track objects in the video stream. Resource monitoring and
measurements are presented to demonstrate the additional ICT burden imposed by AR.

3.1 Background
Any system implementing AR processes has to ensure that the users of those systems
are able to engage and interact in a timely manner. The sense of immersion within AR
applications soon fails if registration, tracking, and timing errors interfere with the system
processes. Remote Access Laboratories already require a variety of hardware, computer,
software, and networking resources. If the additional resource load
imposed by the AR processes is not accounted for, the RAL system could degrade. Consequently,
AR resource usage must be determined and minimized to maintain effective synchro‐
nization and immersion.

Augmented Reality interprets video scenes using two data modes: remote data sets
and local data sets. The use of local data sets is demonstrated in AR systems using
fiducial markers, which render 3D models (Grimson et al. 1998) when the marker is
detected. Remote data sets are impacted by the network resources available. Desktop
and mobile AR systems have had to delegate the object detection and graphic processes
to separate systems (Wagner and Schmalstieg 2003) so as to cope with the computing
resource demands. This delegation reduces the local resources needed to render the
virtual objects that interact with the current environment.
Developing visual AR systems hinges heavily on CV models. Previous CV works
on video streams provide comprehensive object identification and tracking. Unfortu‐
nately, CV models rely on off-line or post processing of the video stream. Very few
systems provide live or real-time interpretation of the video stream because of the heavy
load on ICT resources.
Computer Vision techniques are expected to extract physical objects from
multidimensional datasets (e.g. a video frame) with the same level of competency as the
human eye and brain. Within video streams, CV systems must attempt to compensate
for shadows, lighting variations, a moving background (such as trees moving in the
wind) and periodic object movements.
To help understand the variations in each video scene, the CV systems require
extensive training. Statistical analysis (Fazli et al. 2009), clustering (Ester et al. 1996)
and frame subtraction systems (Stauffer and Grimson 2000) require considerable
processing to handle data sets consisting of 20–30 frames per second, with a minimal
resolution of 76,800 pixels per frame (typical 240 × 320 frame size). This equates to
307.2 kB of data for a 32 bit RGB encoded frame. A total of 1,536,000 pixels, or 6,144 kB
per second must be processed, which is beyond the capabilities of all but dedicated
hardware. Compounding the problem is the quality of the network services. As the
connection quality deteriorates, the number of frames available for processing also
diminishes. Good network connections provide smooth transitions between frames, but
increase the resource consumption to process those frames. These two strategic resources
counter-balance effective immersion of the AR experience.
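The quoted figures follow directly from the frame geometry and can be re-derived in a few lines:

```python
# Frame geometry for a typical 240 x 320 stream with 32-bit RGB encoding.
width, height = 320, 240
fps = 20  # lower end of the 20-30 frames-per-second range

pixels_per_frame = width * height              # 76,800 pixels
bytes_per_frame = pixels_per_frame * 4         # 307,200 B = 307.2 kB
pixels_per_second = pixels_per_frame * fps     # 1,536,000 pixels
kb_per_second = bytes_per_frame * fps / 1000   # 6,144 kB per second
print(pixels_per_frame, bytes_per_frame, pixels_per_second, kb_per_second)
```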

3.2 Computer Vision Model Testing

For an AR system which engages a user’s visual sense, three CV models have been
tested to determine their resource usage. Testing the resource baseline needs for the three
CV models involves repeated analysis of the same 213 frames of an AVI video file
from an experimental rig under operation. The software was written using Microsoft’s
C# (.NET Framework 4.0). Testing occurred on a Windows 8.1 platform with an Intel
Core i7-4790 CPU @ 3.60 GHz and 8.0 GB of RAM.
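The measurement approach itself is language-neutral; a sketch in Python rather than the authors' C#, with `process` standing in for any of the three CV models under test:

```python
import time

def measure_per_frame(frames, process):
    """Run `process` on each frame and return wall-clock processing
    times in milliseconds, one entry per frame (as plotted in the
    per-frame figures)."""
    times_ms = []
    for frame in frames:
        start = time.perf_counter()
        process(frame)
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return times_ms
```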

Statistical Models
Statistical analysis of a pixel relies heavily on the historical data for the pixel. The
Gaussian mixture model

p(x_N) = Σ_{j=1}^{K} η(x_N; θ_j)    (1)

calculates the probability of the pixel being a foreground or background object through
its distribution of preceding frames. Cataloguing a single pixel via its distribution adds
to the overall processing requirements, and accumulates to significant levels. Testing
involved storing pixel arrays N deep (N = 20). The previous 20 values for a coordinate
are used as a Gaussian model to derive the status of the pixel. The current parameters
of the pixel are compared to the Gaussian parameters to ascertain if the pixel status has
changed. Time and processing costs involve statistical calculations for every pixel on
every frame. Figure 2 shows the processing times for each frame, using a normal
distribution. Formula (1) was applied to each pixel, with no weighting of the
distribution, so as to keep the processing requirements to a minimum.
Figure 2 shows a reasonably consistent period of approximately 140 ms for each
frame, which is much larger than the required 33–50 ms frame period for standard video feeds.
This demonstrates that processing live video in the current configuration will allow only
every third or fourth frame to be processed.
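A hedged NumPy sketch of this per-pixel test, simplified to a single unweighted Gaussian per pixel over the last N frames (as described above) rather than a full K-component Stauffer–Grimson mixture:

```python
import numpy as np

def classify_foreground(history, frame, k=2.5):
    """history: (N, H, W) array of recent grayscale frames;
    frame: (H, W) current frame. A pixel is flagged as foreground
    when it lies more than k standard deviations from the mean of
    its own history."""
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-6  # guard against zero variance
    return np.abs(frame - mean) > k * std
```

The boolean mask this returns is recomputed per frame, which is exactly the per-pixel statistical cost the timing figures measure.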

Fig. 2. Statistical (GMM) frame analysis and frame subtraction frame analysis: frame processing
time.

Frame Subtraction Models

Frame Subtraction involves comparing every pixel in the current frame with the corre‐
sponding pixel from the reference frame (F_result = F_Ref − F_i). Using the RGB color
channels as the data within the reference frame, individual pixel colors are subtracted
from each other. If the resultant pixel’s delta-color does not meet a threshold, then it is
returned as a black pixel for any location that has not changed from the reference frame.
In video streams, the term not changed is not absolute. For each pixel between each
272 M. Smith et al.

frame, the color values may fluctuate for many reasons, such as slight ambient lighting
changes, shadows, reflective surfaces, and the camera's internal CCD variations.
For this test, simple raster-like processing measures the difference between pixels at
the same coordinate (x, y) in the current frame and the previous frame. A threshold
of 10% was used: each pixel has a maximum value of 255 per color channel, so if the
difference between pixels was less than 25, it was set to white; otherwise it was set to black.
Frame subtraction testing on the video file produced better results than the statistical
and DBSCAN models, as shown in Figs. 2 and 3. A consistent 110 ms processing time
occurred for each frame. These results are still far from ideal, with every second or third
frame needing to be ignored if implemented in an AR environment.
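A minimal sketch of this per-pixel subtraction, in Python/NumPy rather than the original C#; the threshold of 25 follows the 10% figure above, while the any-channel rule and the array shapes are assumptions:

```python
import numpy as np

def subtract_frames(ref, cur, threshold=25):
    """Per-pixel frame subtraction with a 10% threshold
    (25 levels out of the 255 maximum per RGB channel).

    ref, cur: (H, W, 3) uint8 frames.
    Returns a boolean (H, W) mask, True = changed pixel.
    """
    # int16 avoids uint8 wraparound when subtracting.
    delta = np.abs(cur.astype(np.int16) - ref.astype(np.int16))
    # A pixel counts as changed if any channel moved by >= threshold
    # (the per-channel rule here is an assumption).
    return (delta >= threshold).any(axis=2)

ref = np.full((2, 2, 3), 100, dtype=np.uint8)     # reference frame
cur = ref.copy()
cur[0, 0] = (160, 100, 100)   # real change: one channel moved by 60
cur[0, 1] = (110, 110, 110)   # camera flicker, below the threshold
mask = subtract_frames(ref, cur)
```

The thresholding is what absorbs the lighting and CCD fluctuations discussed above: small deltas are treated as "not changed".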

[Plot: DBSCAN processing time (ms) versus frame number (1–211)]

Fig. 3. DBSCAN frame analysis: frame processing time

Clustering Models
Clustering methods do not take a pixel in isolation, but must analyze all unclassified
pixels within its neighborhood. All pixels within a radius (depending on the clustering
model) must be verified for their suitability as members of the current group.
Additionally, a pixel must be directly density-reachable (Ester et al. 1996) from the core
pixels before it can be considered a member of the cluster. While the task is not
technically challenging, the iterative nature of an O(n log n) algorithm consumes
precious resource time.
A pixel, under DBSCAN, can be a member of only one cluster. For each frame, the
test involves checking each pixel's (P_x) neighborhood, scanning a radius of pixels
outwards. Unclassified pixels within the region are tested and, if suitable, marked as part
of P_x's cluster. Processing costs increase as the number of clusters, the radius of the
neighborhood, and the number of reachable pixels increase.
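The neighborhood expansion described above can be illustrated with a minimal DBSCAN over foreground pixel coordinates. This is a simplified sketch, not the tested implementation; the Chebyshev-distance neighborhood and the parameter values are assumptions:

```python
from collections import deque

def dbscan_pixels(points, eps=1, min_pts=3):
    """Minimal DBSCAN over foreground pixel coordinates.

    points:  set of (x, y) foreground pixels
    eps:     neighborhood radius in pixels (Chebyshev distance)
    min_pts: neighbors required for a core pixel
    Returns {pixel: cluster_id}, with -1 marking noise.
    """
    def neighbors(p):
        x, y = p
        return [(x + dx, y + dy)
                for dx in range(-eps, eps + 1)
                for dy in range(-eps, eps + 1)
                if (dx or dy) and (x + dx, y + dy) in points]

    labels, cid = {}, 0
    for p in points:
        if p in labels:
            continue
        if len(neighbors(p)) < min_pts:
            labels[p] = -1                   # noise (possibly a border pixel)
            continue
        labels[p] = cid                      # new cluster from a core pixel
        queue = deque(neighbors(p))
        while queue:                         # grow the density-reachable set
            q = queue.popleft()
            if labels.get(q, -1) == -1:      # unvisited or provisional noise
                labels[q] = cid
                nq = neighbors(q)
                if len(nq) >= min_pts:       # q is itself core: keep expanding
                    queue.extend(n for n in nq if labels.get(n, -1) == -1)
        cid += 1
    return labels

blob = {(x, y) for x in range(3) for y in range(3)}   # 3x3 foreground blob
labels = dbscan_pixels(blob | {(10, 10)})             # plus one noise pixel
```

The nested neighborhood scans inside the expansion loop are the iterative cost that the timing results below make visible.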
Performing a DBSCAN pass on each of the frames that have already undergone
processing (such as frame subtraction or statistical analysis) adds significant delays.
Figure 4 shows a relatively consistent frame processing period until frame 149. At this
point, the video scene has an increase in object motion, and DBSCAN processing time
increases as a result. This delay reaches totally unacceptable levels.

[Plot: frame processing time (ms) versus frame number (1–211)]

Fig. 4. Frame subtraction frame analysis: frame processing time

Offline video processing systems are tuned for graphical tasks and exceed the
capabilities of the generic desktop systems available to remote laboratory users. The
summary of results shown in Table 1 tallies the resource usage for the framework
models tested. The Table 1 results consist only of the CV model attributes, and do not
include the user interaction functions, sensor data, or other RAL resource needs.

Table 1. Frame analysis performance summary

                     Average frame        Average memory   Average process usage,
                     process time (ms)    usage (kB)       single CPU (%)
Frame subtraction    110                  37333            97
Clustering           3305                 36995            100
Statistical (GMM)    136                  86629            99

The vision systems within a RAL environment can consist of single or multiple
cameras. An AR implementation must be expected to process the live video stream(s),
interpret the scene, accept sensor data, render video overlays, and retransmit data back
to the remote laboratory. Users' interaction with, and satisfaction in, the AR RAL
configuration will depend wholly on the consumption of ICT resources during this
process, and on whether the management of those resources can ensure user immersion
in the experiment or practical session.

4 AR Improvements

The network topologies of RALs have been investigated (Maiti et al. 2013) to an
extent that they are well understood, and they provide the offset for all timing techniques
and calculations discussed in this section. Network delays must be constantly and
carefully monitored and controlled so that the accumulation of all delays remains within
acceptable levels for AR RAL environments.
Augmented Reality resource consumption revolves around interpreting the scene
between the frames of the live video stream. Any technique employed to improve AR
responsiveness must assume a minimum network latency.
Older military visual systems, using cathode ray tubes, would hijack the interlacing
scheme to interweave tactical information into the image. In today's environment,
image overlays are the primary method of incorporating post-generation images into
the stream. Taking a leaf from the frame-interlacing techniques of those military systems,
image overlays can be given the opportunity to skip the current frame, providing
additional time for any intensive processing. Network latency times of 25 ms to 50 ms
would require that every second frame be skipped, as a minimum resource necessity.
Changes within the scene between frames vary at different rates, depending on the type
of experiment/exercise being performed. Fast-changing scenes may not find frame
skipping an acceptable solution, while reasonably static scenes could be updated at
much greater intervals.
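The frame-skipping budget can be sketched as follows; `frames_to_skip` is a hypothetical helper, with the 110 ms processing time taken from the frame subtraction measurements and the 25 ms latency from the range quoted above:

```python
import math

def frames_to_skip(frame_period_ms, process_ms, latency_ms):
    """Number of incoming frames to drop after each processed frame,
    so that processing plus network latency fits the frame budget."""
    consumed = process_ms + latency_ms
    # Frame periods consumed per processed frame, minus the one
    # frame that is actually analyzed and displayed.
    return max(0, math.ceil(consumed / frame_period_ms) - 1)

# 20 fps video (50 ms period), 110 ms frame subtraction, 25 ms latency:
print(frames_to_skip(50, 110, 25))   # 2 -> every third frame processed
```

With these figures the skip factor agrees with the "every second or third frame" estimate for frame subtraction above; a fast-changing scene would make such a skip visible to the user.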
Rendering virtual objects is also dependent on the timing of sensor data received
from the experimental rig. Reception of the live video stream and the sensor data (also
streamed through the same link) complicates the synchronization of rendered virtual
objects. Ensuring that the live action within the video stream matches the sensor data
adds to the processing overheads. With less active video scenes, it is possible to
pause or limit the continual analysis of each frame within the stream (Maiti
et al. 2016).
Real-time statistical analysis of a remote laboratory's full video stream is time-
consuming because of the number of pixels that must be processed. This bottleneck
can be reduced through the following techniques.

4.1 Windowing

Within every video scene, there are regions that are of no interest to the experiment, or
have no function. Within the gear experiment shown below (Grimson et al. 1998),
separate regions are of interest at specific times. For example, monitoring only the top
half of an experiment, or the center region of the view, is probably sufficient for some
demonstrations. This limits the processing needs of augmented systems to a much
smaller subset of the data.
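Windowing amounts to cropping each frame to a region of interest before any CV pass runs; a minimal sketch (the frame size and ROI coordinates are illustrative):

```python
import numpy as np

def crop_roi(frame, roi):
    """Restrict CV processing to a region of interest.

    frame: (H, W, C) array; roi: (x0, y0, x1, y1) in pixels.
    Returns a NumPy view, so no pixel data is copied.
    """
    x0, y0, x1, y1 = roi
    return frame[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # full camera frame
top_half = crop_roi(frame, (0, 0, 640, 240))      # monitor top half only
print(top_half.shape)                             # (240, 640, 3)
```

Because per-frame cost scales with pixel count, halving the monitored region roughly halves the statistical or subtraction workload.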

4.2 Training

CV techniques based on statistical modelling all benefit from training (Grimson et al.
1998), where each pixel's color distribution is calculated by preloading, or by training
from existing video data. The Gaussian mixture model defines the probability distribution
of a pixel. The number of distributions is a factor of the available memory (Maiti
et al. 2016) and processor power. As additional frames are received into the CV
processes, the distributions are updated. If the Gaussian distributions are computed on
the stream before the experiment begins, then this training provides a baseline for
comparison during actual rig operations. For every new frame, the pixel color values
are checked against the distributions. It is common to use a standard deviation of 2.5
(Ester et al. 1996) to determine the threshold for a pixel, marking it as either a
background or foreground object. Foreground objects are the detected objects, which are
tracked. Training will reduce runtime processing costs.
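The training pass and the 2.5-standard-deviation test can be sketched as follows. This is a simplified single-Gaussian version with an assumed exponential-forgetting update, not the full mixture model:

```python
import numpy as np

def train_background(frames):
    """Pre-experiment training: fit a per-pixel Gaussian baseline
    from video recorded before the rig starts operating."""
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=0), frames.std(axis=0) + 1e-6

def update_model(mu, sigma, frame, alpha=0.05):
    """Online update as new frames arrive (exponential forgetting;
    the learning rate alpha is an assumed value)."""
    var = (1 - alpha) * sigma**2 + alpha * (frame - mu) ** 2
    mu = (1 - alpha) * mu + alpha * frame
    return mu, np.sqrt(var)

def is_foreground(mu, sigma, frame, k=2.5):
    """Flag pixels deviating more than 2.5 standard deviations."""
    return np.abs(frame - mu) > k * sigma

rng = np.random.default_rng(1)
mu, sigma = train_background(rng.normal(100, 1, size=(50, 2, 2)))
frame = mu.copy()
frame[0, 0] += 10                     # simulate a foreground object
mask = is_foreground(mu, sigma, frame)
mu2, sigma2 = update_model(mu, sigma, frame)
```

Because the baseline is fitted before the experiment begins, the runtime cost per frame reduces to a comparison and an incremental update rather than a full refit.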

4.3 Client/Server
Workstations at remote locations vary in capability, which limits the base level of
resources that can be assumed for effective AR. Placing hardware capable of performing the
intensive graphical processing at the host ensures that all clients can receive the
full AR immersion.
Clients receive a video stream that already has the image overlays included. Data
from sensors is also processed at the host. Clients receive the full and complete video
stream, including all feedback data, plus the interface to operate the various controls and
devices. Synchronization of screen transactions and sensor readings is simpler, with
only the user interaction requiring alignment with the video scene.
User interaction within a remote laboratory experimental rig consists mostly of
controlling the various equipment. User actions trigger small data packets to the server.
Smaller data requirements towards the server impose smaller network loads. Consequently,
network delays from user interactions should be minimized and should impact
only modestly on the users' immersion level. Processing user input at the server
allows the server to supply the complete rendered scene back to the user. With any
reasonable network access, user input should undergo minimal delay before the
resultant feedback images.

5 Conclusion

Augmented Reality systems capable of integrating with RALs have many hurdles to
overcome to provide services across the wide range of practical and experimental
environments. Efficient utilization of ICT resources is paramount for comprehensive and
effective immersion within the remote environment. Augmented Reality for RALs relies
on a responsive network conduit and on synchronization between the visual information
and user interactions. All network delays must be accounted for when determining AR
configurations. With reduced capabilities in any of these pathways, the immersive
effect of AR becomes a liability rather than a benefit.
Incorporating AR functionality into RALs involves consuming additional ICT
resources. A computer system undertaking the AR processes for a remote laboratory
will require additional memory, processor, and network resources. Executing the CV
algorithms used within the AR processes provides a metric of the baseline resource
usage. Testing the three major computer vision methods - clustering, frame subtraction
and statistical - with actual remote experiment video streams demonstrated the
immediate shortcomings of the technology.
Object detection and tracking are both essential functions for AR, and this paper has
identified directions for future work to mitigate these limitations. Limiting the region
of interest within the video image can provide significant gains in overall
responsiveness and in reduced resource usage. Providing some training to the vision
systems also benefits responsiveness, but requires additional management of the
experimental rig. Ensuring that the users of a remotely controlled experiment have
sufficient resources can be addressed by having the major computer vision processing
done at the host location by hardware better suited to the task.
These resource management plans will be further tested to ascertain their capabilities
and effectiveness for AR systems within a RAL environment.


References

Benetazzo, L., Bertocco, M., Ferraris, F., Ferrero, A., Offelli, C., Parvis, M., Piuri, V.: A web-
based distributed virtual educational laboratory. IEEE Trans. Instrum. Meas. 49(2), 349–356
(2000)
Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst.
77(12), 1321–1329 (1994)
Kist, A.A., Maiti, A., Maxwell, A.D., Orwin, L., Midgley, W., Noble, K., Ting, W.: Overlay
network architectures for peer-to-peer remote access laboratories. In: 2014 11th International
Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 274–280. IEEE,
February 2014
Overstreet, J.W., Tzes, A.: Internet-based client/server virtual instrument designs for real-time
remote-access control engineering laboratory. In: Proceedings of the 1999 American Control
Conference, vol. 2, pp. 1472–1476. IEEE, June 1999
Fisher, E., Jensen, C.W.: PET and the IEEE 488 Bus (GPIB). OSBORNE/McGraw-Hill, Berkeley
Gustavsson, I.: A remote access laboratory for electrical circuit experiments. Int. J. Eng. 19, 409–
419 (2003)
Azuma, R.T.: A survey of augmented reality. Presence Teleoper. Virtual Environ. 6(4), 355–385
(1997)
Wagner, D., Schmalstieg, D.: First steps towards handheld augmented reality. In: ISWC, vol. 3,
p. 127, October 2003
Maiti, A., Kist, A.A., Maxwell, A.D.: Estimation of round trip time in distributed real time system
architectures. In: Telecommunication Networks and Applications Conference (ATNAC), 2013
Australasian, pp. 57–62. IEEE, November 2013
Fazli, S., Pour, H.M., Bouzari, H.: A novel GMM-based motion segmentation method for complex
background. In: 2009 5th IEEE GCC Conference & Exhibition, pp. 1–5. IEEE, March 2009
Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in
large spatial databases with noise. In: KDD, vol. 96, no. 34, pp. 226–231, August 1996
Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking. IEEE Trans.
Pattern Anal. Mach. Intell. 22(8), 747–757 (2000)

Maiti, A., Kist, A., Smith, M.: Key aspects of integrating augmented reality tools into peer-to-
peer remote laboratory user interfaces. In: 2016 13th International Conference on Remote
Engineering and Virtual Instrumentation (REV), pp. 16–23. IEEE, February 2016
Grimson, W.E.L., Stauffer, C., Romano, R., Lee, L.: Using adaptive tracking to classify and
monitor activities in a site. In: Proceedings of 1998 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, 1998, pp. 22–29. IEEE, June 1998
Integrating a Wireless Power Transfer System
into Online Laboratory: Example
with NCSLab

Zhongcheng Lei, Wenshan Hu(&), Hong Zhou, and Weilong Zhang

Department of Automation, School of Power and Mechanical Engineering,

Wuhan University, Wuhan, China

Abstract. Wireless Power Transfer (WPT) technology is able to transmit
electric power from the Tx side to the Rx side without any electrical connection,
realizing electrical isolation and breaking through the limitations of electric
wires. Traditionally, finding the best working point of a WPT system is
difficult, as there are a great number of coupled parameters to tune. Besides, the
experimenter has to be on site to carry out the experiment, with limitations such
as time, location, safety, and equipment sharing. In this paper, a two-coil-
structure WPT system is integrated into the web-based online laboratory NCSLab
using a controller and a DAQ (data acquisition) card as well as a user-defined
algorithm. With the latest technologies brought in, NCSLab is completely
plug-in free for experimentation on the WPT system. The optimum frequency
can easily be obtained by setting the system to the sweep-frequency mode using
the remote control platform. The remote control platform NCSLab addresses the
safety and test-rig sharing issues by offering the experimenter the flexibility to
carry out WPT experiments anytime, anywhere, as long as the Internet is available. The
integration of the WPT system into NCSLab also provides teachers with a powerful
tool for classroom demonstration of state-of-the-art technology.

Keywords: Wireless Power Transfer (WPT) · Remote control · Data
acquisition · State-of-the-art technology sharing

1 Introduction

Wireless Power Transfer (WPT) technology has drawn growing attention in recent
years. With electric energy converted to a magnetic field and back to an electric field,
the limitations of electric wires no longer apply. In [1], a far-field
technique using propagating electromagnetic waves that transfer energy the same way
as radios transmit signals is presented. In contrast to the far-field technique, M. Soljacic
[2] introduced a near-field (inductive coupling) technique operating at distances of less
than a wavelength of the signal being transmitted. As the near-field technique requires
a relatively low frequency compared with the far-field technique, it has attracted much
research attention since it was proposed [3–6].

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_26
Integrating a Wireless Power Transfer System 279

The WPT systems designed at Wuhan University [7–9] use the inductive coupling technique.
The WPT systems in [8, 9], which could potentially be used for high-voltage power cable
monitoring, were introduced first. All of the systems above adopt a simple two-coil structure
that is easy to implement, rather than a three-coil [10, 11] or even multi-coil structure [12, 13].
Regarding conventional designs, the tuning of parameters has been a problem.
Traditionally, finding the best working point of a WPT system is difficult, as there are
a great number of coupled parameters to tune. What is more, the experimenter of the
WPT system has to be on site to carry out the experiment, with limitations on time and
location as well as safety issues.
State-of-the-art technology is able to keep people informed of the latest trends and
hotspots in the related field. However, conventionally it is not easy to share the latest
technology with students, either because the equipment is cumbersome or because the
devices need careful attention. For a WPT system with a complicated structure, and even
high voltage generated on the Tx and Rx sides while energized, classroom demonstration
is difficult.
The complicated implementation of the physical system makes it impossible for every
university and institution to build its own WPT system. Thus, it is urgent to address the
sharing issue and to provide open access for experimentation and research, especially for
the education of state-of-the-art technology.
The tuning, education, and sharing issues have brought out the idea
of a Remote Control WPT (RCWPT) system based on networked control [14, 15], which
is a research hotspot. There are already a great many online laboratories which can provide
remote control of physical equipment. For example, in [16] the remote control of
electric and electronic instruments is introduced in NetLab, GOLDi-labs in [17] allows
users to remotely control a 3-axis portal, and in [18] a remote inclined-plane laboratory for
displacement measurements versus time is presented.
NCSLab (Networked Control System Laboratory) is a hybrid online laboratory
which provides both physical and virtual test rigs for remote experimentation.
Previously, only physical and virtual test rigs in control engineering were set up in NCSLab,
for example a fan speed control system [19], a dual tank [20], and a DC motor [21].
In all, there are 20 virtual rigs and six physical test rigs in NCSLab. Moreover, as
one of the advantages of NCSLab, test rigs in geographically diverse locations can be
integrated into it [22]. Theoretically, all test rigs that match the interface of
NCSLab can be successfully deployed.
However, it remains to be seen whether NCSLab can be used to explore
the WPT system; for example, whether it is able to remotely find the efficiency,
best working point, and optimum frequency of the WPT system. Given that NCSLab is
a powerful platform into which new types of test rigs can easily be deployed
through the pre-customized interface, a WPT system could potentially be integrated
into NCSLab as well.
The WPT system is a physical test rig containing multiple electric and electronic
parts that all need careful attention. As various widgets such as textboxes, charts and
280 Z. Lei et al.

gauges are integrated into NCSLab, it provides the convenience of remotely monitoring
and tuning parameters in a visual mode. The WPT system in this paper uses a simple
two-coil structure, like the other systems at Wuhan University.
The rest of the paper is organized as follows. In Sect. 2, the NCSLab architecture is
presented, and two of the specific features of NCSLab are also introduced in this
part. Section 3 describes the principle of the two-coil WPT system adopted in this paper.
In Sect. 4, the integration of the WPT system into NCSLab is explored, in which the
controller, the USB data acquisition card, and the control algorithm are discussed in detail.
Section 5 gives an example of a well-configured monitoring interface of the WPT
system in NCSLab. The paper is concluded in Sect. 6.

2 NCSLab Architecture

Evolved over more than 10 years and recently upgraded, NCSLab provides full
access (24/7) with HTML5 technology fitted in.
Apart from the common features of remote laboratories [23, 24],
NCSLab has its own specific features, two of which are introduced below.
1. Free from plug-ins
A web-based online laboratory offers convenience without any software installation.
However, the potential web crashes and updating issues caused by plug-ins remain to be
addressed. The finalization of HTML5 provides an alternative to other 3D engines,
which need plug-ins for rendering. As the previous Flash 3D engine has been replaced by
HTML5 technology in NCSLab [25, 26], and with more and more web browsers
supporting HTML5, the experimenter can conduct various experiments in NCSLab in the
web browser, free from plug-ins.
2. 3D virtual roaming
Apart from the tree structure (laboratory - sub-laboratory - test rig) of NCSLab,
virtual roaming, which can be accessed in parallel, is also provided for the
experimenter. As in a physical laboratory, the experimenter can go to the virtual
laboratory building with keyboard and mouse. Several sub-laboratory rooms will
appear when walking into the main building. If the experimenter chooses one of the
sub-laboratories and walks in, a series of virtual experimental equipment will lie on
the virtual desks in front of the experimenter. Each virtual test rig is ready for
experimentation once it is "picked up" by the experimenter.
Figure 1 shows the current architecture of NCSLab at Wuhan University.
Researchers from all over the world can access the system to carry out experiments
with a registered username and password, as all the test rigs are open for experimentation.
Test rigs in control engineering, as well as the WPT system in electric and electronic
engineering, are integrated into NCSLab.

Fig. 1. NCSLab architecture

3 Principle of a Two-Coil WPT System

To provide an RCWPT system, the key issue is to find an appropriate parameter for
control using inductive coupling. Another problem to be addressed is to offer
observable results for monitoring. Therefore, a simple two-coil-structure WPT system is
the best option.
The circuit model of the two-coil WPT system using magnetically coupled resonators is
shown in Fig. 2, in which the Tx coil and the Rx coil share the same resonant
frequency. As can be seen in Fig. 2, an AC voltage source drives an RLC branch on the Tx
side, which is able to create a high-frequency magnetic field. Once the
Tx coil is energized at the resonant frequency, the Rx coil can recover the energy from

[Circuit diagram: AC source driving R_P1, L1, C1 on the Tx side, coupled through an iron core to L2, C2, R_P2 on the Rx side]
Fig. 2. Circuit model of two-coil WPT system

the field, converted from the electric power transmitted through the magnetic field between
the two coils. Finally, the Rx coil can drive a load bulb for observation.
Using Kirchhoff's voltage law (KVL), the two-coil model depicted in Fig. 2 can be
analyzed as

I1 (R1 + jωL1 + 1/(jωC1)) + jωM I2 = Vs    (1)

I2 (R2 + jωL2 + 1/(jωC2)) + jωM I1 = 0    (2)

where R1 = R_P1, R2 = R_P2, and M is the mutual inductance between the Tx and Rx
coils. The coupling coefficient k and the mutual inductance M are related by
M = k√(L1 L2).

To simplify the two circuit equations (1) and (2), Z1 and Z2 are defined as the
impedances of the two circuit loops:

Z1 = R1 + jωL1 + 1/(jωC1),    Z2 = R2 + jωL2 + 1/(jωC2)

The two KVL equations (1) and (2) can be solved as

I1 = Z2 Vs / (Z1 Z2 + ω²M²),    I2 = −jωM Vs / (Z1 Z2 + ω²M²)    (3)

4 Implementation of Integrating a WPT System into NCSLab

A WPT system is able to transmit electric power over a reasonable distance. To
achieve wireless power transfer, a great many electronic devices are needed for the
practical implementation. Figure 3 demonstrates the diagram of the practical
implementation. On the Tx side, an H-bridge high-frequency inverter is used to convert DC to
AC. On the Rx side, a high-speed bridge rectifier made of Schottky diodes is used to
rectify AC to DC.

[Diagram: DC source Vd feeding an H-bridge inverter (S1–S4, D1–D4) that drives the Tx loop (R_P1, L1, C1); the coupled Rx loop (R_P2, L2, C2) feeds a rectifier and load bulb; a USB DAQ card handles gate-signal generation (to S1/S4 and S2/S3) and current/voltage measurement]
Fig. 3. Diagram of practical implementation

Figure 4 shows the RCWPT system in the physical laboratory; it can be seen that
there is no electrical connection between the Tx and Rx coils. The physical system can
certainly be used for hands-on WPT experiments on site, with the aforementioned
limitations. After integration, the RCWPT system, called Wireless Power Transfer in
NCSLab, can be accessed in the Complicated System sub-laboratory for remote
experimentation.
Due to the relocation of the laboratory, there is not enough space for the WPT
system. Thus, the current WPT system is set up at the corner of the laboratory. For the
sake of legibility, Fig. 4 uses a picture taken in May 2016, which shows
exactly the same system as the current one except for the distance between the two
coils. The location of the system demonstrates the advantage of the RCWPT for saving space.
Apart from the basic electronic components, in order to integrate the WPT system into
NCSLab to build an RCWPT system, a controller, a USB DAQ (Data Acquisition) card, and an
algorithm are the three key factors.

[Photograph: Rx coil, Tx coil, H-bridge high-frequency inverter, rectifier, and DAQ card of the RCWPT system]

Fig. 4. Remote controlled WPT system (taken in May, 2016 in the old laboratory)

4.1 Windows-Based Controller

The controller for the RCWPT system is actually a Windows-based mini PC running
the communication and camera-supporting programs all the time. Figure 5 demonstrates the
controller, based on a mini PC bar; its USB interface board is mainly used.
The camera API runs to support the 24/7 monitoring of the system. For the
RCWPT system, two cameras are connected to the controller. One camera covers the
overall system. The other covers part of the system - more precisely, the monitoring of
the bulb, ammeter, and voltmeter. The ammeter is for the measurement of the output
Fig. 5. Controller based on mini PC bar

current, and the voltmeter measures the output voltage. The experimenter is able to
watch the monitoring result in the web page, in which the brightness of the bulb shows
the output power of the system.
Traditionally, for the other WPT systems at Wuhan University, a direct digital
synthesizer (DDS) module controlled by an MCU (microcontroller unit) is adopted to
generate the accurate square-wave exciting signal [9]. Using the keyboard on the MCU
controller, the output frequency can be tuned from 0.1–1 MHz with a step size of
10 Hz. To achieve remote control of the WPT system, the controller is connected to the
frequency generator. Parameters such as the exciting frequency, sweep frequency, and
sweep amplitude can be remotely reset as long as they can be found from the control
algorithm.

4.2 USB DAQ Card

Another functionality of the controller is communication with the USB DAQ card.
The USB DAQ card is used for collecting signals such as the current and voltage on both
the Tx and Rx sides. It should be noted that the collected current and voltage are
measured from the DC side of both the Tx and Rx circuits, as can be seen in Fig. 3.
Using the collected current and voltage, the input power and output power can be
calculated, and thus the transfer efficiency can be obtained.
The DAQ card also monitors the commands between the test rig and the server.
Commands such as algorithm uploading and downloading, as well as parameter tuning,
are under its surveillance.
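The efficiency computation from the DC-side DAQ samples can be sketched as follows; the output-side readings below follow the resonance example reported in the results section (5.782 V, 1.172 A), while the input-side readings are assumed for illustration:

```python
def transfer_efficiency(v_in, i_in, v_out, i_out):
    """DC-side transfer efficiency from the DAQ samples:
    eta = P_out / P_in, with P = V * I on each side."""
    return (v_out * i_out) / (v_in * i_in)

# Hypothetical input readings; output values as reported at resonance:
eta = transfer_efficiency(24.0, 0.5, 5.782, 1.172)
print(round(eta, 3))   # 0.565
```

Because both sides are sampled on the DC rails, the inverter and rectifier losses are included in the figure, which is what makes it useful as an overall working-point indicator.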

4.3 Sweep-Frequency Algorithm

The sweep-frequency algorithm is designed in MATLAB/Simulink and built with
Real-Time Workshop (RTW). Figure 6 shows the sweep-frequency algorithm in detail.
The Setting out and Feedback blocks are two user-defined functions concerning
sweep-frequency setting and signal retrieval. After the design and compilation of the
algorithm, it is uploaded to the server through the web interface. The program running in the
controller can communicate with the algorithm.

Fig. 6. Sweep-frequency algorithm

The parameters in the algorithm, such as the frequency in the "Hz" block and the sweep
frequency and amplitude in the "Sweep Setting" block, can be found and tuned in the tree
structure of the monitoring and control interface of NCSLab, and signals such as the input
current and voltage and the output current and voltage can be monitored using the various
widgets offered by NCSLab.

5 Monitoring and Control of the WPT System in NCSLab

A WPT system can be integrated into NCSLab using the hardware and algorithm
mentioned in Sect. 4. The remote control platform NCSLab adopts a Web structure, which
means that experimenters do not have to install any client applications. With the latest
technologies brought in, the platform is completely plug-in free, so the experimenter
just has to register and log in to conduct the experiment on the RCWPT system.
As the WPT system is intended for remote control rather than power delivery, the power
transfer efficiency and transferred power are not the priority in this paper; thus, the
RCWPT system is built without precise calculation. With the use of the various widgets
provided by NCSLab, the system is able to monitor signals and parameters such as
current, voltage, power, and frequency. Parameters such as the frequency and the
sweep-frequency amplitude can easily be controlled in the user-defined interface.
Signals can be collected easily using widgets such as charts and gauges. More importantly,
this helps to remotely explore the optimum transfer frequency by tuning the exciting
frequency, sweeping frequency, and sweeping amplitude.
In order to analyse the power transfer efficiency and optimum frequency, data such
as the input and output power and the working frequency should be collected. In
particular, to obtain the optimum frequency, the WPT system should be set to the
sweep-frequency mode, which is shown in Fig. 7(a). The resonant frequency is
180.75 kHz at a distance of 13 cm with a sweep frequency of 0.4 and a sweep amplitude
of 1000 Hz, from which the transfer efficiency can be obtained. Figure 7(b) shows the
RCWPT system working at the resonant frequency, in which the output current and
voltage are 1.172 A and 5.782 V, respectively; the output power is thus calculated to be
6.777 W. From Fig. 7(b), it can be seen clearly that the bulb is brighter than at the
moment shown in Fig. 7(a), in which the output current and voltage are 0.8707 A and
2.782 V, respectively, and the output power is 2.422 W.
Once the state-of-the-art WPT technology is integrated into NCSLab, it is able to
provide remote access for teachers and students. On the one hand, the teacher can
clearly explain the RCWPT system through classroom demonstration. On the other
hand, the students can carry out the WPT experiment individually, anytime and anywhere,
with a customized control and monitoring interface. The integration brings the technology
close to students with less cost and more convenience.


Fig. 7. RCWPT system in NCSLab (a) working in sweep-frequency mode (frequency
180.75 kHz ± 1000 Hz, x = 0.4) (b) working at 180.75 kHz

6 Conclusion

In this paper, a WPT system is deployed into the NCSLab framework. The integration
benefits from the various monitoring and control widgets of NCSLab. The optimum
frequency and best working point can be easily obtained by setting the WPT system to
sweep-frequency mode using NCSLab widgets, which present the results in a visual and
intuitive interface. The system can then be tuned to the best working point by setting
the frequency obtained previously, which makes the system operate at its best
condition and achieve the highest output power. The remote control platform gives the
experimenter the flexibility to perform experiments remotely, anytime and anywhere, as
long as the Internet is available, addressing the tuning issue and the safety issue at
the same time. Using NCSLab, the WPT system can be integrated into an online
laboratory for remote experimentation, both for classroom demonstration and for
experiments by students, which brings state-of-the-art technology close to students.
288 Z. Lei et al.

Acknowledgement. This work was supported by the National Natural Science Foundation
(NNSF) of China under Grant 61374064.

References
1. Sample, A.P., Yeager, D.J., Powledge, P.S., Mamishev, A.V., Smith, J.R.: Design of an
RFID-based battery-free programmable sensing platform. IEEE Trans. Instrum. Meas. 57
(11), 2608–2615 (2008)
2. Kurs, A., Karalis, A., Moffatt, R., Joannopoulos, J.D., Fisher, P., Soljacic, M.: Wireless
power transfer via strongly coupled magnetic resonances. Science 317(5834), 83–86 (2007)
3. Inagaki, N.: Theory of image impedance matching for inductively coupled power transfer
systems. IEEE Trans. Microw. Theory Tech. 62, 901–908 (2014)
4. Kiani, M., Jow, U.-M., Ghovanloo, M.: Design and optimization of a 3-coil inductive link
for efficient wireless power transmission. IEEE Trans. Biomed. Circuits Syst. 5(6), 579–591
5. Sample, A.P., Meyer, D.A., Smith, J.R.: Analysis, experimental results, and range adaptation
of magnetically coupled resonators for wireless power transfer. IEEE Trans. Industr.
Electron. 58(2), 544–554 (2011)
6. Beh, T.C., Kato, M., Imura, T., Oh, S., Hori, Y.: Automated impedance matching system for
robust wireless power transfer via magnetic resonance coupling. IEEE Trans. Industr.
Electron. 60(9), 3689–3698 (2013)
7. Deng, Q., Liu, J., Czarkowski, D., Kazimierczuk, M.K., Bojarski, M., Zhou, H., Hu, W.:
Frequency-dependent resistance of litz-wire square solenoid coils and quality factor
optimization for wireless power transfer. IEEE Trans. Industr. Electron. 63(5), 2825–2837
8. Zhou, H., Zhu, B., Hu, W., Liu, Z., Gao, X.: Modelling and practical implementation of
2-coil wireless power transfer systems. J. Electr. Comput. Eng. 27, 1–8 (2014)
9. Hu, W., Zhou, H., Deng, Q., Gao, X.: Optimization algorithm and practical implementation
for 2-coil wireless power transfer systems. Am. Control Conf. (ACC) 2014, 4330–4335
10. Kang, S.H., Choi, J.H., Harackiewicz, F.J., Jung, C.W.: Magnetic resonant three-coil WPT
system between off/in-body for remote energy harvest. IEEE Microwave Wirel. Compon.
Lett. 26(9), 741–743 (2016)
11. Moon, S.C., Kim, B.C., Cho, S.Y., Ahn, C.H., Moon, G.W.: Analysis and design of a
wireless power transfer system with an intermediate coil for high efficiency. IEEE Trans.
Industr. Electron. 61(11), 5861–5870 (2014)
12. Yin, J., Lin, D., Lee, C.K., Hui, S.Y.R.: A systematic approach for load monitoring and
power control in wireless power transfer systems without any direct output measurement.
IEEE Trans. Power Electron. 30(3), 1657–1667 (2015)

13. RamRakhyani, A.K., Lazzi, G.: Interference-free wireless power transfer system for
biomedical implants using multi-coil approach. Electron. Lett. 50(12), 853–855 (2014)
14. Lai, J., Zhou, H., Lu, X., Yu, X., Hu, W.: Droop-based distributed cooperative control for
microgrids with time-varying delays. IEEE Trans. Smart Grid 7(4), 879–891 (2016)
15. Lu, X., Yu, X., Lai, J., Guerrero, J.M., Zhou, H.: Distributed secondary voltage and
frequency control for islanded microgrids with uncertain communication links. IEEE Trans.
Indus. Inf. doi:10.1109/TII.2016.2541693
16. Nedic, Z.: Demonstration of collaborative features of remote laboratory NetLab. In: 2012 9th
International Conference on Remote Engineering and Virtual Instrumentation (REV), pp. 1–4
17. Henke, K., Vietzke, T., Hutschenreuter, R., Wuttke, H.D.: The remote lab cloud
‘’. In: 2016 13th International Conference on Remote Engineering and
Virtual Instrumentation (REV), pp. 37–42 (2016)
18. Stefka, P., Zakova, K.: Displacement measurements versus time using a remote inclined
plane laboratory. In: 2016 13th International Conference on Remote Engineering and Virtual
Instrumentation (REV), pp. 435–439 (2016)
19. Hu, W., Liu, G.-P., Zhou, H.: Web-based 3-D control laboratory for remote real-time
experimentation. IEEE Trans. Industr. Electron. 60(10), 4673–4682 (2013)
20. Hu, W., Zhou, H., Liu, Z.-W., Zhong, L.: Web-based 3D interactive virtual control
laboratory based on NCSLab framework. Int. J. Online Eng. 10(6), 10–18 (2014)
21. Lei, Z., Hu, W., Zhou, H., Zhong, L., Gao, X.: A DC motor position control system in a 3D
real-time virtual laboratory environment based on NCSLab 3D. Int. J. Online Eng. 11(3),
49–55 (2015)
22. Hu, W., Liu, G.-P., Rees, D., Qiao, Y.: Design and implementation of web-based control
laboratory for test rigs in geographically diverse locations. IEEE Trans. Industr. Electron. 55
(6), 2343–2354 (2008)
23. Santana, I., Ferre, M., Izaguirre, E., Aracil, R., Hernández, L.: Remote laboratories for
education and research purposes in automatic control systems. IEEE Trans. Industr. Inf. 9(1),
547–556 (2013)
24. Maiti, A., Maxwell, A.D., Kist, A.A.: Features, trends and characteristics of remote access
laboratory management systems. Int. J. Online Eng. 10(2), 30–37 (2014)
25. Lei, Z., Hu, W., Zhou, H.: Deployment of a web-based control laboratory using HTML5. Int.
J. Online Eng. 12(7), 18–23 (2016)
26. Hu, W., Lei, Z., Zhou, H., Liu, G.-P., Deng, Q., Zhou, D., Liu, Z.-W.: Plug-in free web
based 3-D interactive laboratory for control engineering education. IEEE Trans. Industr.
Electron. doi:10.1109/TIE.2016.2645141
Spreading the VISIR Remote Lab Along Argentina.
The Experience in Patagonia

Unai Hernandez-Jayo1(✉), Javier Garcia-Zubia1, Alejandro Francisco Colombo2,
Susana Marchisio3, Sonia Beatriz Concari3, Federico Lerro3, María Isabel Pozzo4,
Elsa Dobboletta4, and Gustavo R. Alves5

1 University of Deusto, Avda Universidades, 24, 48007 Bilbao, Spain
2 Universidad Nacional de la Patagonia San Juan Bosco, Ciudad Universitaria Km 4,
9005 Comodoro Rivadavia, Chubut, Argentina
3 Universidad Nacional de Rosario, Maipu 1065, 2000 Rosario, Santa Fe, Argentina
4 Rosario Institute of Research in Educational Sciences (IRICE-CONICET-UNR),
Ocampo y Esmeralda, Rosario, Argentina
5 Polytechnic of Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal

Abstract. The learning of technical and science disciplines requires experimental and
practical training. Hands-on labs are the natural scenarios where practical skills can
be developed but, thanks to Information and Communication Technologies (ICT), virtual
and remote labs can also provide a framework where Science, Technology, Engineering
and Mathematics (STEM) disciplines can be developed. One of these remote labs is the
Virtual Instruments System in Reality (VISIR), specially designed for practice in the
area of analog electronics. This paper describes how this remote lab is being used in
the Universidad Nacional de la Patagonia San Juan Bosco (UNPSJB, Argentina), an
institution with no previous experience with remote labs, in the framework of the
VISIR+ project funded by the Erasmus+ Program ("This project has been funded with
support from the European Commission. This publication reflects the views only of the
authors, and the Commission cannot be held responsible for any use which may be made
of the information contained therein".).

1 Introduction

The Virtual Instrument Systems in Reality (VISIR) is a well-known remote lab that has
been discussed many times in this conference and in many articles published in
journals. Designed and developed by Prof. Ingvar Gustavsson in Sweden almost 10 years
ago [1], this remote lab has been set up in different European institutions. The
University of Deusto was the first institution to purchase and deploy the VISIR
outside Sweden, and it was followed by other universities in Spain, Austria and
Portugal. After the expansion of the remote lab platform, the VISIR Consortium was
created around it, aimed at sharing experiences and experiments using the VISIR as a
learning tool that helps students and teachers achieve the learning outcomes of
subjects related to analogue
© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_27
Spreading the VISIR Remote Lab Along Argentina 291

electronics. With the goal of spreading the knowledge about the VISIR, the VISIR+
project was presented to the ERASMUS+ European Union call, being finally accepted
in July 2015 [2]. To fulfil that goal, European universities that have the experience of
using the VISIR will transfer it to Latin American institutions, namely Higher Education
Institutions with engineering careers/courses.
VISIR+ has two well-differentiated stages: during the first one, institutions from
Latin America must deploy the physical elements, instruments and components of the
VISIR remote lab. This stage is supported by staff from BTH (Sweden), the developers
of the remote lab. In a second stage, the other European universities involved in the
project will help their Latin American partners to exploit the resources of the VISIR
remote lab as a learning tool, sharing with them their experiences over these years.
Rather than describing VISIR+, this paper explores the results of the first training
action, which was held in Rosario (Argentina) in September 2016. During this training
action, staff from the University of Deusto introduced the VISIR remote lab to more
than 25 trainers, lecturers and professors from different parts of Argentina who were
interested in discovering the possibilities offered by the VISIR. The sessions started
with an introduction to remote labs, as many of the attendants were new to these
environments.
The goal of this paper is to show not only the experiences during this training action,
but also the first intensive use of the VISIR by lecturers and students from Universidad
Nacional de la Patagonia San Juan Bosco (UNPSJB).

2 Scope of Training Action 2

One of the expected results of the project is a set of educational modules for
engineering courses comprising the use of hands-on, simulated and remote labs,
following an enquiry-based methodology. This implies the inclusion of the VISIR remote
lab in theoretical and practical lessons with students, within a variety of courses
related to electric and electronic circuits. In order to fulfill that objective, the
VISIR+ project includes two training actions at the associated Latin American partner
institutions of the project.
The first training action in the framework of the VISIR+ project took place at the
Facultad de Ciencias Exactas, Ingeniería y Agrimensura (FCEIA) of the Universidad
Nacional de Rosario (UNR) in September 2016. The training was developed over three
days, combining oral presentations, workshops and practical activities with VISIR. The
training sessions were led by two research professors of the Universidad de Deusto,
who are experts in the use of VISIR, plus three UNR teachers who regularly use remote
laboratory practice in their subjects of Electronic Engineering courses. Also present
was one researcher from the Instituto Rosario de Investigación en Ciencias de la
Educación (IRICE), a member of the VISIR+ project, with the aim of taking records of
the training sessions.
This training action at FCEIA targeted all teachers with lecture duties in Engineering
courses related to electric and electronic circuits, plus two representatives from
each of the two UNR associated partners: the Facultad Regional Rosario of the
Universidad Tecnológica Nacional and the Instituto Politécnico Superior of the UNR. As
this training action was also considered a key moment for dissemination at a regional
level, academic authorities, PhD students and teachers from other institutions near
UNR were also invited.
292 U. Hernandez-Jayo et al.
Three teachers from different Argentine universities were also invited to participate
in this training action. They were selected by the Consejo Federal de Decanos de
Ingeniería (CONFEDI). The participation of CONFEDI as an associated partner provides
the conditions for creating an additional impact at the national level in Argentina.
The three teachers attended the training sessions as regional coordinators of a
project that CONFEDI is carrying out in Argentina to encourage the subsequent
dissemination of the use of VISIR in the Engineering faculties. Belonging to this last
target group is the professor of the Universidad Nacional de la Patagonia San Juan
Bosco, whose experience using VISIR is presented in this paper.
During this training action, staff from Universidad de Deusto introduced the VISIR
remote lab to 26 trainers, lecturers and professors from different parts of Argentina that
were interested in discovering the possibilities offered by the VISIR (Fig. 1).

Fig. 1. Training action at Universidad Nacional de Rosario (Argentina)

Due to some administrative delays related to the import process, UNR still lacked
the necessary equipment to support training. This inconvenience was overcome by using
the VISIR platform of Universidad de Deusto, via Internet.
The sessions started with an introduction to remote labs because many of the attendants
were new to these environments. The training program included aspects related to the
design, implementation and evaluation of educational modules with VISIR. In addition,
it included application examples selected from those available on WebLab-Deusto, to
prove the adaptability of VISIR to different institutional cultures and its
universality in terms of experiments with electric and electronic circuits. The
teachers focused on both technical and didactic aspects, especially in order to
scaffold students' learning and foster their autonomy, namely by allowing them to
conduct real experiments over the Internet. Once the training was completed, and to
encourage both the teachers' motivation to use VISIR and the immediate application of
what was learned to the classroom context, attendees were asked to plan an educational
activity using VISIR, contextualizing the plan in their own subject, career and
institution.

2.1 Immediate Outputs of Training Action

A Satisfaction Questionnaire (SQ) was designed by the members of VISIR+ Project in
charge of Qualitative research, from the Research Institute of Education Sciences
(IRICE-CONICET) in Argentina and from the Instituto Politécnico do Porto (IPP) from
Portugal. The SQ had a twofold objective: measuring the immediate impact of TA on
target audience and evaluating possible scenarios for VISIR implementations in HE
institutions. The SQ was given to the 19 TA participants at UNR and the questions
focused on three main aspects of the TA: (1) the workshop (objectives and time allotted)
and the lecturers (interaction with participants); (2) the use of technological equipment,
i.e. VISIR Lab, as regards the didactic implications and practical use; and (3) the partic‐
ipant’s expectations on TA2. All questions were presented in the form of statements and
a Likert scale from 1 to 5, being (1) Unsatisfactory and (5) Excellent. Table 1 below
sums up the results.

Table 1. TA impact/outcomes

                       Workshop   Technological equipment   Participants' expectations
Excellent              48.17%     6%                        47%
Highly satisfactory    43.83%     26%                       43%
Above average          8.00%      68%                       10%

Most participants scored the workshop as excellent (48.17%) and highly satisfactory
(43.83%). Only 8% found the workshop above average. The evaluation of the workshop
included the overt explanation of the TA objectives, the time allotted, the instructors’
participation and the extent to which technological equipment had enhanced the effec‐
tiveness of teaching and learning. As regards the actual use of the technological equip‐
ment, namely VISIR Remote Lab, the answers ranged from too easy to use (i.e. excel‐
lent) 6%, easy to use (i.e. highly satisfactory) 26%, and just right (i.e. above average)
68%. Finally, TA met participants’ expectations by 47% as excellent, 43% as highly
satisfactory and 10% above average.
An open question was also included in the SQ in order to provide a qualitative
perspective to the evaluation by eliciting reflection on positive and negative aspects
of the whole experience. Three main categories arose from the reading of participants'
answers: equipment potential, clear presentation, and time. Most of the participants
argued that the training action raised awareness about the potential of the VISIR
equipment, not only by presenting the possibilities of actual use in the classroom but
also by giving participants the chance to experiment during the sessions. Secondly,
most participants pointed out that the presentation approach facilitated their
understanding of VISIR's technical and pedagogical use. Finally, participants referred
to the need for more time to extend the TA experience: the schedule was constrained to
some slots for actual connection to VISIR via the University of Deusto WebLab, which
participants considered limited.

3 Early Use of the VISIR in Patagonia

One of the TA participants, from the Universidad Nacional de la Patagonia San Juan
Bosco (UNPSJB) in Comodoro Rivadavia (Argentina), implemented the VISIR Remote Lab in
his subject, Theory of Circuits. The subject Theory of Circuits is in the second year
of Electronic Engineering at the Engineering College of UNPSJB. The VISIR Remote Lab
learning tool was introduced into the subject to give students more options for real
circuit experiments. To the traditional lab activities, a practice was added that
allows students to analyze and interpret the forced temporal response of a resistive,
inductive and capacitive (RLC) circuit. In this type of practice students had to
experiment on a real circuit, i.e. select components and instruments, make the
connections, set the instruments and carry out the measurement. Before the practice,
students carried out the modeling, calculus and simulation of the target phenomenon.
The modeling developed by students was based on circuit theory, from which a set of
physical magnitudes had to be calculated, expressions of variables obtained and
results interpreted. The behavior of the model was also simulated by means of
appropriate software and the results were compared. In the next stage, students
carried out the experiments using the VISIR Lab and contrasted the results against
calculus and simulation, drawing conclusions from the results. To organize the tasks,
a lab guideline was designed in which the objective of the
practice, the activities preliminary to real circuit experiment and procedures were made
explicit. Students had access to the remote lab and all necessary information about VISIR
from the subject webpage ( and
links to WebLab-Deusto from University of Deusto, Spain.

3.1 Students’ Use of VISIR During the Experimental Practice

The students carried out the activities individually in the computer room of the
Electronics Department. At the beginning of the activities, a professor guided
students in the use of the VISIR Lab, showing how to access the remote lab by means of
assigned usernames and passwords. Then students carried out the selection of
components, the wiring, the instrument configuration and the measurement, following
the given procedure and the objectives set for the practice. During this process,
students shared with classmates the results of each individual experience, their
learning and conclusions, with the professor this time acting as a moderator.
To analyze and interpret the behavior of the electric variables of RLC circuits, the
guide suggested the circuit shown in Fig. 2, with R1 = 100 Ω, C1 = 2.20 nF and
L1 = 10 mH.

Fig. 2. Experimental practice exercise

The procedure established that the circuit should be wired on the "protoboard", a
square signal of 500 Hz frequency and 1 V peak-to-peak amplitude generated, and the
signals Vg and VL obtained from the oscilloscope, from which the attenuation and
resonance frequencies should be measured (the theoretical magnitudes are
α = R1/(2·L1) and ω0 = 1/√(L1·C1), respectively). To obtain the attenuation frequency,
students observed on the oscilloscope the time τ = 1/α by which VL falls to 37% of its
initial value. To determine the resonance frequency, they observed the period T of the
sinusoid and calculated ω0 = 2π/T. The results obtained from the experience using the
VISIR Remote Lab were then compared to those of the previous activities. Students
submitted to the professor a report with the description of the practice carried out
and the conclusions drawn (Fig. 3).

Fig. 3. Practical implementation and results at VISIR remote lab
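As a cross-check of the guideline values, the theoretical magnitudes for the suggested components (R1 = 100 Ω, C1 = 2.20 nF, L1 = 10 mH) can be computed with a short script; comparing these against the oscilloscope readings remains the students' task:

```python
import math

# Component values from the lab guideline (Fig. 2)
R1 = 100.0    # ohms
C1 = 2.2e-9   # farads (2.20 nF)
L1 = 10e-3    # henries (10 mH)

# Series RLC step-response parameters used in the practice:
alpha = R1 / (2 * L1)            # attenuation (neper frequency), 1/s
omega0 = 1 / math.sqrt(L1 * C1)  # undamped resonance, rad/s
f0 = omega0 / (2 * math.pi)      # resonance frequency, Hz
tau = 1 / alpha                  # time for VL to fall to ~37% of its initial value

print(f"alpha  = {alpha:.0f} 1/s")
print(f"omega0 = {omega0:.0f} rad/s  (f0 = {f0 / 1e3:.1f} kHz)")
print(f"tau    = {tau * 1e3:.2f} ms")
# alpha < omega0, so the response is underdamped (a decaying sinusoid),
# which is what students observe as VL on the VISIR oscilloscope.
```

For these values alpha = 5000 1/s, f0 is roughly 33.9 kHz and tau is 0.2 ms, so the decaying oscillation fits comfortably within a few periods of the 500 Hz excitation.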

3.2 Impact

Adopting the new VISIR Remote Lab tool to carry out the experiments turned out to be
an appealing option for both students and professors. The tool is accessible and has
an outstanding graphic interface. During the experience, there was an immediate
adoption of the remote lab and the tool proved intuitive, especially to students, who
most of the time anticipated the teachers' explanations about its use. Probably
because they were familiar with similar real instruments at the UNPSJB lab, students
did not need to read manuals or additional online information about VISIR.
Many aspects of the subject Circuit Theory syllabus were strengthened by using a
remote lab, namely the teaching objectives, the management, the task organization, the
accessibility, and the relation and integration with other pedagogical means.

3.3 Analysis of the Experience

The VISIR instance of the University of Deusto is deployed on the WebLab-Deusto
RLMS (Remote Laboratory Management System) [3], which offers a set of adminis‐
tration tools in order to analyse the performance of the users during their remote exper‐
imentation sessions.
If this analysis is focused only on the UNPSJB target group, the following conclusions
can be drawn:
• The number of students involved in the experience was 11.
• The total number of uses of the lab was 46. On average, each student accessed the
lab about 4 times.
• The total time of all the sessions was 79215.06 s, that is, 7201.37 s per user.
• The maximum number of accesses per day was 23, with 3561 s being the maximum time
spent in a day (Fig. 4).

Fig. 4. Analysis of uses of VISIR by students from UNPSJB

If the experience of the most active user (unpsjb_1) is studied, we can obtain the
following information:
• The user with the account unpsjb_1 accessed the lab 11 times. The total time spent
by the user on the lab was 11780.19 s.
• The day on which the user spent the most time on the lab was October 18, spending
almost 1 h performing experiments. From this session, the following information can
be obtained:
– The user performed 31 experiments on the lab. This does not mean that the user built
31 different circuits, but that he/she executed one or more experiments 31 times.
– This session came before the last one, and he/she did not perform any wrong circuit.
This means that he/she did not try to build any disallowed circuit or measurement.

– During the whole session, the circuit under test was the same and it was built by the
user in the same way. He/she only changed the configuration of the instruments to
obtain a better resolution of the measurement and then a better understanding of the
circuit behaviour.
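The per-user statistics above can be derived from RLMS session records with a short aggregation script. The record layout below is hypothetical sample data; WebLab-Deusto exposes the equivalent information through its administration tools:

```python
from collections import defaultdict

# Hypothetical session records: (user, date, session length in seconds).
# WebLab-Deusto stores equivalent data in its administration database.
sessions = [
    ("unpsjb_1", "2016-10-18", 3600.0),
    ("unpsjb_1", "2016-10-20", 1200.5),
    ("unpsjb_2", "2016-10-18", 950.0),
]

total_uses = len(sessions)
total_time = sum(length for _, _, length in sessions)
users = {user for user, _, _ in sessions}

# Accumulate time per user to find the most active one.
per_user = defaultdict(float)
for user, _, length in sessions:
    per_user[user] += length

print(f"uses: {total_uses}, avg uses/user: {total_uses / len(users):.1f}")
print(f"total time: {total_time:.1f} s, time/user: {total_time / len(users):.1f} s")
print("most active:", max(per_user, key=per_user.get))
```

Running the same aggregation over the real UNPSJB records yields the figures reported above (46 uses by 11 students, 79215.06 s in total, unpsjb_1 as the most active account).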

4 Conclusion

The outcomes defined for the VISIR+ project are the natural evolution of the use of
the VISIR remote lab during the last 10 years. This remote lab has been tested and
used by all the European partners involved in the project, so it is now high time it
was deployed in other regions such as Latin America. All the experiences and
experiments developed over ten years are thus going to be shared among all the
institutions of the project. The project implemented Training Actions to bridge these
experiences between European and Latin American institutions. This paper shows how the
VISIR instance deployed at the University of Deusto is being used by the Universidad
Nacional de la Patagonia San Juan Bosco (UNPSJB) in Comodoro Rivadavia. However, this
is only the first step of the VISIR spreading in Latin American countries. According
to the working plan of the project, two VISIR platforms will be deployed in Argentina,
making their use by other Argentinean institutions easier and faster.

Acknowledgment. The authors would like to acknowledge the support given by the European
Commission to the VISIR+ project through grant 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-

References
1. Gustavsson, I., Nilsson, K., Zackrisson, J., Garcia-Zubia, J., Hernández-Jayo, U., Nafalski, A.,
Nedic, Z., Gol, O., Machotka, J., Pettersson, M.I., Lago, T., Hkansson, L.: On objectives of
instructional laboratories, individual assessment, and use of collaborative remote laboratories.
IEEE Trans. Learn. Technol. 2(4), 263–274 (2009)
2. Alves, G.R., Fidalgo, A., Marques, A., Viegas, C., et al.: Spreading remote lab usage. A
System – A Community – A Federation. In: CISPEE Conference, Vila Real, Portugal, 19–
21 October 2016
3. Orduña, P., Bailey, P.H., Delong, K., López-De-Ipiña, D., García-Zubia, J.: Towards federated
interoperable bridges for sharing educational remote laboratories. Comput. Hum. Behav. 30,
389–395 (2014)
Educational Scenarios Using Remote Laboratory VISIR
for Electrical/Electronic Experimentation

Felix Garcia-Loro1, Ruben Fernandez2, Mario Gomez2, Hector Paz2, Fernando Soria2,
María Isabel Pozzo3, Elsa Dobboletta3, André Fidalgo3,4, Gustavo Alves4,
Elio Sancristobal1, Gabriel Diaz1, and Manuel Castro1(✉)

1 UNED, Madrid, Spain
2 UNSE, Santiago del Estero, Argentina
3 IRICE-CONICET, Rosario, Argentina
4 IPP, Porto, Portugal

Abstract. In 2015, the Electrical and Computer Engineering Department (DIEEC)
of the Spanish University for Distance Education (UNED), together with the
Santiago del Estero National University (UNSE, Argentina), with the support of
the Research Institute of Education Sciences of Rosario (IRICE-CONICET,
Argentina) and under the coordination of the Polytechnic Institute of Porto
(IPP, Portugal), started the development and deployment of the VISIR system at
UNSE as part of the VISIR+ Project.
The main objective of the VISIR+ Project is to extend the current VISIR
network in South America, mainly in Argentina and Brazil, with the support and
patronage of the European Union Erasmus Plus program, inside the Capacity
Building program and as part of a future excellence-network integration
framework. This extension of VISIR nodes led in 2016 to a new project, PILAR,
which, as part of the Erasmus Plus projects, will allow the Strategic
Partnership to develop a new federation umbrella over the existing nodes.

Keywords: Remote laboratory · VISIR · Educational scenarios

1 Context

Experimentation has always been a pillar on which educational institutions rely to
narrow the gap between the academic and industrial worlds. Experimentation allows
students to interact with real components, equipment and instruments, to verify the
theoretical laws governing the behavior of electric and electronic circuits, and to
analyze non-desired effects such as noise on output signals, temperature effects on
components, the behavior of different component technologies, etc. Unfortunately,
laboratory resources are limited because of their availability, costs, etc. This
limitation induces in students a tendency to address practical experiences separately
from theoretical contents, as if they were two unrelated activities.

© Springer International Publishing AG 2018
M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_28
Educational Scenarios Using Remote Laboratory VISIR 299

2 Orientations on the Work

The emergence of remote laboratories has provided new horizons in the learning process
and has brought new challenges in teaching design. Remote laboratories are being used
in many different ways and with different strategies, just as in-person laboratories
have traditionally been used.
Remote laboratories are a new tool that complements in-person laboratories, simulators
and virtual laboratories. The pool formed by all the available options provides a wide
range of possibilities when designing a course in which experimentation plays a key
role.
The Virtual Instruments System In Reality (VISIR) is a remote lab for wiring electric
and electronic circuit experiments that has been used, in the Electrical and Computer
Engineering Department (DIEEC) of the Spanish University for Distance Education
(UNED), within several subjects from different engineering degrees, master subjects,
expertise courses, Small Private Online Courses (SPOCs) and Massive Open Online
Courses (MOOCs), providing satisfactory results with regard to both its performance
and the skills acquired by students.
The whole system, formed by all the actors and all the strategies used in the diverse
scenarios, has been analyzed in order to define a new learning environment, with the
objective of achieving an improved system in which all the teaching/learning scenarios
have room, solving the inconveniences experienced both separately and in their
interaction.

3 Approach Used

The main advantage of remote laboratories over in-person laboratories lies in their
availability without temporal or geographical restrictions. The main advantage of
VISIR, compared with other electronic remote laboratories, lies in its concurrent
access: multiple users interact with the remote laboratory simultaneously, designing
the same or different circuits and monitoring the same or different signals in real
time, as in an in-person laboratory room with replicated workbenches.
The experience gained in the integration of the remote laboratory VISIR, mainly in
distance education, and the data collected from students' feedback, logs related to
the remote laboratory interaction, surveys, etc. have allowed the needs for
improvement and/or redesign to be identified.
All the data gathered from the LMS (Learning Management System) platforms have been
obtained by analyzing the different databases (PostgreSQL, MongoDB). To evaluate VISIR
behavior (accuracy of the measurements, response when managing request overload, etc.)
and to inspect students' interaction with the laboratory (common VISIR mistakes,
number of accesses, etc.), its database (MySQL) and its logs (over 51 million log
lines) have been analyzed.
300 F. Garcia-Loro et al.
Inside the VISIR+ and PILAR projects, as well as previously inside the use of the
remote laboratory VISIR in the distance and online learning courses at UNED and the
MOOCs delivered using the VISIR system, a wide use has been obtained and published
regarding this use, [1–9].
According to Ursutiu et al. [10] and their reference to Learning by Experience from
Haynes, any learning experience involves a number of steps:
• Experiencing/doing with the instructor’s help or not;
• Sharing/what happened?
• Processing/analyzing;
• Generalizing;
• Applying.
Building on this experience, the process developed to make the start-up of new VISIR
installations [11–16] (hardware, software and educational uses within higher education
institutions) more effective is:
1. Share publications and tutorials regarding the use of VISIR inside electrical and
electronics engineering courses.
2. Share the use of VISIR remotely to allow the new teachers access to start working
with the VISIR system.
3. Run a first face-to-face experience with some of the decision-making teachers and
   academic administrators regarding the feasibility and best practices of using VISIR
   in the target institution.
4. Run several synchronous sessions (using a collaborative environment, e.g. Moodle,
   videoconference facilities, etc.) with the new teachers and personnel involved in the
   new deployment to allow a quick start and a first touch of the system. During these
   preliminary sessions the expert or monitor shows the main functions and
   specifications of the VISIR system, as well as some simple starting examples, in the
   same environments and working area as the future implementation.
5. Deliver the face-to-face training to all the people involved in the on-site
   implementation, as well as to possible new target-institution members in the area of
   the local university, in order to build a core group of users for future local use.
6. Carry out a local experience in the use and educational implementation of the VISIR
   remote examples with local students, both inside the classroom and through remote
   access, to extend the experience and the complementary use of the remote
   laboratory.
7. Extend the teaching experience from the local institution to all the core new
   institutions in the local area to reinforce the knowledge and implementation, as well
   as to develop new local strategies and synergies.
Educational Scenarios Using Remote Laboratory VISIR 301

8. Carry out a formal evaluation and quality assurance of the whole process involved
   in the acquisition and deployment of the VISIR remote laboratory.

4 Outcomes

The integration of remote laboratories in online learning environments, together with
good practices in designing practical experiences, can alleviate the disadvantages of
remote laboratories compared to in-person laboratories without giving up their
inherent advantages. What is more, the strategy of using diverse and complementary
options in the same course (such as in-person laboratories, remote laboratories and/or
simulators) provides a broad range of capabilities and an easier assimilation of the
experimental advantages in the academic domain [17–19].
Students have been able to complete the different activities and tasks from the
different courses and educational platforms, to interact with the remote lab, etc. So,
for students, the systems used have fulfilled their function: to provide the remote
laboratory along with theoretical contents.
Previous experience with the UNED systems (communities and platform, aLF, and the
INTECA videoconference system) allowed the implementation of the remote
laboratories as well as of the support systems within UNED Abierta [20–24].
However, it has not been possible for teaching staff to track students' interaction
across the different actors, so the information obtained from them could not be
cross-referenced. A new whole system, taking into account all the inconveniences and
difficulties found, has been developed and is being deployed for the coming academic
year.

5 Conclusions

The results show the flexibility of the VISIR remote laboratory in different learning
scenarios. Alongside VISIR, a well-designed course, contents and experimental
experiences are needed to obtain satisfactory results, since VISIR, like any remote
laboratory, is a tool: a means, not an end in itself.
An LMS platform with the necessary tools for a deeper analysis of the students'
learning process, and one that integrates both environments (course platforms and
remote laboratory), seems necessary in order to evaluate the usefulness of the
supplementary documentation (videos, documents, activities, etc.) and its relationship
with learning and disengagement.
All these findings led to a new and more inclusive structure for the whole system, in
order to better exploit the experimental resources and, mainly, to create a new
learning environment intended for the analysis of the learning process.

Acknowledgments. The authors acknowledge the support of the eMadrid project (Investigación
y Desarrollo de Tecnologías Educativas en la Comunidad de Madrid) - S2013/ICE-2715, VISIR
+ project (Educational Modules for Electric and Electronic Circuits Theory and Practice following
an Enquiry-based Teaching and Learning Methodology supported by VISIR) Erasmus+ Capacity
Building in Higher Education 2015 nº 561735-EPP-1-2015-1-PT-EPPKA2-CBHE-JP and PILAR
project (Platform Integration of Laboratories based on the Architecture of visiR), Erasmus
+ Strategic Partnership nº 2016-1-ES01-KA203-025327.
And to the Education Innovation Project (PIE) of UNED, GID2016-17-1, “Prácticas remotas
de electrónica en la UNED, Europa y Latinoamérica con Visir - PR-VISIR”, from the Academic
and Quality Vice-Rectorate and the IUED (Instituto Universitario de Educación a Distancia) of
the UNED.


1. Tawfik, M., Sancristobal, E., Martin, S., Gil, R., Diaz, G., Colmenar, A., Peire, J., Castro, M.,
Nilsson, K., Zackrisson, J., Håkansson, L., Gustavsson, I.: Virtual Instrument Systems in
Reality (VISIR) for remote wiring and measurement of electronic circuits on breadboard.
IEEE Trans. Ind. Electr. 6(1), 60–72 (2013)
2. Haertel, T., Terkowsky, C.: Creativity versus adaption: a paradox in higher engineering
education. Int. J. Creativity Probl. Solving 26(2), 105–119 (2016)
3. VISIR+ Project. Educational Modules for Electric and Electronic Circuits Theory and
Practice following an Enquiry-based Teaching and Learning Methodology supported by
VISIR - Erasmus+ Capacity Building in Higher Education 2015 nº 561735-EPP-1-2015-1-
PT-EPPKA2-CBHE-JP. Accessed 15 Nov 2016
4. PILAR Project. Platform Integration of Laboratories based on the Architecture of visiR -
Erasmus+ Strategic Partnership nº 2016-1-ES01-KA203-025327.
SpacesStore/2d88ecb1-3db1-4a29-93c1-dd2802eec4f6. Accessed 15 Nov 2016
5. Naef, O.: Real laboratory, virtual laboratory or remote laboratory: what is the most effective
way? Intl. J. Online Eng. 2(3), 1–7 (2006)
6. Hanson, B., Culmer, P., Gallagher, J., Page, K., Read, E., Weightman, A., Levesley, M.:
ReLOAD: Real Laboratories Operated at a Distance. IEEE Trans. Learn. Technol. 2(4), 331–
341 (2009)
7. Nedic, Z., Machotka, J., Nafalski, A.: Remote laboratories versus virtual and real laboratories.
In: 34th ASEE/IEEE Frontiers in Education Conference, T3E-1, pp. 1–6, November 2003
8. Coble, A., Smallbone, A., Bhave, A., Watson, R., Braumann, A., Kraft, M.: Delivering
authentic experiences for engineering students and professionals through e-labs. In: IEEE
EDUCON, pp. 1085–1090 (2010)
9. Sancristobal, E., Martin, S., Gil, R., Orduna, P., Tawfik, M., Pesquera, A., Diaz, G., Colmenar,
A., Garcia-Zubia, J., Castro, M.: State of art, initiatives and new challenges for virtual and
remote labs. In: IEEE 12th International Conference on Advanced Learning Technologies,
ICALT, pp. 714–715, July 2012
10. Ursutiu, D., Samoila, C., Jinga, V.: Remote experiment and creativity. Int. J. Creativity Probl.
Solv. 26(2), 47–80 (2016)
11. Potkonjak, V., Vukobratovic, M., Jovanovic, K., Medenica, M.: Virtual mechatronic/robotic
laboratory - a step further in distance learning. Comput. Educ. 55, 465–475 (2010)

12. Jara, C.A., Candelas, F.A., Puente, S.T., Torres, F.: Hands-on experiences of undergraduate
students in automatics and robotics using a virtual and remote laboratory. Comput. Educ.
57, 2451–2461 (2011)
13. Rojko, A., Hercog, D., Jezernik, K.: Power engineering and motion control web laboratory:
design, implementation, and evaluation of mechatronics course. IEEE Trans. Ind. Electron.
57(10), 3343–3354 (2010)
14. Vivar, M.A., Magna, A.R.: Design, implementation and use of a remote network lab as an
aid to support teaching computer network. In: Third International Conference on Digital
Information Management, ICDIM, London (UK), 13–16 November 2008
15. Gustavsson, I., Nilsson, K., Lagö, T.L.: The visir open lab platform. In: Azad, A.K.M., Auer,
M.E., Harward, V.J. (eds.) Internet Accessible Remote Laboratories: Scalable E-Learning
Tools for Engineering and Science Disciplines Engineering Science Reference, pp. 294–317
(2012). ISBN 978-1-61350-186-3
16. Sheridan, T.B.: Descartes, Heidegger, Gibson, and God: towards an eclectic ontology of
presence. Presence Teleoperators Virtual Env. 8(5), 551–559 (1999)
17. Ma, J., Nickerson, J.V.: Hands-on, simulated, and remote laboratories: a comparative
literature review. ACM Comput. Surv. 38(3), 1–24 (2006)
18. Lang, D., Mengelkamp, C., Jager, R.S., Geoffroy, D., Billaud, M., Zimmer, T.: Pedagogical
evaluation of remote laboratories in eMerge project. Eur. J. Eng. Educ. 32(1), 57–72 (2007)
19. Lindsay, E.D., Good, M.C.: Effects of laboratory access modes upon learning outcomes. IEEE
Trans. Educ. 48(4), 619–631 (2005)
20. INTECCA | ¿Qué es AVIP? (2016).
inteccainfo/plataforma-avip/que-es-avip/. Accessed 15 Nov 2016
21. VISIR SIG: VISIR Special Interest Groups (SIG). http:// Accessed 15 Nov 2016
22. UNED-DIEEC: VISIR – Electronics Remote Lab « Research on Technologies for Engineering
Education (2016). Accessed 15 Nov 2016
23. Openlabs: OpenLabs - ElectroLab (2016).
ElectroLab. Accessed 26 Jan 2016
24. UNED Abierta: UNED Abierta (2015). Accessed 15 Nov 2016
25. IMS Global Learning Consortium. https:// Accessed 15 Nov 2016
26. IMS Global Learning Consortium: IMS Global Learning Tools
Interoperability Basic LTI Implementation Guide.
implementation-guide. Accessed 15 Nov 2016
27. Coursera Technology: LTI Integration.
platform/lti/. Accessed 26 Jan 2016
28. OAuth Community Site. Accessed 15 Nov 2016
29. IMS Global Learning Consortium: Learning Tools Interoperability® Background.
Accessed 15 Nov 2016
30. Weblab-Deusto: WebLab-Deusto, documentation. Authentication. https://weblabdeusto. Accessed 15 Nov 2016
31. Moodle: Moodle - Open-source learning platform. Accessed 15 Nov 2016
32. Swope, J.J.: A Comparison of Five Free MOOC Platforms for Educators: EdTech. (2014).
platforms- educators. Accessed 15 Nov 2016
Use and Application of Remote and
Virtual Labs in Education
Robot Online Learning Through Digital Twin
Experiments: A Weightlifting Project

Igor Verner1(✉), Dan Cuperman1, Amy Fang2, Michael Reitman3,
Tal Romm1,3, and Gali Balikin1,3

1 Technion – Israel Institute of Technology, Haifa, Israel
2 Massachusetts Institute of Technology, Boston, MA, USA
3 PTC Inc., Haifa Office, Haifa, Israel

Abstract. This paper proposes and explores an approach in which robotics
projects of novice engineering students focus on the development of learning
robots. We implemented a reinforcement learning scenario in which a humanoid
robot learns to lift a weight of unknown mass through autonomous trial-and-error
search. To expedite the process, trials of the physical robot are substituted by
simulations with its virtual twin. The optimal parameters of the robot posture for
executing the weightlifting task, found by analysis of the virtual trials, are
transmitted to the robot through internet communication. The approach exposes
students to the concepts and technologies of machine learning, parametric design,
digital prototyping and simulation, connectivity and the internet of things. A pilot
implementation of the approach indicates its potential for teaching freshman and
high school (HS) students, and for teacher education.

Keywords: Robot learning · Weightlifting · Virtual twin · Internet of Things

1 Introduction

The view of robots as systems that repetitively perform preprogrammed behaviors under
automatic control is changing. The increasing complexity of robot hardware and the
growing sophistication of robot tasks in unstructured dynamic environments make it
difficult to preprogram robots for every possible scenario, urging the development of
robots capable of adapting to changes, coordinating movements, and learning new
behaviors [1]. The robot intelligence technologies used to implement these capabilities
are based on methods of machine learning, simulation, and cloud computing.
The two machine learning approaches mainly applied to teach robots new skills are
imitation learning and reinforcement learning [2, 3]. In imitation learning, the robot
records and imitates a target movement demonstrated by the instructor. In
reinforcement learning (RL), the robot is not directly instructed; through autonomous
trial-and-error search, it determines and records the action which optimizes the performance

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_29
308 I. Verner et al.

criterion. The opportunities to teach robots new skills, or adapt existing skills to new
situations, using the RL approach have been intensively investigated; one of the main
challenges is to reduce the experimentation time and the wear and tear on the robot. A
recommended way to cope with this challenge is an empirical predictive model which
is autonomously generated and used to guide robot actions [4].
Simulation in robotics is a software tool for designing and testing robot behaviors on a
virtual robot before implementing them on a real robot. Some of the benefits of using
robotics simulations listed in [5] relate directly to reinforcement learning. In particular,
performing robot trials in a virtual environment allows experimental data to be generated
faster, more easily, and in any desired quantity, thus significantly speeding up the
learning process. Modern computer-aided design systems provide means for creating
virtual models which accurately resemble the geometric and mechanical characteristics
of real robots.
Cloud robotics is a method to enhance the functionality of a robot by using remote
computing resources: memory, computational power, and connectivity [6]. In robot
learning, connection to a cloud platform enables the accumulation, storage, and
processing of data from robot trials and other relevant information on the Web server,
and the transmission of these data to the robot. The platform can serve as a hub of an
Internet of Things (IoT) network, through which robots can share learned skills with
each other and communicate with other systems.
The goal of our research project is to propose and explore an approach in which the
challenge of implementing robot learning is used as a thread for teaching the
discussed robot intelligence technologies to high school and first-year engineering
students. In this approach, the student is assigned a robot task in which the desired
behavior cannot be pre-programmed but has to be learned by the robot. In such a
project the student teaches the robot to acquire the skill by implementing a
reinforcement learning process supported by simulation modeling and cloud
communication.
Our research is an ongoing multi-case study conducted at the Technion Center for
Robotics and Digital Technology Education through collaboration with the PTC Israel
Office. Participants are 1st and 2nd year students from MIT doing summer internship
projects in our lab, high school (HS) students participating in our outreach activities,
and Technion students studying technology education. We utilize robots constructed by
students using the ROBOTIS Bioloid Premium kit and software tools by PTC, namely
the 3D modeling system Creo Parametric and the IoT platform ThingWorx.
Our project so far has passed through three research phases. In the first phase,
university students implemented an RL scenario in which a humanoid robot, through a
series of trials, learns to adapt its body tilt angle for lifting different weights [7]. In the
second phase, a group of high school students, mentored by a faculty staff member and
our university students, constructed animal-like robots and implemented different RL
scenarios, utilizing the approach tested in the first phase. The focus of this paper is the
third phase, in which university students apply reinforcement learning, 3D modeling,
and cloud communication in order to implement a scenario in which a humanoid robot
learns to manipulate multiple joints to maintain its stability when lifting different
weights.
Robot Online Learning Through Digital Twin Experiments 309

2 Robot Weightlifting Task

Pick-and-place manipulations by fixed-base robots are widely explored and studied in
industrial robotics. On the other hand, planning basic handling tasks such as
weightlifting to be executed by humanoid robots is still an evolving research topic
[8, 9]. In the weightlifting task, if the mass and size of the weight are known to the
robot, then its posture can be controlled in open loop. The control policy is to prevent
the robot from falling down by maintaining its static and dynamic stability [9]. If the
weight's mass and size are unknown, closed-loop control is needed. Here, the control
policy can be determined analytically, based on the dynamic model of the robot and
data from force and torque sensors [8]. Rosenstein et al. [10] noted that analytic
solutions for humanoid robot weightlifting can be complex. They proposed an
alternative approach based on reinforcement learning through trial and error.
Recently, performing weightlifting tasks with humanoid robots has become a
challenge in educational robotics addressed to university and even high school
students [11]. Michieletto et al. [12] used a weightlifting task as a challenge in their
“Autonomous Robotics” course for MSc students majoring in computer science. The
task was implemented through robot learning from a human demonstrator with the aid
of Microsoft Kinect. Weightlifting by a humanoid robot was also posed as a
benchmark assignment of the robot competition FIRA HuroCup for university and
school students [12, 13]. That assignment was posed without reference to robot
learning.
Our motivation to explore the educational challenge of humanoid robot weightlifting
came from developing a fetch-and-carry robot for the RoboWaiter contest [14] in which
we introduced the humanoid league [15]. In the contest assignment, the mass, size, and
location of the weight were predetermined. Following the contest project, we turned to
the new challenge of lifting a weight when its mass is unknown to the humanoid robot.
As mentioned in the introduction, in the first phase of our project, undergraduate students
constructed and programmed a humanoid robot that coped with the new challenge and
learned to lift an unknown weight by a series of trials and errors [7]. The robot was built
from the ROBOTIS Bioloid Premium kit and had 18 degrees of freedom, an acceler‐
ometer, a Bluetooth communication module, an IR sensor, and a sound sensor. The robot
was programmed using RoboPlus software.
The reinforcement learning scenario was as follows: The robot is given an unknown
weight while sitting down. The mass of the weight is estimated by measuring the angular
velocity of the robot shoulder joints in the way described in [7]. Then, the robot performs
weightlifting trials for different values of body tilt angles, each time attempting to stand
up from the sitting position. The robot evaluates whether it succeeded or failed the task
by determining if it remains standing or has toppled over, based on data provided by its
accelerometer [7].
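The trial loop just described can be sketched as follows; `try_lift` is a stand-in for one physical trial (execute the stand-up at a given tilt angle, then read the accelerometer), and the angle range is illustrative, not the robot's actual parameters:

```python
def learn_tilt_angle(try_lift, angles=range(0, 41, 5)):
    """Autonomous trial-and-error search over body tilt angles (degrees).

    `try_lift(angle)` stands in for one physical weightlifting trial; it
    returns True if the robot remains standing (the accelerometer check)
    and False if it toppled over. Returns the first successful angle
    together with the record of all trials performed.
    """
    results = {}                       # angle -> outcome of that trial
    for angle in angles:
        results[angle] = try_lift(angle)
        if results[angle]:
            return angle, results
    return None, results               # no angle worked for this mass

# Toy stand-in for the physical robot: it topples for tilt angles below 20.
simulated_trial = lambda angle: angle >= 20
best_angle, trial_log = learn_tilt_angle(simulated_trial)
print(best_angle)  # 20
```

The recorded `trial_log` is exactly the kind of per-trial data that, as described next, had to be stored off the robot because of the controller's memory limits.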
Because of the memory limitations of the robot controller, the robot can store the
results of only a limited number of trials, and only until it is powered off. Therefore,
the empirical data acquired from the robot trials were stored on a local computer. The
computer communicated with the robot via a Bluetooth interface supported by a
Python script. Based on these data, the computer provided the robot with the tilt angle
value suitable for successfully lifting the specific weight.

In the following section we will discuss the way in which the robot weightlifting
task has been implemented in our current study.

3 Development of Robot Learning Mechanisms

The developed robot learning environment is presented in Fig. 1. It consists of three
components: the robot, the simulator (digital twin) and the cloud (ThingWorx). The
constructed robot is essentially the same as in the first stage of our project, but was
upgraded by adding grippers to suit barbell lifting. The digital twin is a virtual
counterpart of the robot created to test robot functioning in simulation mode instead of
testing the physical robot. The ThingWorx server is connected with the robot through
the local computer, used as a routing point. ThingWorx also receives and analyzes
data from the simulator and sends recommendations for the weightlifting posture to
the robot upon request.
Fig. 1. The implemented robot learning environment

3.1 Construction and Calibration of Virtual Twin

The virtual robot was created using the Creo modeling software. We took the
computer-designed models of the parts of the Bioloid Premium kit from the ROBOTIS
website and imported them into Creo. Using these parts, we assembled the virtual
robot in the same order as the construction of the physical robot. Because the part
models on the website do not have assigned weights, we weighed the parts and added
the mass properties to each part in Creo. We also assigned to each joint of the virtual
robot the same range of motion as the corresponding joint of the physical robot.
After assembling the virtual robot, we calibrated the model to have the same balance
characteristics as the physical robot. During this step we modified several parts of the
model, such as its motors, to take into account their uneven distribution of mass.

Then, we compared the balance characteristics of the physical robot and its digital twin.
We tested the balance of the physical and virtual robots in the same posture shown in
Fig. 1, for different values of mass of the weight. For the virtual robot, the calculation
was made using the “center of gravity analysis” and “sensitivity analysis” features of
Creo. The maximal mass that each robot can hold in this posture without losing stability
was determined. We calculated that the discrepancy between the physical robot and its
virtual twin was less than 3%.

3.2 Simulation Analysis

The objective of the simulation analysis was to optimize the reinforcement learning of
the physical robot. Two possible approaches to this analysis were considered:
real-time online simulation and batch offline simulation. The latter was chosen as
simpler and more applicable to cloud-based learning. The approach implements a
massive multi-parametric analysis (virtual testing) of weightlifting by the digital twin
to create a “space” of possible solutions and refine it into a “sub-space” of optimal
solutions. The optimal solutions are then stored on the IoT platform and used in online
communication with the physical robot.
The problem for the simulation analysis was defined as follows: for a weight of a
given mass, test the balance of the virtual robot in its various postures over the range
of possible bending angles at the hip, knee, and ankle joints. The concept of a “virtual
sensor” was utilized: using the capabilities of Creo, we attached a “sensor” to the
center of gravity of the digital twin to analyze the balance of the robot for different
combinations of possible joint angles. The angles ranged over 100° for the hip and
knee joints and over 90° for the ankle joint, divided into intervals of 10°. This resulted
in 10 angles for the hip joint, 10 angles for the knee joint, and 9 angles for the ankle
joint, i.e. a total of 900 robot postures to analyze for each value of the weight's mass.
Two parameters were calculated for each posture of the virtual twin: d, the distance
between the center of gravity and the center of the foot, and h, the robot's height. The
distance d is used to evaluate the robot's stability, and the height h characterizes the
robot's posture when completing the task. Using the “Design Study” capability of
Creo, the simulator determined the various combinations of angles that allow the
robot to lift the given weight. Further analysis found the optimal combination, in
which the robot is in balance, h is maximal, and d is minimal for that h. We note that
the number of virtual trials can be reduced by using the Dynamic Analysis capability
of Creo.
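The sweep and the selection rule can be sketched as follows. The posture metric below is a toy stand-in for the Creo balance analysis, not the actual model; only the grid (10 × 10 × 9 postures) and the optimality criterion come from the text above:

```python
from itertools import product

def sweep(posture_metrics, hips, knees, ankles):
    """Evaluate every joint-angle combination (the 'space' of solutions).

    `posture_metrics(hip, knee, ankle)` stands in for the Creo analysis;
    it returns (balanced, height_h, distance_d) for that posture.
    """
    solutions = []
    for h, k, a in product(hips, knees, ankles):
        balanced, height, d = posture_metrics(h, k, a)
        if balanced:
            solutions.append(((h, k, a), height, d))
    # Optimal: the robot is in balance, h is maximal, and d is minimal
    # among the balanced postures of that maximal height.
    return max(solutions, key=lambda s: (s[1], -s[2])) if solutions else None

hips = knees = range(10, 101, 10)      # 10 angles each, 10-degree steps
ankles = range(10, 91, 10)             # 9 angles
assert len(list(product(hips, knees, ankles))) == 900  # postures per mass

# Toy metric: balanced when the ankle bends no more than the knee; height
# grows as the hip and knee straighten; d shrinks as hip and ankle match.
toy = lambda h, k, a: (a <= k, 300 - h - k, abs(h - a))
best = sweep(toy, hips, knees, ankles)
```

With the real Creo-derived metric, the same loop yields the refined "sub-space" of optimal solutions that is uploaded to the cloud.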

3.3 Cloud Management of Robot Learning Data

Each optimal solution, generated through the simulation analysis for each mass value,
contains three parameters: the hip, knee and ankle angles. A database of these optimal
solutions has been uploaded to a data table on the IoT platform ThingWorx.
We defined a function in ThingWorx which, given the weight value as input, queries
the data table and returns the corresponding angle values of the optimal solution.

When the physical robot has to lift a weight, it first measures the weight mass, and sends
its value to the ThingWorx server. In response, the robot receives the values of the three
angles suggested based on the simulation analysis. Then the robot executes the lifting
using those values.
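Stripped of the platform specifics, the cloud-side lookup is a simple mass-to-angles function over the data table; the table entries below are made-up values for illustration, and nearest-mass matching is an assumption about how intermediate masses might be handled:

```python
# Data table of optimal solutions: mass (kg) -> (hip, knee, ankle) angles
# in degrees. The values here are illustrative, not the measured optima.
DATA_TABLE = {
    0.1: (30, 50, 20),
    0.2: (40, 60, 30),
    0.3: (50, 70, 40),
}

def posture_for_mass(mass_kg):
    """Cloud-side service: given the measured mass, return the angles of
    the nearest tabulated optimal solution."""
    nearest = min(DATA_TABLE, key=lambda m: abs(m - mass_kg))
    return DATA_TABLE[nearest]

# Robot side: measure the mass, query the service, lift with the answer.
hip, knee, ankle = posture_for_mass(0.22)
print(hip, knee, ankle)  # 40 60 30
```

In the deployed system this function lives on the ThingWorx server and the robot reaches it over the network through the local routing computer.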
To visualize the online communication between the robot and ThingWorx, we
created a mashup web page which serves as a dashboard for displaying parameters of
the robot weightlifting trial (the weight’s mass and the three angles). The mashup is
shown in Fig. 2.

Fig. 2. The mashup displaying the data table in ThingWorx

4 Educational Implementation

As noted in the introduction, our research project explores an approach to teaching robot
intelligence technologies to high school and first-year engineering students by engaging
them in teaching robots to learn. The instructional design in the project is implemented
by the authors of this paper. A faculty staff member develops instruction in robot
construction and programming and conducts courses for school and freshman students.
Two more instructional designers are Technion mechanical engineering graduates
working at PTC and pursuing an additional degree in science and technology education
in our faculty. In the research project, which is part of their studies, they develop
instructional units on 3D modeling and internet communication in robotics and mentor
school and freshman students in these topics. The strategic planning, project
coordination and supervision of the instructional designers are carried out by a faculty
member who also guides school and freshman students in pedagogical concepts
relevant to robot learning. Consultancy regarding the modeling and communication
technologies and software systems was given by PTC.

Three first-year students majoring in mechanical engineering at MIT have performed
robot learning projects in our lab: two students in 2015 and one in 2016. The learning
activities in the 2015 project are discussed in [7]. In the 2016 project the student
constructed and programmed the robot; created and calibrated the virtual twin;
programmed the balance analysis procedure for weightlifting; implemented virtual trials
and transmitted the results to the cloud data table; presented her work at PTC and faculty
seminars; and wrote a project report.
We implement the proposed approach by teaching intelligent technologies through
outreach courses to students of a high school in Haifa that has recently established an
engineering systems program. In the second stage of our research project, during the
2015–2016 academic year, we conducted pilot courses in 3D modeling and robotics to
11th graders. In the modeling course, the students learned to design and analyze
computer models of robots using Creo.
In the robotics course, they constructed various robots using the Bioloid kit and
implemented different scenarios of reinforcement learning. The students applied the
knowledge acquired in our courses in the project developed for participation in the
FIRST Robotics Competition. The school team participated in the 2016 international
competition in St. Louis and won the Rookie All Star Award for “implementing the
mission of FIRST to inspire students to learn more about science and technology.”

5 Conclusion

The approach proposed in our research extends the scope of educational robotics,
which traditionally focuses on practices with preprogrammed robots. Results of our research
indicate that the challenge of developing learning robots can engage novice engineering
students in experiential learning of innovative concepts and technologies, such as
machine learning, parametric design, digital prototyping and simulation, connectivity
and internet of things. We found that those concepts and technologies are within the
grasp of understanding of freshman and HS students. In the next phase of the research,
we will practically implement our approach in an outreach course and we anticipate that
the evaluation of this experience will lead to the development of strategies for learning
with learning robots.


1. Peters, J., Lee, D.D., Kober, J., Nguyen-Tuong, D., Bagnell, D., Schaal, S.: Robot learning.
In: Siciliano, B., Khatib, O. (eds.) Springer Handbook of Robotics, pp. 357–394. Springer
2. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge
3. Kormushev, P., Calinon, S., Caldwell, D.G.: Reinforcement learning in robotics: applications
and real-world challenges. Robotics 2(3), 122–148 (2013)
4. Nguyen-Tuong, D., Peters, J.: Model learning for robot control: a survey. Cogn. Process.
12(4), 319–340 (2011)

5. Reckhaus, M., Hochgeschwender, N., Paulus, J., Shakhimardanov, A., Kraetzschmar, G.K.:
An overview about simulation and emulation in robotics. In: Proceedings of SIMPAR, pp.
365–374 (2010)
6. Kehoe, B., Patil, S., Abbeel, P., Goldberg, K.: A survey of research on cloud robotics and
automation. IEEE Trans. Autom. Sci. Eng. 12(2), 398–409 (2015)
7. Verner, I., Cuperman, D., Krishnamachar, A., Green S.: Learning with learning robots: a
weight-lifting project. In: Robot Intelligence Technology and Applications, vol. 4, pp. 319–
327. Springer (2017)
8. Harada, K., Kajita, S., Saito, H., Morisawa, M., Kanehiro, F., Fujiwara, K., Kaneko, K.,
Hirukawa, H.: A humanoid robot carrying a heavy object. In: Proceedings of the IEEE
International Conference on Robotics and Automation, pp. 1712–1717 (2005)
9. Arisumi, H., Miossec, S., Chardonnet, J.R., Yokoi, K.: Dynamic lifting by whole body motion
of humanoid robots. In: IEEE/RSJ International Conference on Intelligent Robots and
Systems, pp. 668–675 (2008)
10. Rosenstein, M.T., Barto, A.G., Van Emmerik, R.E.: Learning at the level of synergies for a
robot Weightlifter. Robot. Auton. Syst. 54(8), 706–717 (2006)
11. Kuo, C.H., Kuo, Y.C., Chen, T.S.: Process modeling and task execution of FIRA weight-
lifting games with a humanoid robot. In: Conference Towards Autonomous Robotic Systems,
pp. 354–365. Springer, Heidelberg (2012)
12. Michieletto, S., Tosello, E., Pagello, E., Menegatti, E.: Teaching humanoid robotics by means
of human teleoperation through RGB-D sensors. Robot. Auton. Syst. 75, 671–678 (2016)
13. Anderson, J., Baltes, J., Cheng, C.T.: Robotics competitions as benchmarks for AI research.
Knowl. Eng. Rev. 26(1), 11–17 (2011)
14. Ahlgren, D.J., Verner, I.M.: Socially responsible engineering education through assistive
robotics projects: the RoboWaiter competition. Int. J. Soc. Robot. 5(1), 127–138 (2013)
15. Verner, I.M., Cuperman, D., Cuperman, A., Ahlgren, D., Petkovsek, S., Burca, V.:
Humanoids at the assistive robot competition RoboWaiter 2012. In: Robot Intelligence
Technology and Applications 2012, pp. 763–774. Springer, Heidelberg (2013)
Interactive Platform for Embedded Software Development

Galyna Tabunshchyk1 (✉), Dirk Van Merode2, Peter Arras3, Karsten Henke4, and Vyacheslav Okhmak1

1 Zaporizhzhya National Technical University, Zaporizhia, Ukraine
2 Thomas More Mechelen-Antwerpen, Mechelen, Belgium
3 Faculty of Engineering Technology, KU Leuven, Leuven, Belgium
4 Integrated Communication Systems Group, Ilmenau University of Technology, Ilmenau, Germany
Abstract. This paper describes a didactical system aimed at supporting remote
experiments in developing software for embedded systems. The system is based
on the Raspberry Pi, which provides a variety of possibilities at low cost. Demo
experiments and possibilities for learning software development for embedded
systems are described.

Keywords: Remote experiments · Raspberry Pi · Reliability study

1 Introduction

The Internet of Things (IoT) has transformed from just a definition into a global
approach to the functioning and controlling of things in the world. Things refer to anything
which is used, controlled, measured and is connectable to the Internet. IoT refers to all
kinds of devices and (powerful) microcontroller systems that can read sensors, do some
preliminary digital signal processing and send output over the Internet to a variety of
users, be they other machines or human users. The emergence of very powerful multi-core
microcontrollers, large working memories and a wide variety of commercial
off-the-shelf sensors enables this new, exciting and challenging market. It is projected
that the average number of microcontrollers per person is rapidly growing and will
continue to grow in the next few years [1]. In its Internet of Things report, BI Intelligence
projects there will be 34 billion devices connected to the internet by 2020 [2].
It should be clear that there are great job opportunities for specialists in this specific
high-skilled field of expertise. These specialists should have a profound knowledge of
both the hard- and software aspects of the system, of interfacing with sensors, of using
embedded operating systems or real-time operating systems, and also of networking.

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_30

The task for higher educational institutions (HEIs) is to deliver to the labor market
highly skilled engineers and developers who have the knowledge to design, build, operate
and troubleshoot these devices. Especially in the Industrial Internet of Things,
where these systems are deployed in an industrial environment to run process-critical
applications, quality issues of the combined hard- and software become
extremely important.
It is the task of HEIs to provide the most efficient and effective means for learning
the necessary skills. To acquire skills, technology education at any level requires
experimenting and practicing in labs. For embedded and digital systems, remote labs
offer the possibility to learn about these new features (IoT) by using them. The main
characteristics of a robust embedded system are reliability, availability, fail-safety and
dependability [3]. For this reason it is obviously important to teach students techniques
not only to use but also to develop reliable software for embedded systems.
This paper describes the development of remote experiments for the study of
embedded systems and the reliability of the embedded software.

2 Embedded Systems Software Development Tasks

To come to a working and reliable embedded system (ES), a great variety of tasks
has to be solved. Besides the hardware design, there are different software tasks which
need to be implemented in the embedded system. In this paper we only look at some
aspects of software development.
We focus on two main tasks which an ES engineer needs to solve: the manipulation
of data elements on different levels, and the collection and processing of data.
Any lab – including remote labs – should offer enough possibilities for students to
experiment and offer measurable learning outcomes associated with experimenting. In
other words, care should be taken that the remote lab is more than a demonstration lab,
but a real experiment – although controlled from a distance.
When developing remote experiments as a teaching/learning aid, one should bear in
mind the same questions as when developing any other didactical method: namely, think
carefully about the learning outcomes and the teaching approach. The learning outcomes
point out what and how students will need to learn, and also indicate how to evaluate [4].
These observations clearly show that remote labs not only have advantages but also
cause many challenges when constructing remote experiments.
The advantage for students is clear: the 24/7 availability to experiment and
repeat experiments can motivate students to achieve deeper learning on the topics. The
challenges in constructing the lab are to make it user-friendly, efficient in
achieving the learning outcomes, and motivating and attractive to students. Another
major challenge is distant evaluation of, and feedback to, the students on the mistakes
and the good and bad practices they used [3].
In order to improve teaching of diagnostic methods for embedded systems, a remote
lab (Interactive Lab Platform (ILP)) that examines reliability problems in real time was
constructed [4] based on the Raspberry Pi platform, which we named Informational
Systems on Reliability Tasks (ISRT). The Raspberry Pi is a basic embedded system, and as
a single-board computer it is often used in cyber-physical systems and IoT applications.
Possible operating systems for the Raspberry Pi are Debian Linux in the Raspbian Jessie
distribution or Ubuntu Linux. The Raspberry Pi has input/output pins for low-level control,
and as it runs Linux, a web server and databases can be installed on it. For these reasons
and its extended range of possibilities, the platform was chosen as the basis for the
remote experiments. This also allows us to collect data for further analysis.
In Sect. 3, the platform architecture and the different experiments are briefly
described.

3 Interactive Lab Platform Description

3.1 Architecture of the Platform

The heart of the ISRT design is a Raspberry Pi Model B. The ISRT is also equipped
with an expansion board [4] developed at Thomas More Mechelen-Antwerpen (TMMA)
University College, a MI0283QT Adapter v2, Wi-Fi and BLE4 adapters for signal variation
tasks, a camera for video transmission tasks, and a display for online compilation of the
programs and results overview.
The program is designed on the principles of MVC (Model-View-Controller). For
software development the following elements were used: the NodeJS platform with an
integrated auxiliary module [6]; the Express framework; JavaScript as the programming
language; and the bcm2835 library [5].

3.2 Platform Possibilities

The outcomes of the remote labs are reached with different exercises. For a first
exploration of the system there are demo examples.
Next there are assignments on ISRT software development and checking.

ISRT Demo Examples. For a first exploration of the hardware architecture, which can
be built with a Raspberry Pi as server, four demo modules were developed, realizing
different ways of using the ISRT system:

3.2.1 Manipulation with LEDs on the Expansion Board

In this demo the Raspberry Pi is fitted with the expansion board [5]. The students' task
is to convert a number from one numeral system into another (binary to hex, hex to binary,
oct to binary). A camera shows the expansion board's display containing the number
in hex or oct, while the number in the binary system is displayed by the LEDs. So students
have a visual for both the task and the solution. The complexity of the tasks progressively
increases with each step, and the next task is reachable only after a correct answer to the
conversion. The time the student spends on each number is recorded.
The main outcome of this demo experiment is to provide students with knowledge
of the bcm2835 library and its possibilities for LED manipulation.

This experiment can also be used for the assessment of the knowledge of first year
bachelor students in the binary system.
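The conversion check behind this demo can be sketched host-side. The helper names below are ours, not the lab's actual API, and an eight-LED display width is assumed:

```python
def to_binary(value: str, base: int) -> str:
    """Convert a hex/oct/decimal string to the binary pattern shown on the LEDs."""
    n = int(value, base)
    return format(n, "08b")  # eight LEDs on the expansion board (assumed width)

def check_answer(value: str, base: int, student_bits: str) -> bool:
    """True if the student's binary answer encodes the same number as the task."""
    return int(student_bits, 2) == int(value, base)
```

For example, `to_binary("3f", 16)` yields `"00111111"`, which is exactly the LED pattern the camera lets the student verify.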

3.2.2 Manipulation with a Light Sensor

The expansion board [5] also contains light and temperature sensors. This demo
experiment allows students to change the distance between a light source and the sensor,
to measure the luminosity, and to build a chart representing the relation between
distance and luminosity.
The main outcome of this demo experiment is to provide students with knowledge
of storing and processing information from different types of sensors.
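The distance–luminosity chart students build should follow the inverse-square law for an ideal point source. A small sketch for checking that trend over recorded samples; the data format and tolerance are our assumptions, not part of the lab software:

```python
def inverse_square_constant(samples):
    """For an ideal point source, luminosity * distance^2 is constant.
    Return the mean of that product over the recorded (distance, lux) samples."""
    products = [lux * d ** 2 for d, lux in samples]
    return sum(products) / len(products)

def follows_inverse_square(samples, tolerance=0.15):
    """True if every sample's lux * d^2 stays within `tolerance` of the mean."""
    k = inverse_square_constant(samples)
    return all(abs(lux * d ** 2 - k) <= tolerance * k for d, lux in samples)
```

In practice, readings that fail this check point students toward ambient light or sensor-saturation effects worth discussing.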

3.2.3 Face Detection Demo

The face detection demo lets students measure the time needed by a face detection
algorithm based on the OpenCV Python libraries [8]. There are two possibilities: to work
either with the Raspberry Pi Professional Infrared Camera OV5647 (internal) or with a
standard web camera (external) (Fig. 1).

Fig. 1. Face detection demo

Experiments for face recognition are developed with the standard OpenCV library.
The main study outcomes are knowledge of the standard OpenCV library and of the
influence of the type and strength of the lighting and the type of camera on the time delay
in facial recognition. Understanding delay and execution times is very important for real-time
applications in general, and for video streaming of live pictures in particular.
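Measuring the detection delay boils down to timing the detector call. A minimal, library-agnostic sketch; the OpenCV lines in the comment show how it would be applied, assuming the Haar cascade file shipped with OpenCV:

```python
import time

def timed(fn, *args, repeats=5, **kwargs):
    """Call fn `repeats` times; return (last result, mean seconds per call)."""
    start = time.perf_counter()
    for _ in range(repeats):
        result = fn(*args, **kwargs)
    elapsed = (time.perf_counter() - start) / repeats
    return result, elapsed

# With OpenCV installed, students would time the bundled frontal-face cascade, e.g.:
#   import cv2
#   cascade = cv2.CascadeClassifier(
#       cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
#   faces, seconds = timed(cascade.detectMultiScale, gray_frame)
```

Averaging over several repeats smooths out scheduler jitter, which matters on a loaded Raspberry Pi.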

3.2.4 GSM Module Manipulation (Fig. 2)

One of the common tasks in ES is to provide 24/7 access to remotely working systems. To
be robust, such a system should provide access through all possible protocols:
Wi-Fi, BLE, GSM.

Fig. 2. GSM demo

For the GSM module manipulation the SIM900 was provided. Students can send an
SMS through a Ukrainian provider and can show the last SMS sent to the GSM
module (Fig. 2).
The main outcome is that students understand the communication pipeline they use
and how it is affected by the different components of the system.
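Sending an SMS through the SIM900 uses its standard text-mode AT command sequence. A sketch of the bytes a student's program would write to the module's serial port; the phone number is a placeholder, and the actual serial I/O (e.g. with pyserial) is omitted:

```python
CTRL_Z = b"\x1a"  # terminates the message body in SMS text mode

def sms_send_sequence(number: str, text: str):
    """Return the ordered byte strings to write to the SIM900's serial port."""
    return [
        b"AT+CMGF=1\r",                    # select SMS text mode
        f'AT+CMGS="{number}"\r'.encode(),  # start a message to `number`
        text.encode() + CTRL_Z,            # body, terminated by Ctrl-Z
    ]
```

Between writes, a real program waits for the module's prompt and final `OK`, which is where students observe the pipeline delays discussed above.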

ISRT Software Development. Students can practice their skills on the remote system
and have assignments on the platform for formal evaluation.
The assignment is to control different remote experiments with a self-written program.
This task is performed in the built-in editor on the Raspberry Pi. The programming mode
allows users to create and run applications developed in C++ and Python on
the Raspberry Pi platform directly from the panel (Fig. 3).

Fig. 3. Remote software development

Access to the software development interfaces of the ISRT is allowed only to
registered users. As for all processes, a log file is provided for control and to check
whether students participated.
After login, the user gets access to the programming page, which lists all of the
user's stored programs. Students can edit previously developed programs or create new
ones from scratch. After the user chooses a file, the code editor is shown and the user
can write or edit the required code. When the “compile” button is selected, the program
is sent to the server. If compilation is successful, the user is forwarded to the output
page. If there are errors, they are displayed in the code editor in a red frame. On the
output page it is possible to execute the program, clear the output screen, see the
real-time video of the experiment and return to editing.
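For Python submissions, the server-side compile step can be sketched with Python's built-in `compile()`. The function name and return shape are our invention; the C++ path would call g++ through a subprocess and relay its stderr in the same request/response shape:

```python
def check_python_source(code: str, filename: str = "student.py"):
    """Compile-check a submitted Python program.
    Returns (True, "") on success, or (False, message) for the editor's red frame."""
    try:
        compile(code, filename, "exec")
        return True, ""
    except SyntaxError as err:
        return False, f"{err.filename}:{err.lineno}: {err.msg}"
```

The `(ok, message)` pair maps directly onto the two outcomes described above: forwarding to the output page, or showing the red error frame.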
This ISRT lab allows access to the laboratory equipment 24/7.
All developed experiments are used in the bachelor and master courses of the
Embedded System Software Development modules at Zaporizhzhya National Technical
University.
Examples of the tasks are:
– to develop a program in Python for adding binary numbers and displaying the results
of the addition on the display of the TMMA expansion board;
– to create a system loop and measure the Mean Time To Failure (MTTF) of the SIM900;
– to create a program for face detection and compare its response time to the same
algorithms from the OpenCV library (Fig. 4).
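The mean-time-to-failure measurement in the second task reduces to averaging the observed times to failure collected by the system loop. A minimal estimator; the interface is ours:

```python
def mean_time_to_failure(failure_times):
    """MTTF estimate: the mean of the observed operating times until failure.
    `failure_times` is a list of durations (e.g. hours) gathered by the test loop."""
    if not failure_times:
        raise ValueError("at least one observed time to failure is required")
    return sum(failure_times) / len(failure_times)
```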

Fig. 4. Remote LED manipulation example

For the master students, the idea of system re-use is implemented. Master
students have the task to develop a measurement system for a certain defined purpose
(e.g. ecological measurements, climate control measurements, etc.). They can (re)use the
templates developed on the system for the development of their own personal
measurement system, providing reliability tests for the specified lab hardware.

4 Conclusion

Questions of software and hardware reliability are of great importance. For embedded
systems this is a challenge, as soft- and hardware reliability have to be addressed
simultaneously. For the task of building reliable software, a low-cost system was developed
which allows students to remotely practice skills in embedded software development
in C++ and Python for the Raspberry Pi. The developed remote lab makes a variety of tasks
possible on embedded platforms, from the basics of embedded systems to the calculation of
reliability characteristics. Future work is to provide built-in solutions for automated
testing of the plugged-in components.

Acknowledgement. This work was done in the frame of the international project Tempus 544091-
TEMPUS-1-2013-1-BE-TEMPUS-JPCR [DesIRE] [7]. We want to thank the EmSys Group from
Thomas More Mechelen-Antwerpen University College for their support of our work with the
TMMA expansion board.


References

1. IDC (2016): Worldwide Internet of Things Forecast Update 2016–2020. Doc # US42082716. Accessed 17 Jan 2017
2. Camhi, J.: BI Intelligence projects 34 billion devices will be connected by 2020, Internet of
Things report, BI Intelligence (2015). Accessed 17 Jan 2017
3. Kozik, T., Simon, M., Arras, P., Olvecky, M., Kuna, P.: Remotely controlled experiments. In:
Noga, H., Cernansky, P., Hrmo, R. (eds.) Nitra, Slovacia: Univerzity Konstantina Filozofa v
Nitre (2016)
4. Tabunshchyk, G.: Remote experiments for reliability studies of embedded systems. In:
Tabunshchyk, G., Van Merode, D., Arras, P., Henke, K. (eds.) Proceedings of XIII
International Conference on REV2016, Madrid, Spain, 24–26 February 2016, pp. 68–71.
UNED (2016)
5. Raspberry Pi b/b+ compatible expansion board.
6. Platform NodeJS.
7. Tabunshchyk, G., Van Merode, D., Petrova, O., Ochmak, V.: Multipurpose educational system
based on Raspberry Pi. In: Proceedings of the International Symposium on Embedded Systems
and Trends in Teaching Engineering, Nitra, Slovakia, 12–15 September, pp. 202–206 (2016)
8. Brahmbhatt, S.: Practical OpenCV (Technology in Action), 1st edn., 244 p. Apress, New York
9. DesIRE Project website. Accessed 17 Jan 2017
Integrated Complex for IoT Technologies Study

Anzhelika Parkhomenko (✉), Artem Tulenkov, Aleksandr Sokolyanskii,
Yaroslav Zalyubovskiy, and Andriy Parkhomenko

Zaporizhzhya National Technical University, Zaporizhzhya, Ukraine

Abstract. The Internet today is not only an environment for communication
and information exchange between people; it is also a tool and technology for
interaction between customers, “things” and devices. Industry therefore wants
to effectively design, create and deploy modern smart connected products, and
needs professionals with a broad range of knowledge and skills spanning business
intelligence, hardware engineering, information security and the Internet of
Things (IoT). Integrating IoT study into the curriculum is a relevant task, because
it gives real possibilities to enhance students' competitiveness in a rapidly changing
labor market.
The purpose of this work is the realization of practice-oriented approaches and
methods in the educational process of future IT professionals based on the REIoT
complex: a Smart House&IoT lab integrated with the remote lab RELDES. The complex
is based on the Arduino, Raspberry Pi and OpenHAB platforms. The OpenHAB REST API
has been used to integrate the remote lab RELDES with the Smart House&IoT lab.
This provides remote access to the Smart House&IoT lab experiments and their
states, as well as status updates and the sending of commands to experiments.
The application of the REIoT complex in the educational process gives students
all the advantages of remote experiments and the possibility of practical study of
different IoT platforms, sensors, actuators, protocols and interfaces.

Keywords: Internet of Things · Remote laboratory · Embedded Systems · Smart House · Practically-oriented teaching methods

1 Introduction

Today IoT technologies greatly extend the possibilities of collecting, analyzing and
distributing data, which humanity can transform into information and knowledge.
The Internet of Things opens new perspectives and gives more opportunities to increase
economic efficiency by automating processes in various fields of activity [1]. At the
beginning of 2016 the main segments for IoT application were Manufacturing, Energy and
Transportation [2]. The impact of the IoT on companies' activities is increasing. Smart,
connected products and the data they generate are transforming traditional business
functions, sometimes significantly [3].
Of course, many issues still must be solved: more and more new unique
IP addresses, autonomous power supply for sensors, IoT device certification, security,
protection of personal information, etc. [1].

© Springer International Publishing AG 2018

M.E. Auer and D.G. Zutin (eds.), Online Engineering & Internet of Things,
Lecture Notes in Networks and Systems 22, DOI 10.1007/978-3-319-64352-6_31

But even today, thanks to IoT technologies, the world is beginning to interact with
physical and virtual “things” and devices in a new way. For this reason, companies
increasingly need experts for the development and implementation of new technologies
for effective interaction between customers and “things”. Many companies have already
sharply felt the lack of such specialists.
Therefore, the task of teaching IoT technologies is relevant and focused on the formation
of students' knowledge in the field of modern IoT software and hardware, and of practical
skills in the application of existing platforms and devices.

2 Concept of REIoT Complex

Teaching IoT technologies is not a simple task. On the one hand, there are a lot of
websites, webinars, documentation and materials on this topic [4–12]. On the other hand,
even the interpretations of the term «IoT» in various papers sometimes differ
significantly. In addition, a huge number of different devices and platforms for the
creation of IoT systems are described, and it is sometimes problematic for students
and teachers to sort out the necessary information in this variety. Therefore, the
creation of an IoT teaching-learning environment for the training of future
professionals is an urgent task.
As the author of [13] does, we have accepted the following definition as a basis:
«The Internet of Things (IoT) is the interconnection of uniquely identifiable embedded
computing devices within the existing Internet infrastructure». In this case it is possible
to distinguish several practically-oriented educational tasks: learning Embedded
Systems (ES) software and hardware, analyzing existing approaches to ES design, and
studying the principles of ES interaction and connection to the Internet [14, 15].
ES can be used in conjunction with sensors and actuators for collecting
information and turning the collected or received information into actions. An ES
can also use a range of technologies for connecting with other devices or the
Internet (Wi-Fi, Bluetooth, RFID, Ethernet, GSM, CDMA and so on) [16]. Most
often, the concept of IoT is inseparably connected with something smart: Smart
House, Smart Transport, Smart City, Smart Businesses and so on [17]. Technologies
for creating a Smart House are interesting and useful for students, as they make
our lives more comfortable and safe and provide resource saving. That is why the
REIoT complex for the study and investigation of IoT technologies includes two
integrated parts: the Smart House&IoT laboratory and the laboratory RELDES (REmote
Laboratory for Development of Embedded Systems) [18–20]. The integration of a remote
laboratory into the educational process expands the opportunities of distance learning
and gives students all the advantages of remote experiments [21–27].
The REIoT complex architecture is shown in Fig. 1. The concept of the Smart House&IoT
lab's stand passed several stages of design (Fig. 2).

Fig. 1. REIoT complex architecture

Fig. 2. Concept of Smart House&IoT lab stand

Eventually we used the two most popular embedded platforms for IoT smart
devices, Arduino and Raspberry Pi [13, 17, 27], as well as OpenHAB (Open Home
Automation Bus). OpenHAB is software for integrating different home automation
systems and technologies into one general solution that allows comprehensive
automation rules and offers uniform user interfaces [28].

3 Features of REIoT Complex Realization

The Smart House&IoT lab set of experiments is based on Arduino NANO V3 boards
(Fig. 3). They are: Solar station, Lighting control, Climate control, Access control,
Safety control, Zone control, Presence control, Ventilation, and Illumination control.

Fig. 3. Smart House&IoT lab structure

The minicomputer Raspberry Pi performs the role of the lab server, with the
OpenHAB platform installed. The additional Modbus TCP Binding libraries are used to
connect it with the Arduino boards over a USB interface and the Modbus RTU protocol.
The serial line RS-232 is used for communication between the electronic devices.
Each Arduino board contains a program which handles input and output data.
Arduino uses an open Modbus master-slave library. With the help of this library,
holding registers of signed or unsigned type, available for writing and reading,
were created in each board (subsystem). The registers contain 8 or 16 elements of 16 bits
each. Thus, the structure for data exchange is created. Each Arduino board operates
as a slave: the OpenHAB platform reads or writes register data when it interrogates
the devices. Each element of a register corresponds to an individual parameter of a
sensor or actuator.
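Packing signed sensor values into 16-bit holding registers is the core of this data-exchange structure. A sketch of the two's-complement encoding involved; the helper names are ours, not part of the Modbus library:

```python
def to_register(value: int) -> int:
    """Encode a signed (or already-unsigned) 16-bit value as a holding register."""
    if not -32768 <= value <= 65535:
        raise ValueError("value does not fit a 16-bit register")
    return value & 0xFFFF  # two's-complement wrap for negative values

def from_register(register: int, signed: bool = True) -> int:
    """Decode a holding register back to its signed or unsigned value."""
    return register - 0x10000 if signed and register >= 0x8000 else register
```

A negative temperature reading, for instance, travels over Modbus as its two's-complement register value and is decoded back on the OpenHAB side.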
The laboratory includes an IP camera D-Link DCS-2121, which transmits a video
stream so users can view the experiments' current status. This IP camera is a complete
system with a built-in CPU and web server that transmits high-quality video with a
resolution of 1280 × 1024 pixels at 10 frames per second. The IP camera and computer
are connected via an Ethernet cable and interact using TCP/IP. A router D-Link
DIR-300 allows connecting to the laboratory via Wi-Fi and also adding devices using
network cables.
For the integration of the two REIoT complex parts, as well as for Smart House&IoT lab
administration, the OpenHAB REST API was used [29]. To access the Smart House&IoT lab
experiments, the RELDES administration system sends an HTTP GET request to the OpenHAB
REST API and receives the results in JSON format.
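The GET request to the OpenHAB REST API returns a JSON array of items, each carrying at least a name and a state. A sketch of the client side using only the standard library; the server address is a placeholder, not the lab's real one:

```python
import json
from urllib.request import Request, urlopen

OPENHAB_URL = "http://lab-server:8080"  # hypothetical address of the lab server

def parse_items(raw_json: str):
    """Turn the JSON returned by GET /rest/items into (name, state) pairs."""
    return [(item["name"], item["state"]) for item in json.loads(raw_json)]

def list_items():
    """Query the OpenHAB REST API for all items (requires network access)."""
    req = Request(OPENHAB_URL + "/rest/items",
                  headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return parse_items(resp.read().decode("utf-8"))
```

RELDES would iterate over these pairs to merge the Smart House&IoT items into its own experiment list.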

On receiving the list of experiments available in the Smart House&IoT lab, RELDES
includes them in the total list of experiments (Fig. 4), after which a queue, statistics
and other functions become available for them.

Fig. 4. RELDES interface

Subsequently, to carry out an experiment, RELDES calls REST methods of the
Smart House&IoT lab; for example, HTTP PUT and HTTP GET requests are used to change
the illumination level and to check the result.
To start the streaming broadcast, we used the utility ffmpeg [30].
FFmpeg is a set of free, open-source libraries that allow recording, converting and
transmitting digital audio and video in various formats. The ffmpeg library captures video
from our camera at a resolution of 1280 × 1024, encodes it to MPEG format at 10 fps with a
bitrate of 800 kbit/s, and then uses HTTP to send it to a local server, which forwards this
video stream to the end user.
To divide the video into blocks (to cut out and select the part of the video for each
experiment), the “crop” filter is used. As a result, we get a video fragment for
each experiment or group of experiments.
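The capture/encode/stream step corresponds to a single ffmpeg invocation. A sketch assembling its arguments with the parameters quoted above; the source device and sink URL are placeholders, not the lab's actual ones:

```python
def ffmpeg_stream_args(width=1280, height=1024, fps=10, bitrate="800k",
                       source="/dev/video0", sink="http://127.0.0.1:8090/feed"):
    """Build the ffmpeg argument list for the capture/encode/stream step."""
    return ["ffmpeg",
            "-i", source,                # capture device or camera URL
            "-s", f"{width}x{height}",   # output resolution
            "-r", str(fps),              # frames per second
            "-b:v", bitrate,             # video bitrate
            "-f", "mpegts",              # MPEG transport stream container
            sink]

def crop_filter(w, h, x, y):
    """ffmpeg crop filter selecting one experiment's region of the frame."""
    return ["-filter:v", f"crop={w}:{h}:{x}:{y}"]
```

Inserting `crop_filter(...)` before the output URL yields the per-experiment video fragments described above; the list would then be passed to `subprocess.run`.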

4 Students’ Knowledge and Practical Skills

The experiment «Solar station» is intended for studying the basics of solar energy and
the principles of solar battery power. The main components are a solar panel (6 V,
70 mA), a Li-Ion battery and a charger for Li-Ion batteries. The experiment «Climate
control» is intended for studying the basics of climate control using the digital temperature
and humidity sensor DHT-11 and the analog air quality sensor MQ135. The experiment «Zone
control», based on an optical pair with a photodiode, allows controlling the status of the
perimeter and reacts in case the perimeter is disturbed. The experiment «Presence
control» is intended for studying the principles of lighting systems and other systems that
monitor human presence in a room. The experiment is based on presence control
devices which detect motion using a pyroelectric sensor with a Fresnel lens (pyroelectric
motion sensor). The experiment «Ventilation» allows studying the basics of
recovery units and air flow control. The experiment is built using the electrical
load driver L298E, and it controls the loads with pulse-width modulation. The experiment
«Light control» is intended for studying the basics of remote control of load objects
using a relay module and pulse-width modulation on multiple channels. The main components
are an LED strip, an RGB LED strip and the load driver L293E. The experiment «Illumination
control» allows the students to perform light level control of different areas
using photoresistors (Fig. 5). The experiment «Access control» is built with an
RC522 RFID reader module, cards and key fobs. The experiment is intended for studying
the principles of access and authorization systems. The experiment «Safety control»
allows the students to study the principles of security systems that react to exceptional
situations such as motion in a controlled zone. The subsystem can be in the state of zone
control or the state of sensor indication. The experiment is based on a pyroelectric
motion sensor with a sound alarm. Several experiments can be performed simultaneously.
In this case, the students study the principles of interaction between subsystems,
define the process logic, create effects, evaluate the reaction of the elements and
analyze the results.

Fig. 5. Experiment «Illumination control» web-page

The students can use standard Actions, and create original ones, within Scripts and Rules
to execute OpenHAB-specific operations. For example, Actions such as Telegram and
my.openHAB can be used for Smart House&IoT lab event notification or
feedback. Connecting to Telegram allows sending messages to Telegram
clients from a bot client (for example, sending notifications to the user that the air
conditioner was turned on or off). With my.openHAB, students can connect to OpenHAB from
any device anywhere with an Internet connection, provide access to other
users, and keep all activities and events in the my.openHAB cloud.
The administration of events executed by OpenHAB can be realized with the MailControl
binding. It provides the possibility of receiving commands sent via email in JSON
format. The following types of commands can be sent: decimal, HSB, increase –
decrease, on – off, open – closed, percent, stop – move, string, up – down. Therefore,
one of the practical tasks for the students can be the development of desktop or mobile
applications for OpenHAB connection and control.
Integration with Google Calendar is also possible for the REIoT complex. Students
can create events and manage the system on a schedule (turning lighting or air conditioning
on/off, opening/closing the door at a predetermined time, etc.).
So, the students acquire the IoT technologies knowledge and practical skills by
performing remote control experiments, studying the descriptions of experiments,
carrying out various scenarios and