
FINAL 24/02/2015

1) What do the acronyms MDD and NFP stand for? How are they related? State one of the
challenges of MDD research.
2) How can reliability be enhanced?
3) Translate the paragraph that discusses increasing the application's fault tolerance.
4) When is the voting algorithm used? What is the relationship between the voter and the UML
state machines?
5) How can the system's erratic behavior be corrected?

Model-driven development (MDD) is an evolutionary step that shifts the focus of software
development from code to models, with the purpose of automating code generation from models.
MDD's emphasis on models also facilitates the analysis of nonfunctional properties (NFPs) of the
software under development (such as performance, scalability, reliability, security, safety, or
usability) based on its models. These NFPs ultimately determine the required quality of the
software. Among them, we address the dependability NFP in this paper. Dependability
encompasses availability, reliability, safety, integrity, and maintainability, as proposed by
Avižienis et al.

Many formalisms and tools for NFP analysis have been developed over the years, for example
queueing networks, stochastic Petri nets, stochastic process algebras, fault trees, and probabilistic
timed automata. One of the MDD research challenges is to bridge the gap between software
models and dependability analysis models. An emerging approach for the analysis of different
NFPs, dependability included, is shown in Figure 1. It consists of the following steps: (a) extend the
software models used for development with annotations describing dependability properties;
(b) automatically transform the annotated software model into the formalism chosen for
dependability analysis; (c) analyze the formal model using existing solvers; (d) assess the
software based on the results and give feedback to the designers. Such a modeling → analysis →
assessment approach can be applied to any software modeling language, be it general purpose,
such as the Unified Modeling Language (UML), or domain specific, such as AADL or SysML.
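
To make steps (a)-(d) concrete, the following Python sketch outlines the methodology as a chain of
function calls. All names and data structures in it (annotate_model, transform_to_analysis_model,
solve, assess, and the placeholder results) are illustrative assumptions, not part of any actual MDD
toolchain.

    # Illustrative sketch of the modeling -> analysis -> assessment loop.
    # Every name below is hypothetical; the code only mirrors steps (a)-(d).

    def annotate_model(software_model, dependability_annotations):
        """(a) Extend the software model with dependability annotations."""
        annotated = dict(software_model)
        annotated["annotations"] = dependability_annotations
        return annotated

    def transform_to_analysis_model(annotated_model):
        """(b) Transform the annotated model into the chosen formal model."""
        # A real toolchain would run a model-to-model transformation here,
        # e.g. UML plus DAM annotations into a Petri net.
        return {"formalism": "DSPN", "source": annotated_model}

    def solve(analysis_model):
        """(c) Analyze the formal model with an existing solver (stubbed)."""
        return {"steady_state_availability": 0.999}  # placeholder result

    def assess(results, requirements):
        """(d) Compare results against requirements; feedback for designers."""
        return {name: results.get(name, 0.0) >= target
                for name, target in requirements.items()}

    model = {"components": ["sensor", "application", "replica"]}
    annotations = {"replica": {"fault_rate_per_hour": 1e-4}}
    verdict = assess(solve(transform_to_analysis_model(annotate_model(model, annotations))),
                     {"steady_state_availability": 0.995})
    print(verdict)  # {'steady_state_availability': True}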

In the case of UML-based software development, the extensions required for NFP-specific
annotations are defined as UML profiles, which offer the additional advantage of being processable
by standard UML tools without any change to the tool support. The OMG adopted the MARTE
profile, which extends UML for the real-time domain, including support for the specification of
schedulability and performance NFPs. We use the dependability analysis and modeling (DAM)
profile to extend the UML models with dependability concepts and then transform the extended
UML model into a Deterministic and Stochastic Petri Net (DSPN) model. The results of the DSPN
analysis are converted back to the software domain and used to assess system dependability
measures.

Previous work formalized the methodology in Figure 1. In this paper, we rigorously apply this
formalization, through a case study, in the context of UML-based development. Section 3
carries out the modeling step of the methodology, Section 4 applies the transformation step,
Section 5 focuses on the analysis step, and Section 6 explores the assessment step.

2. Case Study: The Voter


According to Avižienis et al., the means developed to attain system dependability in the past 50
years can be grouped into four categories: fault tolerance, fault prevention, fault removal, and
fault forecasting. The case we present pertains to the fault tolerance field, which aims to improve
dependability by avoiding service failures in the presence of faults.

Fault tolerance provides different well-known techniques mainly based on error detection and
system recovery. Voting as well as software and hardware replication are the techniques we use
here. Concretely, we present a voter mechanism whose purpose is to mask faults arising in
computations carried out with data acquired by a sensor.

We consider a sensor that monitors (part of) a generic plant, such as an industrial automation
system. The sensor periodically sends raw collected data to an application that carries out a heavy
and critical computation with it. We replicate the computation across different nodes with the
purpose of increasing the fault tolerance of the application. However, it can happen that one or
more of the replicas are affected by faults, that is, they do not complete their computations as
scheduled, perhaps due to a node failure, a memory leak, or another software bug. Our system
deals with this situation by implementing a voting mechanism that masks one fault, that is, the
system provides results despite the presence of a single fault.
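
As a toy illustration of how voting masks a single fault among three replicas, the Python sketch
below takes a simple majority over the replica outputs. Representing a failed replica as None and
the helper name majority_vote are assumptions made for this example; they are not taken from the
case study's design.

    from collections import Counter

    def majority_vote(replica_results):
        """Return the value agreed on by a majority of replicas, masking one fault.

        A replica that did not complete its computation (node failure, memory
        leak, or another software bug) is represented here as None.
        """
        valid = [r for r in replica_results if r is not None]
        if not valid:
            raise RuntimeError("no replica produced a result")
        value, count = Counter(valid).most_common(1)[0]
        if count < 2:  # with three replicas, a majority needs two matching results
            raise RuntimeError("no majority: the fault cannot be masked")
        return value

    # One faulty replica is masked; the system still delivers a result.
    print(majority_vote([42, None, 42]))  # -> 42
    # Two simultaneous faults cannot be masked with three replicas:
    # majority_vote([42, None, None])     # -> RuntimeError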

Voting algorithms are often used along with recovery mechanisms, which bring back the system to
a healthy state when the voting cannot be accomplished, that is, when the faults cannot be
masked. For the sake of simplicity, we will not consider recovery strategies in this example.

We propose an initial UML design of the voter consisting of a deployment diagram and a set of
state machines (UML-SMs). The design model illustrates: (i) how dependability techniques can be
modeled with UML behavioral diagrams, while DAM annotations introduce the dependability
parameters; (ii) how DAM leverages this design for dependability analysis purposes.

The deployment diagram, Figure 2(e), depicts the hardware nodes in which the identified software
components (sensor, application, and replicas) execute and also the communication networks
linking them. We consider a fully distributed system architecture to increase dependability. In fact,
the distribution of the components is a principle in dependability modeling.

The voter exhibits a discrete behavior for which UML-SMs are well suited. According to the UML
interpretation, an SM specifies the behavioral pattern for the objects populating a class, as in the
case of the UML-SM for the three voting replicas (Figure 2(c)). Alternatively, a UML-SM can also
specify the behavior of a software component, such as the application, voter, or sensor embedded
components.

3. Dependability Modeling

UML-SMs are widely used to pragmatically model the “correct” behavior of a system, that is, the
behavior in the absence of faults. However, dependability modeling also demands specifying the
system behavior under different fault assumptions and characterizing the system failures.
Furthermore, in the case of repairable systems, the repair and reconfiguration activities that
remove basic or derived failures from the system need to be modeled. In order to define the
system fault assumptions, a software engineer has to consider the following main issues: (1) which
components can be affected by faults and in which states, (2) the maximum number of faults that
can concurrently affect the system components, and (3) the complete fault characterization, such
as the fault occurrence rate.

Failure characterization consists of determining the failure modes and, in particular, the system
failure states.

UML does not provide sufficient capabilities for a complete and rigorous modeling of all the
aforementioned dependability concerns. However, the DAM profile augments a UML design with
annotations that target the dependability specification. Being constructed as a specialization of
MARTE, DAM ensures compatibility with the UML diagrams. The MARTE part of interest to DAM is
the one devoted to quantitative analysis, also known as GQAM. In fact, DAM specializes GQAM,
creating a framework for the specification and analysis of dependability.

3.1. State Machines Specification

Our UML-SM specification illustrates how the engineer can model specific dependability
techniques while describing the system's normal behavior. Concretely, we have leveraged UML-
SMs to propose a design for the voting mechanism and for the computation carried out by a
replica.

FINAL 09/12/2014
1) What is the 21st-century problem the text refers to? Who investigates it and how?
2) What is currently being done about this problem?
3) What is the main topic of the article? What were the authors seeking? What did they base
their work on?
4) Which of the proposed solutions refer to very small devices? Mention them and give as many
details as possible about one of them.
5) Translate the paragraph that refers to a comparison.

Sustainable and efficient utilization of available energy resources is perhaps the fundamental
challenge of the 21st century. Academic and industrial communities have invested significant
efforts in developing new solutions to address energy-efficiency challenges in several areas,
including telecommunications, green buildings and cities, and the smart grid.

Advanced modeling and simulation methodologies are an essential aspect of the comprehensive
performance evaluation that precedes the costly prototyping activities required for complex,
large-scale computing, control, and communication systems.

Currently, there is increased interest in using a multi-pronged approach to find solutions to the
energy problem, including alternative and renewable energy sources, communication and control
systems to enable new green and net-zero energy concepts for buildings and cities, energy-
efficient processing datacenters, green communications and energy-harvesting devices, and new
smart grid architectures and applications, among others.

IN THIS ISSUE

The cover features in this special issue describe the latest advances in modeling and simulation of
smart and green computing systems, which are critical to sustainable economic growth and
environmental conservation in several application domains.

In “Toward Uninterrupted Operation of Wireless Sensor Networks,” Swades De from the Indian
Institute of Technology and Riya Singhal from Oracle address the challenge of wirelessly
transferring radio frequency energy to nearby devices, thereby reducing the need for an
operational battery. To increase energy efficiency, the RF energy transfer occurs over multiple
hops, while the immediate neighbors always exchange the sensing data. The authors describe the
analytical modeling of the conditions under which multihop energy transfer is viable and efficient.
Such a network has a significant impact because it eliminates the need to periodically retrieve
sensors for maintenance and battery replacement.

In “Bond Graph Modeling for Energy-Harvesting Wireless Sensor Networks,” Prabhakar T. Venkata
and his colleagues from the Indian Institute of Science and Delft University of Technology, the
Netherlands, discuss novel approaches using bond graphs to accurately predict both sensor and
network lifetimes. In this context, a bond graph is formed by using energy bonds to link several
subsystems together, which in turn represent the dynamic system’s connections. Although each
subsystem can be independent, including the energy source, storage buffer, microcontroller,
radio, and wireless channel, each still requires a finite amount of energy for correct and error-free
operation.

“Fundamentals of Green Communications and Computing: Modeling and Simulation,” by Murat
Kocaoglu, Derya Malak, and Ozgur B. Akan from Koc University, Turkey, describes new metrics
such as a holistic carbon-footprint-based method for evaluating energy savings that looks at the
larger picture of a device’s operation, beginning with the resources spent to create it and including
the energy expended in keeping it active. The authors discuss a technique for quantifying
minimum energy consumption in networks and explore open issues related to a layered Internet
architecture, where energy consumption is calculated as transferred information bits. They also
review current strategies for simulation and standardization of green networks and comment on
best practices, while highlighting open issues that merit the research community’s further
attention.

In contrast with articles focusing on tiny sensors, “SimWare: A Holistic Warehouse-Scale Computer
Simulator,” by Sungkap Yeo and Hsien-Hsin S. Lee from Georgia Institute of Technology, delves
into energy-efficient design and simulation of datacenters, which typically exhibit a 20 percent
yearly increase in energy needs. To report the power and energy breakdown of a given datacenter,
SimWare offers a simulation environment for analyzing power consumption in a system’s servers,
cooling units, and fans. It also incorporates physical features of heat circulation and the effect of
ambient temperature. All this information helps to accurately predict actual energy usage and
effectively optimize corrective parameters for the system. The authors demonstrate SimWare’s
effectiveness by comparing its performance against traces obtained from publicly available
warehouse-scale models.

“Using Datacenter Simulation to Evaluate Green Energy Integration,” by Baris Aksanli, Jagannathan
Venkatesh, and Tajana Šimunić Rosing from the University of California, San Diego, presents an
interesting comparison of various datacenter simulation models. Since deploying a new energy-
management policy merely for verification purposes is both time-consuming and impractical, a
model that provides support for a quick comparison is valuable. Each method can focus on one
limited aspect, including the impact of dynamic power management, dynamic voltage and
frequency scaling, and quick job response time, but the methods must be comprehensively
evaluated using common criteria. The authors also present a case study illustrating how
researchers can use the GENSim simulator to design and evaluate green energy integration into
datacenters.

We anticipate that the solutions proposed to address the problems described in this issue’s cover
features will prove to be as interesting and exciting to the readers as they were for the guest
editorial team. We optimistically look forward to a technology-assisted greener and smarter world.

FINAL 28/07/2015
1) What is a data structure? How does it relate to the article?
2) Advantages and disadvantages of multicore chips.
3) Translate the paragraph that describes how "priority queues" work.
4) Explain what the problem is when several cores try to access the first item at the same time.
5) What alternative did Shavit's team suggest?

Parallelizing common algorithms

Every undergraduate computer-science major takes a course on data structures, which describes
different ways of organizing data in a computer’s memory. Every data structure has its own
advantages: Some are good for fast retrieval, some for efficient search, some for quick insertions
and deletions, and so on.

Today, hardware manufacturers are making computer chips faster by giving them more cores, or
processing units. But while some data structures are well adapted to multicore computing, others
are not. In principle, doubling the number of cores should double the efficiency of a computation.
With algorithms that use a common data structure called a priority queue, that’s been true for up
to about eight cores — but adding any more cores actually causes performance to plummet.

At the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel
Programming in February, researchers from MIT’s Computer Science and Artificial Intelligence
Laboratory will describe a new way of implementing priority queues that lets them keep pace with
the addition of new cores. In simulations, algorithms using their data structure continued to
demonstrate performance improvement with the addition of new cores, up to a total of 80 cores.

A priority queue is a data structure that, as its name might suggest, sequences data items
according to priorities assigned them when they’re stored. At any given time, only the item at the
front of the queue — the highest-priority item — can be retrieved. Priority queues are central to
the standard algorithms for finding the shortest path across a network and for simulating events,
and they’ve been used for a host of other applications, from data compression to network
scheduling.
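
For readers who want to see the data structure in action, the short Python sketch below uses the
standard library's heapq module, in which the item with the smallest priority value sits at the front
of the queue; the task names are invented for the example.

    import heapq

    # Items come out in the order of the priorities assigned when they were stored.
    # heapq keeps the smallest value at the front, so lower numbers mean higher priority.
    queue = []
    heapq.heappush(queue, (2, "send packet"))
    heapq.heappush(queue, (1, "handle interrupt"))
    heapq.heappush(queue, (3, "log statistics"))

    # Only the item at the front of the queue can be retrieved at any given time.
    while queue:
        priority, task = heapq.heappop(queue)
        print(priority, task)
    # 1 handle interrupt
    # 2 send packet
    # 3 log statistics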

With multicore systems, however, conflicts arise when multiple cores try to access the front of a
priority queue at the same time. The problem is compounded by modern chips’ reliance on caches
— high-speed memory banks where cores store local copies of frequently used data.

“As you’re reading the front of the queue, the whole front of the queue will be in your cache,”
says Justin Kopinsky, an MIT graduate student in electrical engineering and computer science and
one of the new paper’s co-authors. “All of these guys try to put the first element in their cache and
then do a bunch of stuff with it, but then somebody writes to it, and it invalidates everybody else’s
cache. And this is like an order-of-magnitude slowdown — maybe multiple orders of magnitude.”

Loosening up

To avoid this problem, Kopinsky; fellow graduate student Jerry Li; their advisor, professor of
computer science and engineering Nir Shavit; and Microsoft Research’s Dan Alistarh, a former
student of Shavit’s, relaxed the requirement that each core has to access the first item in the
queue. If the items at the front of the queue can be processed in parallel — which must be the
case for multicore computing to work, anyway — they can simply be assigned to cores at random.

But a core has to know where to find the data item it’s been assigned, which is harder than it
sounds. Data structures generally trade ease of insertion and deletion for ease of addressability.
You could, for instance, assign every position in a queue its own memory address: To find the fifth
item, you would simply go to the fifth address.

But then, if you wanted to insert a new item between, say, items four and five, you’d have to copy
the last item in the queue into the first empty address, then copy the second-to-last item into the
address you just vacated, and so on, until you’d vacated address five. Priority queues are
constantly being updated, so this approach is woefully impractical.

An alternative is to use what’s known as a linked list. Each element of a linked list consists of a
data item and a “pointer” to the memory address of the next element. Inserting a new element
between elements four and five is then just a matter of updating two pointers.
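
A minimal sketch of that pointer update, assuming a bare-bones Node class defined just for this
example:

    class Node:
        """A linked-list element: a data item plus a pointer to the next element."""
        def __init__(self, item, next_node=None):
            self.item = item
            self.next = next_node

    def insert_after(node, item):
        """Insert a new element right after `node` by updating two pointers:
        the new node's pointer (set to node's old successor) and node's pointer."""
        node.next = Node(item, node.next)

    # Build 1 -> 2 -> 3 -> 4 -> 5, then insert 4.5 between items four and five.
    head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
    fourth = head.next.next.next
    insert_after(fourth, 4.5)  # no other elements move, unlike the array approach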

Road less traveled

The only way to find a particular item in a linked list, however, is to start with item one and follow
the ensuing sequence of pointers. This is a problem if multiple cores are trying to modify data
items simultaneously. Say that a core has been assigned element five. It goes to the head of the
list and starts working its way down. But another core is already in the process of modifying
element three, so the first core has to sit and wait until it’s done.

The MIT researchers break this type of logjam by repurposing yet another data structure, called a
skip list. The skip list begins with a linked list and builds a hierarchy of linked lists on top of it. Only,
say, half the elements in the root list are included in the list one layer up the hierarchy. Only half
the elements in the second layer are included in the third, and so on.

The skip list was designed to make moving through a linked list more efficient. To find a given item
in the root list, you follow the pointers through the top list until you identify the gap into which it
falls, then move down one layer and repeat the process.
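
The sketch below illustrates that top-down search on a skip list represented, for simplicity, as a
list of sorted Python lists with the root list first; it is meant only to show the descent through the
layers, not the researchers' implementation.

    import bisect

    def skip_list_search(layers, target):
        """Look up `target` in a skip list given as sorted lists, root list first.

        Starting at the top layer, find the gap into which `target` falls, then
        move down one layer and repeat, resuming from the left edge of that gap
        rather than from the beginning of the list.
        """
        left = float("-inf")            # left edge of the gap found so far
        for layer in reversed(layers):  # top of the hierarchy first
            start = bisect.bisect_left(layer, left)            # resume inside the gap
            i = bisect.bisect_right(layer, target, lo=start)   # right edge of the gap
            if i > start and layer[i - 1] == target:
                return True             # found in this layer
            if i > start:
                left = layer[i - 1]     # tighten the gap before dropping down
        return False

    layers = [
        [1, 3, 4, 6, 7, 9, 12, 15],  # root list: every element
        [3, 6, 9, 15],               # roughly half the elements
        [6, 15],                     # roughly half again
    ]
    print(skip_list_search(layers, 9))  # True
    print(skip_list_search(layers, 8))  # False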

But the MIT researchers’ algorithm starts farther down the hierarchy; how far down depends on
how many cores are trying to access the root list. Each core then moves some random number of
steps and jumps down to the next layer of the hierarchy. It repeats the process until it reaches the
root list. Collisions can still happen, particularly when a core is modifying a data item that appears
at multiple levels of the hierarchy, but they become much rarer.

FINAL 12/2015
1- What is the topic the article deals with?
2- What does the "WikiLeaks" organization do?
3- Describe the stages "WikiLeaks" went through between 2010 and 2012.
4- Translate the paragraph that talks about the thousands of emails they received. PARAGRAPH 10!
"Despite its years-long lack..."
5- What other projects does the article mention? Why were they implemented? And what is
Assange's opinion about them?

WikiLeaks' Groundbreaking New Submission System for Your Secrets


It’s taken close to half a decade. But WikiLeaks is back in the business of accepting truly
anonymous leaks.

On Friday, the secret-spilling group announced that it has finally relaunched a beta version of its
leak submission system, a file-upload site that runs on the anonymity software Tor to allow
uploaders to share documents and tips while protecting their identity from any network
eavesdropper, and even from WikiLeaks itself. The relaunch of that page—which in the past
served as the core of WikiLeaks’ transparency mission—comes four and a half years after
WikiLeaks’ last submission system went down amid infighting between WikiLeaks’ leaders and
several of its disenchanted staffers.

“WikiLeaks publishes documents of political or historical importance that are censored or
otherwise suppressed. We specialise in strategic global publishing and large archives,” reads the
new page, along with the .onion url specific to Tor for a “secure site where you can anonymously
upload your documents to WikiLeaks editors.”

“We thought, ‘This is ready, it should be opened,'” WikiLeaks spokesperson Kristinn Hrafnsson told
WIRED in an interview. “We’re hoping for a good flow of information through this gateway.”

In a statement posted to the WikiLeaks website, the group’s founder Julian Assange wrote that the
new system is the result of “four competing research projects” launched by the group, and that it
has several less-visible submission systems in addition to the public one it revealed Friday.
“Currently, we have one public-facing and several private-facing submission systems in operation,
cryptographically, operationally and legally secured with national security sourcing in mind,”
Assange writes.

The long hiatus of WikiLeaks’ submission system began in October of 2010, as the site’s
administrators wrestled with disgruntled staff members who had come to view Assange as too
irresponsible to protect the group’s sources. Defectors from the group seized control of the leak
platform, along with thousands of leaked documents. Control of that leak system was never
returned to WikiLeaks, and the defectors eventually destroyed the decryption keys to the leaks
they’d taken, rendering them useless.

WikiLeaks vowed in 2011 to relaunch its submission system, announcing that the leaks page would
reappear on the one-year anniversary of its massive Cablegate release of State Department
documents. But that date came and went with no new submission system. In the following years,
Assange seemed to become preoccupied with WikiLeaks’ financial difficulties, including a lawsuit
against PayPal, Visa, Mastercard and Bank of America for cutting off payments to the group, as
well as his own legal struggles. Accusations of sex crimes in Sweden and fears of espionage
charges in the United States have left him trapped for nearly three years in the London embassy of
Ecuador, the country that has offered him asylum. The goal of getting WikiLeaks back into the
anonymous leak submission game was sidelined.

The group, and Assange in particular, has also become more focused on the modern surveillance
challenges to any truly anonymous leaking system. That, too, has delayed WikiLeaks’ willingness to
create a new target for intelligence agencies trying to intercept leaks. “If you ask if the submission
from five years ago was insecure, well, it would be today,” says Hrafnsson. “We’ve had to rethink
this and rework it, and put a lot of expertise into updating and upgrading it.”

Hrafnsson declined to comment on what new security measures WikiLeaks has put into place. He
was willing to say that the submission system has already been online—though not linked from
the main WikiLeaks site—for weeks as it’s been tested. “As always, we’ve wanted to make sure
we can deliver on the promise that people can give us information without being traced,” he says.
Though the site remains in “beta,” Hrafnsson adds that “we wouldn’t have made it available unless
we considered it to be as safe as it’s possible to be.”

Despite its years-long lack of a leak portal, WikiLeaks had continued to publish documents over the
last few years, never revealing where they got them. In some cases they appear to have been
directly shared with WikiLeaks by hackers, as was the case with the massive collections of emails
from the private intelligence firm Stratfor and the Syrian government. Or in other cases, the group
has simply organized and republished already-public leaks, as with its searchable index of the
emails stolen by hackers from Sony Pictures Entertainment.

But few of those leaks have been as significant as those it obtained while its submission system
was still online, most notably the leaks from Chelsea Manning that included millions of classified
files from the Iraq and Afghan wars as well as hundreds of thousands of secret State Department
communiqués.

In the years since WikiLeaks ceased to offer its own Tor-based submission system, others have
sought to fill the gap. Projects like GlobaLeaks and SecureDrop now offer open-source systems
that have replicated and improved on WikiLeaks’ model of using Dark Web servers to enable
anonymous uploads. SecureDrop in particular has been adopted by mainstream news sites such as
the New Yorker, Gawker, Forbes, the Guardian, the Intercept and the Washington Post.

In his statement on the WikiLeaks site, Assange notes that those projects are “both excellent in
many ways, [but] not suited to WikiLeaks’ sourcing in its national security and large archive
publishing specialities,” he writes. “The full-spectrum attack surface of WikiLeaks’ submission
system is significantly lower than other systems and is optimised for our secure deployment and
development environment.”

One former WikiLeaks staffer contacted by WIRED argues that with several more mainstream
outlets for leaks now available thanks to tools like SecureDrop, sources would be wiser to stay
away from WikiLeaks’ new submission system. “As a leaker…You’d have to be fucking insane to
trust Assange,” writes the former WikiLeaker, who asked for anonymity because his association
with WikiLeaks has never been publicly revealed. He points to WikiLeaks’ past decisions to publish
large troves of raw documents, rather than ones carefully filtered by journalists to avoid harming
innocent people. “Why would you go for Anarchist Punks Weekly instead of, say, the Guardian or
the Washington Post?”

FINAL 16/07/2014
1. What does the author set out to demonstrate in this article? What are these ventures used
for?
2. Explain the concept of “Big Data” and give an example of its use.
3. Define the concept of “IoT” and state one of its most important uses.
4. Translate the paragraph about “Big Data” and aero engine design.
5. Summarize the example of IoT use with wireless sensors.

When the Internet of Things meets Big Data


The ‘Internet of Things’ and ‘Big Data’ hold the promise of one big interconnected world that will
allow people to harmonise with machines and gadgets. Boris Sedacca investigates whether it is all
vapourware or whether there are real life examples in practice.

Manufacturing has been transformed by the size and complexity of machines on the factory floor
today, swathed with sensors that gather voluminous data to keep them ticking over. Aside from
internal machine control purposes or legal requirements to gather data in industries like
pharmaceuticals, captured data can also be analysed to predict failures for example.

Big data needs huge storage and supercomputers to be effective. Andrew Dean, pre-Sales
manager at OCF, has been involved with data storage integration on the University of Edinburgh’s
High End Computing Terascale Resources (HECToR) Cray XE6 supercomputer. This holds eight
petabytes of data with another 20 petabytes of disk backup and 92 petabytes of tape for less
frequently used files.

HECToR is funded by the Engineering and Physical Sciences Research Council (EPSRC), and the
Natural Environment Research Council (NERC). Scientists currently store highly complex
simulations on site at Edinburgh. OCF supplies data storage and processing to the automotive,
aerospace, manufacturing, oil & gas and pharmaceutical industries.

“We use IBM’s super-fast General Parallel File System (GPFS) software, which allows us to combine
multiple storage arrays into one file system, using four large DataDirect Networks (DDN) clusters,”
reveals Dean.

High performance computing is where Big Data happened before it was called Big Data, according
to Dr James Coomer, Senior Technical Advisor at DDN, where for years it has been quite normal to
think in terms of petabytes of storage at speeds of hundreds of gigabytes per second.

“In oil and gas industries, one of the biggest issues is gathering seismic data during exploration in
deserts and oceans, where hundreds or thousands of ultrasound beacons and sensors gather huge
amounts of sub-surface data,” Coomer exemplifies.

Data de-convolution

“This data needs to be de-convoluted and that requires two features of Big Data – huge data
ingest rates from the sensors and number crunching to map sections of land or oceans, in the
latter case with sensors trailing like serpents from ships. Some sensor data may have originated in
analogue or digital form, but by the time we see it, it is all digitised.

“Also in aerospace engine testing or Formula 1 car design crash testing, both of which are very
expensive, running simulations in advance in the latter case reduces the expense of a large
number of cars physically having to be crashed. Modelling accuracy is extremely high now in areas
like fluid dynamics around car bodies and wheels.

“In aero engine design, Big Data provides complex modelling of combustion rates and engine
noise. Hundreds of engineers and CAD designers can produce structural geometry data using
software like LS-Dyna, a popular finite element analysis (FEA) program, allowing them to chop up a
physical structure into numerous small pieces, each of which may need its own supercomputing
core.”

A parallel development to Big Data is the Internet of Things (IoT), which can be anything with an IP
address, preferably with an RFID tag. RF bandwidth has recently been freed up in the UK by the
switch from analogue to digital TV transmission, known as the license-exempt UHF TV White Space
spectrum from 470-790MHz.

In February 2013, Neul announced what it claims to be the world’s first TV White Space
transceiver chip, Iceni. Adaptive digital modulation schemes and error correction methods can be
selected according to the trade-off between data rate and range required for a given application.
Neul maintains its technology costs less than $5 in volume from 2012 onwards, going down to a $1
chip set by 2014. It claims a battery life of 15 years for low bandwidth machine to machine (M2M)
applications such as smart meters.

IoT provides the glue for scattered apparatus

The Internet of Things (IoT) is the name used for a collection of network enabled, often wireless,
devices which communicate with each other via online storage on the cloud. Centrally hosted
online cloud storage allows access to data from anything attached to the internet.

For example, distillation columns are controlled by temperature. As the hot hydrocarbons rise,
they cool to the point where they liquefy and can be drawn off as one or another petroleum
product: gasoline, benzene, kerosene, and so forth.

The use of inexpensive wireless temperature sensors in large quantities along the length of a
distillation column provides operators with a very large amount of data that they have never been
able to get before, and that can be used for discovering process bottlenecks and optimising
processes. Such process optimisation is enabled by the IoT, according to Advantech.

Impulse marketing manager Ben Jervis claims Advantech selected his company as the first in
Europe to promote the IoT. According to Jervis, one of the most important uses of the IoT is
hardware monitoring, for example the temperature of a boiler, where a thermometer located on
the outside of the boiler would usually be checked at regular intervals by an operative. The IoT
automates this process and wirelessly alerts the operative, remotely if required, of any
temperature fluctuations.

“The technology for IoT has been around for at least ten years, but is fairly recent in terms of how
it uses and reports on data to monitor hardware and predict failures before they happen,”
asserts Jervis.

“You are essentially giving an IP address to an inanimate object, for example where a sensor may
be attached to a power cable. You can monitor the frequency of the electricity running through it
and if it gets to a certain level where you know it is going to fail, you can replace it before it causes
unscheduled downtime.

“We have implemented such a solution for an electricity substation to avoid the risk of power
failure to homes and businesses in the area. Previously, if a cable failed, the utility would send an
engineer to replace it and get power up and running again, which could take anything up to four
hours depending on how far the engineer had to travel from his previous job.

“By connecting wireless sensors to each cable, the utility can now monitor the frequency of the
electricity in conjunction with computer software that alerts when certain frequency fluctuations
mean that the cable is going to fail, two days ahead of the event. The utility can then make
provision to ensure there is no power outage, or can carry out the work at 1am in the morning
when power outage is not as serious.”
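
As a rough illustration of the kind of rule-based alerting described above, the Python sketch below
flags a cable whose measured frequency drifts outside a tolerance band. The nominal frequency,
the tolerance, the readings, and the check_cable helper are all invented for the example and do not
reflect the utility's actual software.

    NOMINAL_HZ = 50.0    # UK mains frequency
    TOLERANCE_HZ = 0.5   # hypothetical band beyond which failure is predicted

    def check_cable(cable_id, readings_hz):
        """Alert the operative if recent frequency readings drift outside the band."""
        drifting = [r for r in readings_hz if abs(r - NOMINAL_HZ) > TOLERANCE_HZ]
        if drifting:
            # In a deployed system this would notify an operative, remotely if
            # required, early enough to schedule the replacement before failure.
            print(f"ALERT: cable {cable_id} drifting {drifting}; schedule replacement")

    # Simulated readings from wireless sensors on two cables.
    check_cable("feeder-07", [50.02, 49.98, 50.01])  # healthy: no alert
    check_cable("feeder-12", [50.03, 50.70, 51.10])  # drifting: alert raised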
