SMART MANUFACTURING
The History Behind Industry 4.0
The First Industrial Revolution
The First Industrial Revolution began in Britain toward the end of the 18th century (1760-1840) and introduced machines into production. It marked the shift from manual production to the use of steam engines and water power.
Agriculture benefited greatly, and the term "factory" came into common use. The textile industry, which constituted a huge part of the British economy at the time, was the first to adopt these methods and benefited from them the most.
However, this period of revolutionary change in industry came to an end with the start of World War I. Mass production itself continued, of course, but subsequent developments stayed within the same paradigm, and none of them can be called an industrial revolution.
Perhaps the third industrial revolution is more familiar to us than the earlier ones, since most people living today know industries that lean on digital technologies in production. The third industrial revolution is usually dated between 1950 and 1970.
It is often referred to as the Digital Revolution, or the Information Age, because it brought about the change from analog and mechanical systems to digital ones. The third revolution was, and still is, a direct result of the enormous development of computers and of information and communication technology.
The Definition of the Fourth Industrial Revolution and How It Is Different From the Third
In the Fourth Industrial Revolution, machines operate independently, or cooperate with humans, to create a customer-oriented production environment that constantly works on maintaining itself. The machine becomes an independent entity that is able to collect data, analyze it, and advise upon it.
The rapid changes in information and communication technology (ICT) have broken the boundaries between the virtual and the real world. The idea behind Industry 4.0 is to create a social network in which machines can communicate with one another, called the Internet of Things (IoT), and with people, called the Internet of People (IoP).
This way, machines can communicate with each other and with the manufacturers to create what we now call a cyber-physical production system (CPPS). All of this helps industries integrate the real world into a virtual one and enables machines to collect live data, analyze them, and even make decisions based upon them.
The Fourth Industrial Revolution and the Third Industrial Innovation Wave of the Industrial Internet
As mentioned, in the US, GE and a range of other industrial players (including non-
American ones who are also members of the “Plattform Industrie 4.0”) launched the
Industrial Internet Consortium.
The difference between Industry 4.0 and the Industrial Internet, however, is that,
originally, the Industrial Internet was seen as the third industrial innovation wave. So, a third
wave of innovation instead of a fourth revolution in the industry.
This only shows how relative such revolutionary labels are. The three Industrial Internet innovation waves, respectively, were:
The Industrial Revolution: the real one, more or less a combination of the first and second revolutions in the Industry 4.0 view.
The Internet Revolution: ‘computing power and the rise of distributed information
networks’.
The Industrial Internet: what is called the fourth industrial revolution in Industry 4.0.
And it is here that, in Industry 4.0 as well, we find those eternal hurdles that The Boston Consulting Group, among others, has identified.
This might still seem complex but, then again, cyber-physical systems are complex.
Moreover, the term isn’t new and is better known in an engineering and industry context.
It fits more in the Operational Technology (OT) side of the converging IT/OT world
which is typical in Industry 4.0 and the Industrial Internet. So, if you want to understand Industry
4.0 or the Industrial Internet, you’ll need an understanding of some essential operational,
production and mechanics terms.
Cyber-physical systems in the Industry 4.0 view are based on the latest control systems and embedded software systems, and they also have an IP address (here the link with the Internet of Things becomes clearer; strictly speaking the two are not the same, but they are certainly twins, as we will see in the next chapter).
They give rise to new possibilities in areas such as structural health monitoring, track and trace, remote diagnosis, remote services, remote control, condition monitoring, systems health monitoring, and so forth.
And it’s with these possibilities, enabled by networked and communicating cyber-
physical modules and systems, that realities such as the connected or smart factory, smart health,
smart cities, smart logistics etc. are possible as mentioned previously.
In the original definitions, going back over a decade, IP addresses were not specifically mentioned in cyber-physical systems.
In 2008, Professor Edward A. Lee from the University of California, Berkeley, defined
Cyber-Physical Systems as follows: “Cyber-Physical Systems (CPS) are integrations of
computation and physical processes. Embedded computers and networks monitor and control the
physical processes, usually with feedback loops where physical processes affect computations
and vice versa”.
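Lee's definition can be illustrated with a minimal sketch, assuming a deliberately simplified physical model: an embedded controller reads a simulated temperature (the physical process affecting computation) and switches a heater (computation affecting the physical process), closing the feedback loop. The numbers and names here are purely illustrative, not taken from any real system.

```python
# A minimal, hypothetical sketch of the feedback loop in Lee's CPS definition:
# physical processes affect computations and vice versa.

def simulate_cps(setpoint=70.0, steps=20):
    temperature = 20.0   # state of the simulated physical process
    heater_on = False    # actuator state chosen by the computation
    history = []
    for _ in range(steps):
        # physical process: heater adds heat, temperature drifts toward ambient
        temperature += (5.0 if heater_on else 0.0) - 0.1 * (temperature - 20.0)
        # computation: the sensor reading drives a simple on/off control decision
        heater_on = temperature < setpoint
        history.append(round(temperature, 2))
    return history

readings = simulate_cps()  # temperature climbs toward the setpoint over time
```

The loop body alternates between the two halves of the definition: the first line is the "physical process", the second is the "embedded computer" monitoring and controlling it.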
On his page on the Berkeley website, Professor Lee links to cyberphysicalsystems.org where you
find his definition and a CPS concept map in the form of a mind map where you can click the
various components to read more. For the German Industrie 4.0 academia and industry people,
CPS (and that bridging of cyber/digital and physical) was key in Industry 4.0.
Cyber-physical systems also include dimensions of simulation and twin models, smart analytics, self-awareness (self-configuration), and more. We have tackled some of these topics, including digital twins, previously.
Hopefully, the essence of the concept, context, and reality of the evolution towards cyber-physical systems has become a bit clearer now. Note: there is a difference between cyber-physical systems and cyber-physical manufacturing systems or cyber-physical production systems (CPPS), where we move from the technological component to the far more important process and application dimension.
Cyber-Physical Systems
As mentioned above, a cyber-physical system aims at the integration of computation and physical processes. This means that computers and networks are able to monitor the physical processes of manufacturing at each stage. The development of such a system consists of three phases.
The Internet of Things is what enables objects and machines, such as mobile phones and sensors, to "communicate" with each other, as well as with human beings, to work out solutions. The integration of such technology allows objects to work and solve problems independently. This is not entirely true, of course, as human beings can still intervene; in cases of conflicting goals, the decision is usually escalated to a higher level.
According to Hermann, Pentek, and Otto, "things" and "objects" can be understood as CPS. Therefore, the IoT can be defined as a network in which CPS cooperate with each other through unique addressing schemas.
It is easy to see that in today's world almost every electronic device is connected either to another device or to the internet. Yet with the huge development and diversity in electronic and smart devices, acquiring more and more of them creates complexity and undermines the utility of each added device.
Smartphones, tablets, laptops, TVs, and even watches are becoming more and more interconnected, but the more devices you buy, the harder it becomes to recognize the added value of the last one. The Internet of Services aims to create a wrapper that ties all connected devices together, simplifying the process so that users get the most out of them. It is the customer's gateway to the manufacturer.
Internet of Things and cyber-physical systems: similar characteristics
The main Internet of Things use case in manufacturing, from a spending perspective, concerns manufacturing operations.
Cyber-physical systems are also equipped with sensors, actuators, and all the other elements which are part of the Internet of Things. Cyber-physical systems, just like the Internet of Things, need connectivity. The exact connectivity technologies which are needed depend on the context in both cases.
The Internet of Things consists of objects with embedded or attached technologies that enable them to sense, collect, and send data for a specific purpose. Depending on the object and the goal, this could be data regarding movement, location, the presence of gases, temperature, or the 'health' condition of devices; the list is endless. The data as such are just the beginning; the real value starts when analyzing and acting upon them, within the scope of the IoT project's goal.
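The sense, collect, and analyze pattern described above can be sketched in a few lines. Everything here (device names, field layout, thresholds) is hypothetical and purely illustrative; a real deployment would transmit such readings over a network protocol and analyze them against the goals of the specific IoT project.

```python
# Hypothetical sketch of the IoT sense -> collect -> analyze pattern.
# All names and threshold values are illustrative, not from any real device.

def make_reading(device_id, kind, value, unit):
    """A single sensed data point, ready to be sent for a specific purpose."""
    return {"device": device_id, "kind": kind, "value": value, "unit": unit}

def analyze(readings, limits):
    """The real value starts with analysis: flag out-of-range readings."""
    alerts = []
    for r in readings:
        low, high = limits[r["kind"]]
        if not (low <= r["value"] <= high):
            alerts.append((r["device"], r["kind"], r["value"]))
    return alerts

readings = [
    make_reading("press-01", "temperature", 81.5, "C"),
    make_reading("press-01", "vibration", 0.4, "mm/s"),
]
limits = {"temperature": (0, 75), "vibration": (0, 2.0)}
alerts = analyze(readings, limits)  # the temperature reading is out of range
```

Acting on the resulting alerts (scheduling maintenance, stopping a machine) is where an IoT project delivers its value.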
IoT devices can also receive data and instructions, again depending on the use case. All of this applies to cyber-physical systems as well, which are essentially connected objects. There are more shared characteristics, but this already shows how much the two have in common.
Moreover, the new capabilities which are enabled by cyber-physical systems, such
as structural health monitoring, track and trace and so forth are essentially what we call
Internet of Things use cases.
In other words: what you can do with the Internet of Things. Some of them are used in a
cross-industry way, beyond manufacturing.
Below are two examples of CPS-enabled capabilities we tackled previously, and how they really are IoT use cases.
Track and trace possibilities in practice lead to multiple IoT use cases in, among others,
healthcare, logistics, warehousing, shipping, mining and even in consumer-oriented Internet of
Things use cases. There are ample applications of the latter with numerous solutions and
technologies. You can track and trace your skateboard, your pets, anything really, using IoT.
Smart factories are a key feature of Industry 4.0. A smart factory adopts a so-called calm system: a system that is able to deal with both the physical and the virtual world. Such systems are called "background systems" and in a way operate behind the scenes. A calm system is aware of the surrounding environment and the objects around it, and it can also be fed with soft information regarding the object being manufactured, such as drawings and models.
The Smart Factory also goes by other names which mean basically the same thing. Popular alternatives include "Smart Manufacturing" and "Intelligent Factory," though the names most in use are "Smart Factory" and "Factory of the Future."
There are several ways to define and describe the Smart Factory. Some describe it as a
factory that will be comprised of systems that are significantly more intelligent and dynamic than
the systems currently in use in manufacturing processes. Others say it is a factory that is certainly
more flexible, and operates under the concept that the processes will be interlinked across
networks.
When we speak of factories getting smart, it means that these manufacturers will focus more on utilizing their best talent and on building industrial infrastructure designed to handle increased connectivity among all the sensors and devices involved in a complete production line. This connected factory design, which has become associated with the phrase "Smart Factory," is expected to increase growth and add value across the entire production chain.
Factories of today follow a certain system, depending on the nature of their operations and functions, and they are organized accordingly. In the Smart Factory, however, which is designed to be more intelligent and flexible, organization is handled differently. The difference lies in the use of networking, which also has a broader application: the organization is not done on a per-process basis; instead, entire production chains are connected with each other.
In the usual system, for example, suppliers and logistics work separately from the product development process. In the same manner, factory and production planning operates independently of enterprise resource planning. Within the Smart Factory, these functions all affect one another.
The present setup of factories also follows a fixed program of operations, which can be very limiting and restrictive: processes are not allowed to deviate from what has been previously planned or programmed, even if doing so would make production more efficient. In a setting that adopts the Smart Factory, processes can be easily improved because they, along with the machinery and equipment used within them, are designed for self-optimization. Decision-making is also quick, since autonomy is granted within the processes.
There are many reasons why manufacturers would feel compelled to make that transition
to smart manufacturing and set up their own “factory of the future”.
Smart manufacturing enables real-time collection of data, real-time analysis of the same,
and consequently real-time decision-making.
This is considered the ultimate reason why anyone would want a Smart Factory. Taking advantage of connectivity, networks, and communication sensors, to name a few, cuts through the usual "long-way-around" operations. In any manufacturing process, time is money, and time spent unnecessarily on a single phase of the process is money wasted.
Streamlining operations
How many times have we come across manufacturers hitting themselves over the head
once they realize they are spending a lot more than they should on production processes that are
not really necessary, or could be simplified? Smart manufacturing makes streamlined operations
possible, so the company will have more savings.
Efficiency
The overall efficiency of the production process, and of the organization as a whole, will greatly benefit from the application of smart manufacturing concepts. This is closely related to the results of streamlined operations. Perhaps this is most apparent in energy consumption: for example, the energy consumption of an old production process can be cut almost in half once the whole process has been reevaluated and shortened considerably by doing away with stages that are not required, or by combining complementary phases.
It is a fact that not all factories are really employee- or worker-friendly. It could be that
the workers directly involved in the production process find their work environment unsafe or
uncomfortable. With the streamlining of operations, efficiencies are improved and management
will have more time to focus on the welfare of its employees, particularly their safety and
comfort. Thanks to automation, tasks that were usually performed manually will also be more
manageable.
Similarly, the tools and machines that will be used in the production process are also
designed to reduce worker fatigue, as well as pollution. Stress levels will be lowered
considerably, if only for the simple reason that the workplace is more conducive to working,
especially during prolonged periods.
And it is not just the lower-level workers that will benefit from this. Top management, or those who have to make the decisions, will also be aided by information that is up to date and accurate.
Out of the many – often confusing – descriptions or definitions of Smart Factory, we can
glean several of the key features or characteristics that define it.
Automation: Automation is said to be the key component of the Smart Factory. Through
automation, particularly connected automation, factory efficiency will be vastly
improved. This is thanks to labor costs and overhead being reduced, as the operations
have become more streamlined. A smart factory will utilize an infrastructure that is better
equipped to handle and manage a larger number of sensors and connected devices across
the production line, using industrial Ethernet protocols. Lately, we see more automation
options being developed, purposely to be integrated into factory machinery and
equipment, allowing them to communicate with other devices.
Industrial internet: Smart Factories are supported by the infrastructure known as the Industrial Internet, which is comprised of hundreds or thousands of sensors and devices managed by one central command or operator. This serves as the link that connects everything together.
Interconnection of systems within a system: In a Smart Factory, we are not talking of
just one system working within the manufacturing process. More often than not, you will
be looking at one system, which happens to be one among the several other systems that
are interconnected or interlinked within a bigger, broader system. Perhaps the most
identifiable systems within a Smart Factory are the cyber-physical systems and, thanks to
adapting smart manufacturing concepts, each cyber-physical system has autonomy to
make decisions on its own. In short, the Factory of the Future is pretty much a system of
systems that involves manufacturing systems, work piece carriers, assembly lines, and
automated workstations, all interconnected via the Internet of Things.
Pushing Research: The adoption of Industry 4.0 technologies will push research in various fields, such as IT security, and will have a particular effect on education. A new industry will require a new set of skills. Consequently, education and training will take a new shape that provides the industry with the required skilled labor.
Security: Perhaps the most challenging aspect of implementing Industry 4.0 techniques is the IT security risk. This online integration opens the door to security breaches and data leaks, and cyber theft must also be taken into consideration. The problem here is not an individual one: it can, and probably will, cost producers money and may even hurt their reputations. Research in security is therefore crucial.
Capital: Such a transformation will require a huge investment in new technology that does not come cheap. The decision to make the transformation will have to be taken at CEO level, and even then the risks must be calculated and taken seriously. In addition, the required capital alienates smaller businesses and might cost them their market share in the future.
Employment: While it is still too early to speculate on employment conditions under the global adoption of Industry 4.0, it is safe to say that workers will need to acquire different or entirely new sets of skills. This may help employment rates go up, but it will also alienate a big sector of workers. Workers whose jobs are largely repetitive will face a challenge in keeping up with the industry. Different forms of education must be introduced, but that still does not solve the problem for older workers. This is an issue that might take longer to solve and will be analyzed further later in this report.
Privacy: This is not only the customer's concern but also the producer's. In such an interconnected industry, producers need to collect and analyze data, which to the customer might look like a threat to privacy. Nor is this exclusive to consumers: small and large companies that have not shared their data in the past will have to work their way toward a more transparent environment. Bridging the gap between consumer and producer will be a huge challenge for both parties.
Industry 4.0 has a lot to promise when it comes to revenues, investment, and
technological advancements, but employment still remains one of the most mysterious
aspects of the new industrial revolution. It’s even harder to quantify or estimate the
potential employment rates.
What kinds of new jobs will it introduce? What does a Smart Factory worker need in order to compete in such an ever-changing environment? Will such changes lay off many workers? All of these are valid questions for the average worker.
Industry 4.0 might be the peak of technological advancement in manufacturing, but it can still sound as if machines are taking over the industry. It is therefore important to analyze this approach further in order to draw conclusions about the demographics of labor in the future, which will help today's workers prepare for a not-so-distant future.
Given the nature of the industry, it will introduce new jobs for big data analysts, robotics experts, and a large number of mechanical engineers. In an attempt to determine the types of jobs that Industry 4.0 will introduce or need more labor in, BCG has published a report, based on interviews with 20 of the industry's experts, showcasing how 10 of the most essential use cases for the foundation of the industry will be affected.
Several important changes will affect the demographics of employment.
FINAL THOUGHTS
While speculation regarding privacy, security, and employment needs more study, the overall picture is promising. Such an approach to the manufacturing industries is truly revolutionary.
UNIT II
ADDITIVE MANUFACTURING
2.1 Introduction
The term additive manufacturing (AM) encompasses many technologies, including subsets like 3D printing, rapid prototyping (RP), direct digital manufacturing (DDM), layered manufacturing, and additive fabrication.
While the layer-upon-layer approach itself is simple, there are many applications of AM technology, with varying degrees of sophistication to meet diverse needs.
FDM
FDM is a process-oriented technology in which thermoplastic materials (polymers that change to a liquid upon the application of heat and solidify when cooled) are injected through indexing nozzles onto a platform. The nozzles trace the cross-section pattern for each particular layer, with the thermoplastic material hardening prior to the application of the next layer. The process repeats until the build or model is completed, and it is fascinating to watch. Specialized material may be needed to add support to some model features. As with SLA, the models can be machined or used as patterns. The process is very easy to use.
3DP
This involves building a model in a container filled with powder of either a starch- or plaster-based material. An inkjet printer head shuttles across the container, applying a small amount of binder to form each layer. Upon application of the binder, a new layer of powder is swept over the prior layer, followed by more binder. The process repeats until the model is complete. As the model is supported by the loose powder, there is no need for support structures. Additionally, this is the only process that builds in colors.
Additive manufacturing processes are classified into seven categories.
These classifications have been developed by the ASTM International Technical Committee F42
on additive manufacturing technologies. The work of this Committee focuses on the promotion
of knowledge, stimulation of research, and implementation of technology through the
development of standards.
1. Vat photopolymerization
2. Material jetting
3. Binder jetting
4. Material extrusion
5. Powder bed fusion
6. Sheet lamination
7. Directed energy deposition
Material Extrusion
Fused deposition modelling (FDM) is one of the most common and widely used additive manufacturing technologies. FDM was trademarked by Stratasys Inc., and hence the separate name fused filament fabrication (FFF) is used to avoid infringement issues.
[Figure: a material extrusion type (FDM) 3D printer]
In this method, material in a filament form is drawn through a nozzle, is heated and then
extruded and deposited onto the build platform in a layer-by-layer process. FFF 3D Printers are
most commonly Cartesian type where the nozzle moves in X & Y-direction whereas the build
platform moves in the Z-direction.
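The layer-by-layer motion can be sketched numerically: the part is sliced into fixed-height layers, and the platform steps through the resulting Z positions while each cross-section is traced in X and Y. The function below is an illustrative sketch, not any printer's actual slicer or firmware; the 0.1 mm default layer height reflects the roughly 100 micron resolution mentioned below.

```python
# Illustrative sketch of slicing a part into layers for layer-by-layer printing.
# The 0.1 mm default corresponds to a ~100 micron layer height (an assumption
# for this example, not a fixed property of all FFF printers).

def layer_heights(part_height_mm, layer_mm=0.1):
    """Return the Z position (mm) of each successive layer for a part of the
    given height, sliced at a fixed layer height."""
    n_layers = int(round(part_height_mm / layer_mm))
    return [round((i + 1) * layer_mm, 4) for i in range(n_layers)]

# A 2 mm tall part sliced at 0.1 mm yields 20 layers, ending at Z = 2.0 mm.
zs = layer_heights(2.0)
```

At each Z position a real printer would trace the cross-section in X and Y before the platform moves down by one layer height.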
Extrusion 3D printers are inexpensive and offer quick prototyping of simple parts. They are usually used for printing household items, characters, toys, games, and similar products. Parts have a rough surface finish, as the maximum resolution is around 100 microns.
Vat Photopolymerization
Vat Photopolymerization uses a vat of liquid photosensitive polymer resin. This resin
hardens on exposure to UV light. This property is used to build objects layer-by-layer.
The resins are polymer compounds with additives for specific applications (tough, flexible, dental, and so on). These processes impart a high-quality surface finish to the object. The most common processes in this category are stereolithography (SLA) and digital light processing (DLP), as well as Carbon's trademarked Digital Light Synthesis (DLS), based on its patented technology called Continuous Liquid Interface Production (CLIP).
Powder Bed Fusion
The powder bed fusion process uses a laser or an electron beam to sinter, melt, and fuse the powder particles together as it traces the cross-section of the object to be created. On completion of the first layer, the powder dispensing unit spreads a new layer of powder onto the build platform, and printing continues for the next layer. This process continues until the complete object is built.
Material Jetting
The material jetting process operates in a similar way to a regular two-dimensional inkjet printer. Material in the form of liquid droplets is dispensed from multiple printheads similar to those in an inkjet printer. The material is a photosensitive polymer which hardens on exposure to UV light, thereby building the part layer by layer.
[Figure: a part with a half glossy, half matte finish]
This additive manufacturing technology is used for building parts with high dimensional accuracy and a smooth surface finish; in fact, parts can be printed in glossy as well as matte finish with equal accuracy. It is a multi-material technology which enables full-color printing. The technology is also called drop-on-demand (DOD), as it uses printheads which dispense liquid material to create wax-like parts; it is mostly used for creating investment casting patterns.
Binder Jetting
Binder jetting is similar to material jetting, but it uses two materials in place of one: a powdered base material and a binder. The binder is dispensed onto the powdered material in the build chamber and acts as the binding agent for the adhesion of individual layers.
The binder is usually liquid and is dispensed from printheads which move in the X and Y directions, depositing it as per the geometry of the object to be built. After each layer of printing, the build chamber drops down, a new layer of powder is spread on top of the previous one, and the printhead again traces the cross-section of the object, binding the previous and current layers together.
This process is relatively fast, but binder jetting parts are not recommended for structural applications. The unused powder acts as a support for the object, so no separate support structure is needed.
Sheet Lamination
The Sheet Lamination process includes two types of manufacturing techniques,
Ultrasonic Additive Manufacturing (UAM) and Laminated Object Manufacturing (LOM).
Laminated object manufacturing (LOM) uses sheets of paper as the base material and adhesive in place of welding. The paper is fed with the help of rollers, and a laser traces the cross-section of the object. A cross-hatch method is used during the printing process so that the completed part is easy to remove. Objects manufactured using LOM are not fit for structural use and can only be used for aesthetic purposes.
Directed Energy Deposition
In DED, a nozzle holds material in wire form, known as the feed, which moves across multiple axes while a laser or electron beam melts the feed as it traces the object geometry. When a laser is used, the DED method is also called laser engineered net shaping, 3D laser cladding, directed light fabrication, or direct metal deposition.
In the DED process the nozzle supplying the material is not restricted to any specific axis but can be moved at various angles thanks to 4- and 5-axis machines. This method is used not only to build new objects but also to add material to existing parts for repairs.
This additive manufacturing technology uses materials like titanium, cobalt chrome, and tantalum (a rare metal). The most common applications of the DED method are in the aerospace and automotive industries.
Important Technologies of AM
In the next few sections, we are going to discuss three AM technologies:
1) Stereolithography apparatus (SLA)
2) Fused deposition modeling (FDM)
3) Selective laser sintering (SLS)
Stereolithography
Stereolithography is one of the most important additive manufacturing technologies currently available. The first commercial RP systems were resin-based systems, commonly called stereolithography or SLA. The resin is a liquid photosensitive polymer that cures or hardens when exposed to ultraviolet radiation.
The technique involves the curing, or solidification, of a liquid photosensitive polymer through the use of an irradiating light source. The source supplies the energy needed to induce a chemical reaction (the curing reaction), bonding a large number of small molecules and forming a highly cross-linked polymer.
Fused deposition modeling
Fused deposition modeling (FDM) was produced and developed by Stratasys, USA.
FDM uses a heating chamber to liquefy polymer that is fed into the system as a filament. The filament is pushed into the chamber by a tractor wheel arrangement, and it is this pushing that generates the extrusion pressure.
The major strength of FDM is in the range of materials and the effective mechanical properties of the resulting parts made using this technology. Parts made using FDM are amongst the strongest for any polymer-based additive manufacturing process.
Materials for FDM
The most popular material is the ABSplus material, which can be used on all current Stratasys FDM machines. Some machines also have an option for ABS blended with polycarbonate.
Note that FDM works best with polymers that are amorphous in nature rather than highly crystalline. This is because the polymers that work best are those that are extruded as a viscous paste rather than in a lower-viscosity form. In amorphous polymers there is no distinct melting point; the material increasingly softens, and its viscosity lowers, with increasing temperature. The viscosity at which these amorphous polymers can be extruded under pressure is high enough that their shape will be largely maintained after extrusion, enabling them to solidify quickly and easily.
Limitations of FDM
Sharp features or corners cannot be produced;
Part strength is weak perpendicular to the build axis;
More area in slices requires longer build times;
Temperature fluctuations during production can lead to delamination.
Advantages of SLS
1) A distinct advantage of the SLS process is that it is fully self-supporting, so no dedicated support structures are needed
2) Parts possess high strength and stiffness
3) Good chemical resistance
4) Various finishing possibilities (e.g., metallization, stove enameling, vibratory grinding, tub coloring, bonding, powder coating, flocking)
5) Complex parts with interior components and channels can be built without trapping the material inside
6) Fastest additive manufacturing process
Disadvantages of SLS
SLS-printed parts have surface porosity. Such porosity can be sealed by applying a sealant such as cyanoacrylate.
Introduction to Reverse Engineering
Reverse engineering is the process of extracting knowledge or design information from anything man-made and reproducing it, or reproducing anything based on the extracted information.
The process often involves disassembling something (a mechanical device, electronic component, computer program, or biological, chemical, or organic matter) and analyzing its components and workings in detail.
Motivation for Reverse Engineering
Interfacing: Reverse engineering can be used when a system is required to interface to another system.
Military or commercial espionage: Learning about an enemy's or competitor's latest research by stealing or capturing a prototype and dismantling it.
Product security analysis: Examining how a product works, determining the specifications of its components, estimating costs, and identifying potential patent infringement.
Academic/learning purposes: Reverse engineering for learning purposes may be used to understand the key issues of an unsuccessful design and subsequently improve the design.
Saving money: Finding out what a piece of electronics is capable of can spare a user from purchasing a separate product.
Generic AM Processes:
Additive Manufacturing Process Chain
A series of steps goes into the process chain required to generate a useful physical part from the concept of that part using additive manufacturing processes.
Depending on the technology and, at times, the machines and components, the process chain is mainly made up of six steps:
• Generation of CAD model of the design;
• Conversion of CAD model into AM machine acceptable format;
• CAD model preparation;
• Machine setup;
• Part removal;
• Post-processing.
These steps can be grouped or broken down and can look different from case to case, but overall the process chain for one technology remains similar to that of a different technology. The process chain is also constantly evolving and can change as existing technologies develop and new technologies surface. In this text, the focus will be on powder bed metal technology. Therefore, the process chain for this technology will be discussed in detail, while others will be mentioned only briefly.
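As a rough, hypothetical illustration (the function names and dictionary fields are invented, not from the source), the six-step chain listed above can be sketched as a simple pipeline:

```python
# Hypothetical sketch of the six-step AM process chain; each step is a
# placeholder that records its effect on a simple state dictionary.

def generate_cad_model(design):
    # Step 1: create the CAD model of the design
    return {"design": design, "format": "CAD"}

def convert_to_am_format(model):
    # Step 2: convert to an AM-machine-acceptable format (commonly STL)
    model["format"] = "STL"
    return model

def prepare_model(model):
    # Step 3: orientation, support generation, slicing
    model["prepared"] = True
    return model

def machine_setup(model):
    # Step 4: load material, set process parameters
    model["machine_ready"] = True
    return model

def remove_part(model):
    # Step 5: take the built part off the build plate
    model["on_plate"] = False
    return model

def post_process(model):
    # Step 6: support removal, surface finishing, heat treatment
    model["finished"] = True
    return model

PROCESS_CHAIN = [generate_cad_model, convert_to_am_format, prepare_model,
                 machine_setup, remove_part, post_process]

def run_chain(design):
    state = design
    for step in PROCESS_CHAIN:
        state = step(state)
    return state
```

Grouping or splitting steps, as the text notes happens from case to case, would simply mean editing the PROCESS_CHAIN list.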
2.6 Post-processing
Depending on the AM technology used to create the part, and on the purpose and requirements of the finished part, the post-fabrication processes can vary over a wide range. A part can require anything from no post-processing to several additional processing steps to change the surface, dimensions, and/or material properties of the built part. Shown in Fig. 2.5 is an example of the unique surface features on powder bed AM parts, where partially melted particles are bound to the surfaces of built parts.
These features, along with the weld lines resulting from the melt pool traversing in different directions, produce a type of surface finish that is distinct from that of any existing manufacturing process. In metal powder bed AM systems, the minimum required processing is removal of the built part from the build plate and removal of the support structures from the built part. Removal of support structures can be as simple as manually breaking the supports from the surface of the part, but it can also be a process that utilizes CNC tools not only to remove the support but also to achieve the desired surface finish and/or dimensional tolerance. In addition, metal powder bed AM systems can introduce large amounts of thermal stress into the built part. Under these conditions, the support structure serves as mechanical “tie-downs” to hold the built part in place and keep it in its intended geometry. If the supports were removed from the part, warpage of the part would occur.
A thermal annealing process can be used to relieve the thermal stresses in the part before it is removed from the build plate, preventing part warpage upon removal from the build plate.
Fig. 2.5 SEM images of SLM parts showing unique surface features unlike those of any other current manufacturing process.
Hot Isostatic Pressing (HIP) is a process in which a component is subjected to elevated temperature and isostatic pressure in a pressure vessel. At a temperature of over 50% of the melting point of the material and pressures above 100 MPa (which can be as high as 300 MPa), the voids and porosity inside a powder bed metal AM part can be greatly reduced. Once processed by HIP, the final bulk density can reach more than 95% of the true density of the material. Under these extreme conditions of pressure and temperature, the material in a component being treated not only undergoes localized plastic deformation; the processes of creep and solid-state diffusion bonding also take place. This enables the required shaping and mass transport around internal defects to occur and to “heal” them, increasing the bulk density of the component.
Powder bed fusion (PBF) methods use either a laser or an electron beam to melt and fuse material powder together. Electron beam melting (EBM) methods require a vacuum but can be used with metals and alloys in the creation of functional parts. All PBF processes involve the spreading of powder material over previous layers. There are different mechanisms to enable this, including a roller or a blade. A hopper or a reservoir below or beside the bed provides a fresh material supply. Direct metal laser sintering (DMLS) is the same as SLS, but with the use of metals rather than plastics. The process sinters the powder, layer by layer. Selective heat sintering differs from other processes by using a heated thermal print head to fuse powder material together. As before, layers are added with a roller between fusions of layers. A platform lowers the model accordingly.
1. A layer of material, typically 0.1 mm thick, is spread over the build platform.
2. A laser fuses the first layer or first cross section of the model.
3. A new layer of powder is spread across the previous layer using a roller.
4. Further layers or cross sections are fused and added.
5. The process repeats until the entire model is created. Loose, unfused powder remains in position but is removed during post-processing.
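The five steps above can be condensed into a small loop; a minimal sketch, assuming the 0.1 mm layer thickness from step 1 (the function and variable names are illustrative):

```python
def build_part(part_height_mm, layer_mm=0.1):
    """Return the z-height reached after each spread-and-fuse cycle."""
    n_layers = round(part_height_mm / layer_mm)
    fused_heights = []
    for i in range(n_layers):
        # steps 1 and 3: a roller or blade spreads a fresh powder layer
        # steps 2 and 4: the laser fuses the cross-section of this layer
        fused_heights.append(round((i + 1) * layer_mm, 6))
        # the platform then lowers by one layer thickness
    # step 5: loose, unfused powder stays in place until post-processing
    return fused_heights
```

A 10 mm part at 0.1 mm per layer therefore takes 100 spread-and-fuse cycles, which is why build time scales directly with part height.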
Process Parameters and Modelling:
Drying Time
After the printing of each layer, the entire print bed needs to be dried. The drying time defines how long the print bed stays under the heater. During the drying period, the print head moves into a cleaning reservoir to dissolve excess binder material and avoid blockages. Based on previous experimental experience, the drying time is found to be critical, because a shorter drying time is very likely to lead to a print head blockage.
The AM processes are hampered mainly by their low productivity, relatively poor surface quality and dimensional stability, as well as uncertainty regarding the mechanical properties of the products. Therefore, those manufacturing attributes should be optimized in order for AM to become established in production. For the optimization of any manufacturing process, a deep knowledge of the process itself is required. This knowledge can be gained either by experimentation or by analyzing the physical mechanisms of the process. A model is the abstract representation of a process that establishes a relation between input and output quantities. The real system is simulated by models that aim to predict its performance. Models found in the literature can be divided into three major categories, namely analytical, numerical, and empirical, depending on the development approach. Analytical models are the output of the process's mathematical analysis, taking into consideration the physical laws and the relevant physical processes. The main advantage of such models is that the derived results can easily be transferred to other pertinent processes. The limits of analytical modelling are determined by the underlying assumptions. Empirical models, on the other hand, are the outcome of a number of experiments whose results are evaluated; one model type is chosen, the coefficients are determined, and then the empirical model can be verified by further tests. The validity of a model's results is limited to the particular conditions of the specific process. Their major advantage, compared with analytical models, is that they require minimum effort. Numerical models are in between: in essence they stem from the physics of the process, but a numerical step-by-step method is employed over time in order for useful results to be produced.
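The empirical-modelling route described above (run experiments, choose a model type, determine its coefficients, verify with further tests) can be illustrated with a minimal least-squares line fit; the data and the layer-thickness/roughness interpretation are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b to experimental data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx                 # chosen model type: straight line
    b = mean_y - a * mean_x       # coefficients determined from the data
    return a, b

# hypothetical measurements: layer thickness (mm) vs. surface roughness (um)
x = [0.05, 0.10, 0.15, 0.20]
y = [6.0, 11.0, 16.0, 21.0]
a, b = fit_line(x, y)
```

A verification step would then compare predictions a*x + b against further experiments; as the text warns, the fitted coefficients are only valid for the conditions under which the data were taken.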
Metal 3D printing is a burgeoning area at the moment. Many companies are investing in
metal 3D printing systems and the market is growing quickly. There are a number of different
metal 3D printing technologies out there. In this article we’ll give you an overview of the major
technologies and vendors in the space. In this installment we’re looking at Powder Bed Fusion.
Material Jetting:
In this introduction to Material Jetting 3D printing, we cover the basic principles of the
technology. After reading this article you will understand the fundamental mechanics of the
Material Jetting process and how these relate to its benefits and limitations.
I. First, the liquid resin is heated to 30–60 °C to achieve optimal viscosity for printing.
II. Then the printhead travels over the build platform and hundreds of tiny droplets of
photopolymer are jetted/deposited to the desired locations.
III. A UV light source that is attached to the printhead cures the deposited material,
solidifying it and creating the first layer of the part.
IV. After the layer is complete, the build platform moves downwards one layer height and the
process repeats until the whole part is complete.
Printer Parameters
In Material Jetting, almost all process parameters are pre-set by the machine
manufacturer. Even the layer height is linked to each specific material, due to the complex
physics of droplet formation. The typical layer height used in Material Jetting is 16 - 32
microns.
Material Jetting is considered one of the most accurate 3D printing technologies. MJ
systems have a dimensional accuracy of ± 0.1% with a typical lower limit of ± 0.1 mm
(sometimes as low as ± 0.02 mm). Warping can occur, but it is not as common as in other
technologies, such as FDM or SLS, because printing happens at near room temperature. For this
reason very big parts can be printed with great accuracy. The typical build size is approximately
380 x 250 x 200 mm, while large industrial systems can be as big as 1000 x 800 x 500 mm.
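The accuracy figures quoted above combine a relative and an absolute term; one reading (a sketch, not a vendor formula) is that the expected tolerance on a dimension is whichever of the two is larger:

```python
def mj_tolerance_mm(dimension_mm, rel=0.001, floor_mm=0.1):
    """±0.1% dimensional accuracy with a typical ±0.1 mm lower limit."""
    return max(rel * dimension_mm, floor_mm)

# a 50 mm feature is governed by the 0.1 mm floor,
# while a 380 mm part is governed by the 0.1% relative term
```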
A key advantage of Material Jetting is the ability to produce accurate multi-material and multi-
color prints that represent end products.
At the build area level, different parts can be printed in different materials or colors
simultaneously, speeding up the manufacturing process.
At the part level, different sections of a part can be designated to be printed in different
material or color (for example creating a stiff case with flexible buttons for prototyping
with haptic feedback).
At the material level, two or more printing resins can be mixed in different ratios before
dispensing, creating a "digital material" with specific physical properties, such as
hardness, stiffness or hue.
To designate a different material or color to particular areas of the part, the model must be
exported as separate STL files. When blending colors or material properties to create a digital
material, the design must be exported as an OBJ or VRML file, because these formats allow the
designation of special properties (such as texture or full color) on a per face or per vertex basis.
Applications of AM:
UNIT III
ROBOTICS IN MANUFACTURING
The word “robot” comes from the Czech word “robota,” meaning “forced labor.” Factories started using these machines in the early 1960s to handle some of the more dangerous or mundane tasks that humans didn't want to do. However, they did more than fill unwanted factory jobs; they completed the work with unprecedented speed and precision. Today, robots perform all kinds of tasks and can be classified according to different criteria such as:
Type of movement
Application
Architecture
Brand
Ability to be collaborative
As labor costs rise and competition for low-wage overseas locations increases, more and more
manufacturers are utilizing robot technologies. In fact, 90 percent of all modern robots can be
found in factories.
There are six major types of industrial robots used for various tasks:
1. Articulated
Articulated robots have rotary joints that allow for a full range of motion. They can perform very
precise movements which makes them useful in manufacturing lines where they need to bend in
different directions. Multiple arms can be utilized for greater control or to execute multiple tasks
at once.
The main advantage of articulated robots is the flexibility and dexterity. They can move and
manipulate a variety of objects while performing small tasks with greater speed and consistency
than human workers.
2. Cartesian
Also referred to as rectilinear or gantry robots, Cartesian robots have three linear joints that move in
different axes (X, Y, and Z). The rigid structure of these robots allows for advanced precision
and repeatability. They are often used in assembly lines for performing simple movements such
as picking up and moving bottles.
Due to the relatively simple design and mechanic structure, Cartesian robots are fairly
inexpensive to make. They can be the cheapest solution for simple pick and place operations or
other tasks that do not require extensive movement
3. Cylindrical
Just as the name suggests, cylindrical robots have a cylindrical work area. They feature a robotic
arm that is connected to a base via single joint, with one more linear joint connecting the arm’s
links. Basically, these machines feature a single robotic arm that moves up, down, and around a
cylindrical pole.
Cylindrical robots are used for assembly operations, handling, and spot welding. Their function
is similar to Cartesian robots, but may be more preferable in some applications due to their
ability to move between required points faster.
4. Spherical
Spherical robots are similar to, but more complex than, Cartesian or cylindrical robots. They
feature a robotic arm connected to a base via a twisting joint, giving the mechanism a spherically
shaped work area. This allows them to perform tasks that require movement in a three
dimensional space.
Spherical robots were some of the first industrial robots to be used in manufacturing for
construction and other dexterous tasks that require advanced control. Nowadays, though, they are
becoming less and less popular as articulated robots are more flexible.
5. SCARA
SCARA is an acronym that stands for Selective Compliance Assembly Robot Arm. These robots
have arms that behave similarly to a human arm in that the joints allow for both vertical and
horizontal movement. However, the “wrist” has limited motion which gives it an advantage for
many types of assembly work such as pick and place, kitting, packaging, and other material
handling applications.
6. Delta
These robots are built from jointed parallelograms connected to a single base, giving them a
spider-like appearance. This type of design is optimal for delicate, precise movements that are
useful in the food, pharmaceutical, and electronic industries.
Delta robots are used extensively for assembly and other applications that require high-speed
repetition. They are able to complete highly repetitive tasks, such as small part assembly, quickly
and perfectly each time. This is beneficial not only from an efficiency standpoint, but in terms of
health and safety as well. Such tasks have been found to cause musculoskeletal disorders in
humans over long periods of time.
Thanks to industrial robots, the manufacturing industry is on the verge of a revolution. As the
technology becomes more intelligent, efficient, and cost-effective, robots are being called on to
handle more complex tasks. But this doesn’t mean jobs are any harder to come by. In part 2,
we’ll discuss how robots have actually created jobs in the manufacturing industry.
Robots are changing the face of manufacturing. They are designed to move materials, as well as
perform a variety of programmed tasks in manufacturing and production settings. They are often
used to perform duties that are dangerous, or unsuitable for human workers, such as repetitious
work that causes boredom and could lead to injuries because of the inattentiveness of the worker.
Industrial robots are able to significantly improve product quality. Applications are performed
with precision and superior repeatability on every job. This level of reliability can be difficult to
accomplish any other way. Robots are regularly being upgraded, but some of the most precise
robots used today have a repeatability of +/-0.02mm. Robots also increase workplace safety.
The disadvantages to integrating robots into a business are the significant upfront cost. Also,
ongoing maintenance requirements can add to the overall cost. Yet, the long term ROI makes
manufacturing robots the perfect investment.
Material handling is the most prevalent application of industrial robots with 38% of the robots
being used for this purpose. Material handling robots can automate some of the most tedious,
mind-numbing, and unsafe tasks in a production line. The term material handling takes in a
variety of product movements on the manufacturing floor, such as part selection, transferring of
the part, packing, palletizing, loading and unloading and machine feeding.
With the introduction of collaborative robots into manufacturing with a low price—around
$20,000—the potential to revolutionize production lines is growing. A lighter, mobile plug and
play generation of cobots is arriving on the production floor to work safely alongside human
workers thanks to advances in sensor and vision technology, and computing power. Should an
employee get in their way, the robot will stop, thereby avoiding an accident.
29% of the robots used in manufacturing are welders. This segment mostly includes spot welding
and arc welding. More small manufacturers are introducing welding robots into their fabrication
line. The cost of welding robots is going down, making it easier to automate a welding process.
The robot may be directed by a predetermined program, be guided by machine vision, or follow
a combination of the two methods. The benefits of robotic welding have demonstrated to make it
a technology that helps many manufacturers increase precision, repeatability, and output.
Welding robots offer efficiency, reach, speed, load capacity, and enhanced performance for
welding parts of all shapes and sizes; and they support a wide range of intelligent functions such
as ready-to-use robotic vision, and collision avoidance.
Assembly operations account for 10% of the robots used in manufacturing, including fixing, press-fitting, inserting, and disassembling. This category of robotic applications has grown with the introduction of technologies such as force-torque sensors and tactile sensors that give the robot a greater sense of touch.
When it comes to putting parts together, assembly robots move faster and with greater precision
than a human, and an off-the-shelf tool can be installed quicker than with special-purpose
equipment. An assembly robot is easily reconfigured and it is a low-risk investment that satisfies
the demands of manufacturing, quality and finance all at the same time.
Assembly robots can be fitted with vision systems and force sensing. The vision system guides
the robot to pick up a component from a conveyor, reducing or eliminating the need for precise
location of the part; and visual servoing allows a robot to rotate or move a piece to make it fit with
another piece. Force sensing helps with part assembly operations like insertion, giving the robot
controller feedback on how well the parts are fitting together or how much force is being applied.
Together, these sensing technologies are making assembly robots even more cost efficient.
Dispensing robots are used for painting, gluing, applying adhesive, and spraying. Only 4% of
the operational robots are doing dispensing. Dispensing robots offer greater control over the
placement of fluids, including arcs, beads, circles and repeated timed dots. The benefits of a
dispensing robot include reduced manufacturing time, consistent accuracy over rough and
uneven surfaces, and improved product quality.
Dispensing robots are available for 1-part and 2-part materials. The XYZ gantry robot system
applies adhesives, sealants and lubricants with precision placement directly onto parts with
repeatable accuracy. They are used for high payload, high-speed applications.
These robots can be used to form in-place gaskets, apply adhesives, and spray coatings.
The primary components of an automated dispensing system are the PC, the robot, and the
dispensing valve components. The robot implements a computer program to dispense fluid from
the valve in a specific pattern onto a workpiece.
The fluid is dispensed through valve system, which may be contact or non-contact. Contact
dispensing requires that the dispensing tip be placed close to the part. On systems that include a
CCD camera, the robot can automatically adjust the dispensing program for each workpiece,
allowing for variations in the workpiece position or orientation. To accomplish this, the software
compares the current workpiece location, to within 0.098 in., with a reference location that is stored
as an image file in the program. If the robot detects a difference in the X and Y positions and/or
the angle of rotation of the workpiece, it adjusts the dispensing path to correct for the difference.
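The correction described above amounts to applying the detected planar offset and rotation to every point of the programmed dispensing path; a minimal sketch (the names and the rotation-about-origin convention are assumptions, not from the source):

```python
import math

def adjust_path(path, dx, dy, angle_deg):
    """Rotate each (x, y) waypoint about the origin, then translate by (dx, dy)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in path]

# e.g. a workpiece found shifted 2 mm in X and rotated 5 degrees
# relative to the stored reference image:
corrected = adjust_path([(0.0, 0.0), (10.0, 0.0)], 2.0, 0.0, 5.0)
```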
Many manufacturers finish their products through grinding, cutting, deburring, sanding,
polishing or routing. Material removal robots can refine product surfaces, using methods ranging from harsh, abrasive smoothing of steel to precise spot removal for small parts like jewelry. Robotic material removal not only perfects a company's product; it also improves cycle times and production rates, which saves money. By automating material removal processes, manufacturers increase
the safety level in their shops by protecting workers from harmful dust and fumes caused by
material removal applications.
Robot-based inspection systems are on the increase, as vision systems become increasingly
powerful and flexible, allowing for flaw detection on parts, guaranteeing correct part assembly.
The vision system finds and inspects a part accurately. Most importantly, integrators have to
make sure of getting very good positional accuracy and communicating that back to the robot
quickly.
Robot inspection systems are now measuring components, but as tolerances get tighter and
tighter, these tolerances become harder to satisfy. The robot moves from verifying a part’s
presence to actually measuring it.
Manufacturing robots are more affordable today than ever before. Standard robot models are
now mass-produced, making them more available to meet the ever-increasing demand. These
robots are more straightforward, and more conducive to plug and play installation. They are
designed to communicate more easily with one another, making for easier production assembly
because the resulting systems are more reliable and flexible. Manufacturing robots can handle
more, as they are constructed to offer complexity and toughness in diverse manufacturing
settings. Robots are the future of manufacturing.
Robotics is a domain in artificial intelligence that deals with the study of
creating intelligent and efficient robots.
Objective
Robots are aimed at manipulating objects: perceiving, picking, moving, and modifying their physical properties, or destroying them, thereby freeing manpower from repetitive functions without getting bored, distracted, or exhausted.
What is Robotics?
Robotics is a branch of AI, which is composed of Electrical Engineering,
Mechanical Engineering, and Computer Science for designing, construction,
and application of robots.
Aspects of Robotics
The robots have mechanical construction, form, or shape designed to
accomplish a particular task.
They have electrical components which power and control the machinery.
They contain some level of computer program that determines what, when
and how a robot does something.
AI Programs vs. Robots
AI programs need general-purpose computers to operate on. Robots need special hardware with sensors and effectors.
Robot Locomotion
Locomotion is the mechanism that makes a robot capable of moving in its environment. There
are various types of locomotions −
Legged
Wheeled
Combination of Legged and Wheeled Locomotion
Tracked slip/skid
Legged Locomotion
This type of locomotion consumes more power while demonstrating gaits such as walking, jumping, trotting, hopping, and climbing up or down.
It comes in varieties of one, two, four, and six legs. If a robot has multiple legs, then leg coordination is necessary for locomotion.
The total number of possible gaits (a periodic sequence of lift and release events for each of the
total legs) a robot can travel depends upon the number of its legs.
In the case of a two-legged robot (k = 2), the number of possible events is N = (2k − 1)! = (2×2 − 1)! = 3! = 6.
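The gait count above is a one-line calculation:

```python
import math

def possible_gait_events(k):
    """Number of possible events N = (2k - 1)! for a robot with k legs."""
    return math.factorial(2 * k - 1)

# k = 2 legs gives 3! = 6, matching the worked example above
```

The factorial growth shows why leg coordination becomes much harder as legs are added: a six-legged robot already has 11! possible events.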
Wheeled Locomotion
It requires fewer motors to accomplish a movement. It is easier to implement, as there are fewer stability issues when there are more wheels. It is power-efficient compared to legged locomotion.
Standard wheel − Rotates around the wheel axle and around the contact point.
Castor wheel − Rotates around the wheel axle and the offset steering joint.
Swedish 45° and Swedish 90° wheels − Omni-wheels that rotate around the contact point, around the wheel axle, and around the rollers.
Slip/Skid Locomotion
In this type, the vehicles use tracks as in a tank. The robot is steered by moving the tracks with
different speeds in the same or opposite direction. It offers stability because of large contact
area of track and ground.
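Steering by driving the tracks at different speeds follows the standard differential-drive relations; a short sketch (the symbols are the conventional textbook ones, not from the source):

```python
def skid_steer(v_left, v_right, track_width):
    """Forward speed and yaw rate of a tracked vehicle from its track speeds."""
    v = (v_left + v_right) / 2.0               # mean speed drives the body forward
    omega = (v_right - v_left) / track_width   # speed difference turns the body
    return v, omega

# equal track speeds give a straight line;
# equal and opposite speeds give a turn in place
```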
Components of a Robot
Robots are constructed with the following −
Power Supply − The robots are powered by batteries, solar power, hydraulic, or
pneumatic power sources.
Pneumatic Air Muscles − They contract almost 40% when air is sucked in them.
Muscle Wires − They contract by 5% when electric current is passed through them.
Sensors − They provide real-time information on the task environment.
Robots are equipped with vision sensors to compute depth in the environment.
A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.
Computer Vision
This is a technology of AI with which the robots can see. The computer vision plays vital role in
the domains of safety, security, health, access, and entertainment.
Computer vision automatically extracts, analyzes, and comprehends useful information from a
single image or an array of images. This process involves development of algorithms to
accomplish automatic visual comprehension.
The hardware of a computer vision system typically includes:
Power supply
Image acquisition device such as a camera
A processor
Software
A display device for monitoring the system
Accessories such as camera stands, cables, and connectors
Tasks of Computer Vision
OCR − Optical Character Recognition: software that converts scanned documents into editable text, typically accompanying a scanner.
Face Detection − Many state-of-the-art cameras come with this feature, which enables them to read a face and take the picture at the perfect expression. It is also used to grant a user access to software on a correct match.
Object Recognition − Object recognition systems are installed in supermarkets, cameras, and high-end cars from makers such as BMW, GM, and Volvo.
Application Domains of Computer Vision
Agriculture
Autonomous vehicles
Biometrics
Character recognition
Forensics, security, and surveillance
Industrial quality inspection
Face recognition
Gesture analysis
Geoscience
Medical imagery
Pollution monitoring
Process control
Remote sensing
Robotics
Transport
Applications of Robotics
Robotics has been instrumental in various domains such as −
Industries − Robots are used for handling material, cutting, welding, color coating,
drilling, polishing, etc.
Military − Autonomous robots can reach inaccessible and hazardous zones during war.
A robot named Daksh, developed by Defense Research and Development Organization
(DRDO), is in function to destroy life-threatening objects safely.
Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.
Exploration − Robot rock climbers used for space exploration and underwater drones used for ocean exploration are a few examples.
Entertainment − Disney’s engineers have created hundreds of robots for movie making.
Manufacturing Applications for Robotics in 2018
1. Warehouse Logistics
Historically, most robotics applications were limited to assembly-line operations. As industrial
robots become more sophisticated and capable of assuming more responsibility, manufacturers
have begun exploring their use in the warehouse.
Automated robots navigate large storerooms and complex floor plans much more quickly, safely
and efficiently than their human counterparts, so warehouse-bound robots have the potential to
cut long-term costs significantly.
2. Aerospace
The aerospace industry is also exploring new and advanced applications in industrial robotics.
Although human researchers and development teams are necessary to conceptualize
breakthroughs, visualize new blueprints, and verify operability, the industry has delegated much
of the grunt work to industrial robots.
Boeing, which began operations in 1916 as the Boeing Airplane Company, is focused on
automation that will "improve employee safety by removing ergonomic risks," according to
spokesperson Nate Hulings. The company was among the first to deploy robots in the aerospace
sector.
3. Automotive Manufacturing
Top automotive manufacturers have used robotics for decades, and the trend is increasing as
robots become more affordable and efficient. North America received more than 20,000 new
units between 2011 and 2013, and a sharp uptick occurred at the beginning of 2014. Capital
investments have also experienced dramatic increases in recent years.
One reason behind the increasing acceptance of robots is their versatility. Factories can easily
modify or upgrade new designs to accommodate any tools or hardware they need for the job,
including air compressors. As pneumatic and air-powered equipment is necessary for many
stages of automotive manufacturing, it makes sense this industry is among the first to take robots
away from the assembly line and into roles of greater scope and accountability.
4. Construction
Robots easily build pallets, cut lumber to size and plane wood according to exact requirements.
Manufactured homes are growing in popularity, many of which next-gen robots assemble — at
least, in part. In some cases, builders even bring robots to the construction site for simple
framing jobs.
Instead of taking away the jobs of current workers, they work alongside them to handle the more
arduous or mundane tasks — and it results in a better product in the end.
Embracing the movement and working in tandem with technology is necessary to uncover the
true potential of next-gen robotics and to determine the exact role humans will play in automated
manufacturing.
UNIT IV
INTERNET OF THINGS
The Internet of things (IoT) is the network of devices, such as vehicles and home appliances,
that contain electronics, software, sensors, actuators, and connectivity, which allows these
things to connect, interact and exchange data.
The IoT involves extending Internet connectivity beyond standard devices, such
as desktops, laptops, smartphones and tablets, to any range of traditionally dumb or non-internet-
enabled physical devices and everyday objects. Embedded with technology, these devices can
communicate and interact over the Internet, and they can be remotely monitored and controlled.
History
The definition of the Internet of things has evolved due to convergence of multiple technologies,
real-time analytics, machine learning, commodity sensors, and embedded systems.[5] Traditional
fields of embedded systems, wireless sensor networks, control
systems, automation (including home and building automation), and others all contribute to
enabling the Internet of things.[6]
The concept of a network of smart devices was discussed as early as 1982, with a modified Coke
vending machine at Carnegie Mellon University becoming the first Internet-connected
appliance,[7] able to report its inventory and whether newly loaded drinks were cold or
not.[8] Mark Weiser's 1991 paper on ubiquitous computing, "The Computer of the 21st Century",
as well as academic venues such as UbiComp and PerCom produced the contemporary vision of
the IoT.[9][10] In 1994, Reza Raji described the concept in IEEE Spectrum as "[moving] small
packets of data to a large set of nodes, so as to integrate and automate everything from home
appliances to entire factories".[11] Between 1993 and 1997, several companies proposed solutions
like Microsoft's at Work or Novell's NEST. The field gained momentum when Bill
Joy envisioned Device to Device (D2D) communication as a part of his "Six Webs" framework,
presented at the World Economic Forum at Davos in 1999. [12]
The term "Internet of things" was likely coined by Kevin Ashton of Procter & Gamble,
later MIT's Auto-ID Center, in 1999,[13] though he prefers the phrase "Internet for things".[14] At
that point, he viewed Radio-frequency identification (RFID) as essential to the Internet of
things,[15] which would allow computers to manage all individual things. [16][17][18]
A research article mentioning the Internet of Things was submitted to the conference for Nordic
Researchers in Norway, in June 2002,[19] which was preceded by an article published in Finnish
in January 2002.[20] The implementation described there was developed by Kary Främling and
his team at Helsinki University of Technology and more closely matches the modern one, i.e. an
information system infrastructure for implementing smart, connected objects. [21]
Defining the Internet of things as "simply the point in time when more 'things or objects' were
connected to the Internet than people", Cisco Systems estimated that the IoT was "born" between
2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010. [22]
Applications
The integration of the Internet with building energy management systems in order to create
energy-efficient and IoT-driven "smart buildings".[59]
The possible means of real-time monitoring for reducing energy consumption[60] and
monitoring occupant behaviors.[59]
The integration of smart devices in the built environment and how they might be used in
future applications.[59]
Industrial applications
Main article: Industrial Internet of Things
Manufacturing
The IoT can realize the seamless integration of various manufacturing devices equipped with
sensing, identification, processing, communication, actuation, and networking capabilities. Based
on such a highly integrated smart cyberphysical space, it opens the door to create whole new
business and market opportunities for manufacturing. [61] Network control and management
of manufacturing equipment, asset and situation management, or manufacturing process
control bring the IoT within the realm of industrial applications and smart manufacturing as
well.[62] The IoT intelligent systems enable rapid manufacturing of new products, dynamic
response to product demands, and real-time optimization of manufacturing production
and supply chain networks, by networking machinery, sensors and control systems together. [45]
Digital control systems to automate process controls, operator tools and service information
systems to optimize plant safety and security are within the purview of the IoT. [63] But it also
extends itself to asset management via predictive maintenance, statistical evaluation, and
measurements to maximize reliability.[64] Smart industrial management systems can also be
integrated with the Smart Grid, thereby enabling real-time energy optimization. Measurements,
automated controls, plant optimization, health and safety management, and other functions are
provided by a large number of networked sensors. [45]
The term industrial Internet of things (IIoT) is often encountered in the manufacturing industries,
referring to the industrial subset of the IoT. IIoT in manufacturing could generate so much
business value that it will eventually lead to the Fourth Industrial Revolution, hence the so-
called Industry 4.0. It is estimated that, in the future, successful companies will be able to
increase their revenue through the Internet of things by creating new business models, improving
productivity, exploiting analytics for innovation, and transforming the workforce.[65] The
implementation of IIoT may generate $12 trillion of global GDP by 2030.[65]
IIoT system architecture, in its simplest view, consists of three tiers: Tier 1: devices, Tier 2: the
edge gateway, and Tier 3: the cloud.[109] Devices include networked things, such as the sensors
and actuators found in IIoT equipment, particularly those that use protocols such as Modbus,
Zigbee, or proprietary protocols to connect to an edge gateway.[109] The edge gateway tier
consists of sensor data aggregation systems (edge gateways) that provide functionality such as
pre-processing of the data, securing connectivity to the cloud using systems such as WebSockets
and the event hub, and, in some cases, edge analytics or fog computing.[109] The final tier
includes the cloud application built for IIoT using the microservices architecture, which is
usually polyglot and inherently secure, using HTTPS/OAuth. It includes various database
systems that store sensor data, such as time series databases or asset stores using backend data
storage systems (e.g. Cassandra, Postgres).[109] The cloud tier in most cloud-based IoT systems
features an event queuing and messaging system that handles the communication that transpires
in all tiers.[110] Some experts classify the three tiers of the IIoT system as edge, platform, and
enterprise, connected by the proximity network, access network, and service network,
respectively.[111]
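The three-tier flow described above can be sketched in miniature. The following is an illustrative simulation written for this unit, not any vendor's API: the class names (`Device`, `EdgeGateway`, `Cloud`) and the aggregation step are assumptions standing in for real protocol-level components such as Modbus devices and WebSocket links.

```python
import statistics

# Hypothetical sketch of the three IIoT tiers: devices (Tier 1) produce
# raw readings, an edge gateway (Tier 2) aggregates and pre-processes
# them, and a "cloud" tier (Tier 3) stores the summaries.

class Device:
    """Tier 1: a networked thing producing raw sensor samples."""
    def __init__(self, device_id, readings):
        self.device_id = device_id
        self.readings = readings

class EdgeGateway:
    """Tier 2: aggregates raw samples before forwarding to the cloud."""
    def preprocess(self, device):
        return {
            "device": device.device_id,
            "mean": statistics.mean(device.readings),
            "max": max(device.readings),
            "samples": len(device.readings),
        }

class Cloud:
    """Tier 3: stores pre-processed summaries keyed by device."""
    def __init__(self):
        self.store = {}
    def ingest(self, summary):
        self.store[summary["device"]] = summary

gateway, cloud = EdgeGateway(), Cloud()
for dev in (Device("pump-1", [20.1, 20.4, 21.0]), Device("fan-7", [5.0, 5.2])):
    cloud.ingest(gateway.preprocess(dev))

print(cloud.store["pump-1"])
```

Note that the gateway forwards only a summary, not the raw samples; this is the pre-processing role the text assigns to Tier 2.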
Building on the Internet of things, the web of things is an architecture for the application layer of
the Internet of things looking at the convergence of data from IoT devices into Web applications
to create innovative use-cases. In order to program and control the flow of information in the
Internet of things, a predicted architectural direction is being called BPM Everywhere which is a
blending of traditional process management with process mining and special capabilities to
automate the control of large numbers of coordinated devices. [citation needed]
Network architecture
The Internet of things requires huge scalability in the network space to handle the surge of
devices.[112] IETF 6LoWPAN can be used to connect devices to IP networks. With billions of
devices[113] being added to the Internet space, IPv6 will play a major role in handling the
network-layer scalability. IETF's Constrained Application Protocol, ZeroMQ, and MQTT can
provide lightweight data transport.
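As an illustration of why transports such as MQTT suit constrained devices, the sketch below implements MQTT-style topic filtering with `+` (single-level) and `#` (multi-level) wildcards. It is a simplified matcher written for this note, not a real MQTT client, and it glosses over edge cases of the full specification.

```python
# Toy MQTT-style topic matcher: '+' matches exactly one topic level,
# '#' matches the remainder of the topic.

def topic_matches(pattern, topic):
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":                 # '#' matches the rest of the topic
            return True
        if i >= len(t_levels):       # pattern is longer than the topic
            return False
        if p != "+" and p != t_levels[i]:  # '+' matches any single level
            return False
    return len(p_levels) == len(t_levels)

print(topic_matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(topic_matches("factory/#", "factory/line2/pressure/raw"))             # True
print(topic_matches("factory/+/temperature", "factory/line1/humidity"))     # False
```

Topic filtering of this kind lets a broker route messages from thousands of sensors to subscribers without any per-device configuration.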
Fog computing is a viable alternative to prevent such large bursts of data from flowing through
the Internet.[114] The computation power of edge devices can be used to analyse and process
data, thus providing easy real-time scalability. [citation needed]
Complexity
In semi-open or closed loops (i.e. value chains, whenever a global finality can be settled) the IoT
will often be considered and studied as a complex system[115] due to the huge number of different
links, interactions between autonomous actors, and its capacity to integrate new actors. At the
overall stage (full open loop) it will likely be seen as a chaotic environment
(since systems always have finality). As a practical approach, not all elements in the Internet of
things run in a global, public space. Subsystems are often implemented to mitigate the risks of
privacy, control and reliability. For example, domestic robotics (domotics) running inside a
smart home might only share data within and be available via a local network.[116] Managing and
controlling a highly dynamic ad hoc network of IoT things and devices is a tough task with
traditional network architectures; Software-Defined Networking (SDN) provides an agile,
dynamic solution that can cope with the special requirements of the diversity of innovative IoT
applications.[117]
Size considerations
The Internet of things would encode 50 to 100 trillion objects and be able to follow the
movement of those objects. Human beings in surveyed urban environments are each surrounded
by 1,000 to 5,000 trackable objects.[118] In 2015 there were already 83 million smart devices in
people's homes, a number projected to grow to 193 million devices by 2020 and to continue
growing in the near future.[29]
The figure of online capable devices grew 31% from 2016 to 8.4 billion in 2017. [101]
Space considerations
In the Internet of things, the precise geographic location of a thing, and also the precise
geographic dimensions of a thing, will be critical.[119] To date, facts about a thing, such as its
location in time and space, have been less critical to track because a person processing the
information could decide whether that information was important to the action being taken
and, if so, add the missing information (or decide not to take the action). (Note that some things
in the Internet of things will be sensors, and sensor location is usually important.[120])
The GeoWeb and Digital Earth are promising applications that become possible when things can
become organized and connected by location. However, the challenges that remain include the
constraints of variable spatial scales, the need to handle massive amounts of data, and an
indexing for fast search and neighbor operations. In the Internet of things, if things are able to
take actions on their own initiative, this human-centric mediation role is eliminated. Thus, the
time-space context that we as humans take for granted must be given a central role in this
information ecosystem. Just as standards play a key role in the Internet and the Web, geospatial
standards will play a key role in the Internet of things. [121][122]
A solution to "basket of remotes"[edit]
Many IoT devices have a potential to take a piece of this market. Jean-Louis Gassée (Apple
initial alumni team, and BeOS co-founder) has addressed this topic in an article on Monday
Note,[123] where he predicts that the most likely problem will be what he calls the "basket of
remotes" problem, where we'll have hundreds of applications to interface with hundreds of
devices that don't share protocols for speaking with one another.[123] For improved user
interaction, some technology leaders are joining forces to create standards for communication
between devices to solve this problem. Others are turning to the concept of predictive interaction
of devices, "where collected data is used to predict and trigger actions on the specific devices"
while making them work together.[124]
Ethernet – General purpose networking standard using twisted pair and fiber optic links in
conjunction with hubs or switches.
Power-line communication (PLC) – Communication technology using electrical wiring to
carry power and data. Specifications such as HomePlug or G.hn utilize PLC for networking
IoT devices.
Standards and standards organizations
This is a list of technical standards for the IoT, most of which are open standards, and
the standards organizations that aspire to set them.[135][136]
IEEE (Institute of Electrical and Electronics Engineers) – Underlying communication
technology standards such as IEEE 802.15.4.
IETF (Internet Engineering Task Force) – Standards that comprise TCP/IP (the Internet
protocol suite).
OCF (Open Connectivity Foundation) – Standards for simple devices using CoAP
(Constrained Application Protocol); OCF supersedes the OIC (Open Interconnect
Consortium).
XSF (XMPP Standards Foundation) – Protocol extensions of XMPP (Extensible Messaging
and Presence Protocol), the open standard of instant messaging.
Data security – When designing IoT devices, companies should ensure that data collection,
storage and processing are secure at all times. Companies should adopt a "defence in
depth" approach and encrypt data at each stage.[145]
Data consent – users should have a choice as to what data they share with IoT companies and
the users must be informed if their data gets exposed.
Data minimization – IoT companies should collect only the data they need and retain the
collected information only for a limited time.
However, the FTC stopped at just making recommendations for now. According to an FTC
analysis, the existing framework, consisting of the FTC Act, the Fair Credit Reporting Act, and
the Children's Online Privacy Protection Act, along with developing consumer education and
business guidance, participation in multi-stakeholder efforts and advocacy to other agencies at
the federal, state and local level, is sufficient to protect consumer rights. [146]
A resolution passed by the Senate in March 2015 is already being considered by
Congress.[147] This resolution recognized the need for formulating a National Policy on IoT and
the matter of privacy, security and spectrum. Furthermore, to provide an impetus to the IoT
ecosystem, in March 2016, a bipartisan group of four Senators proposed a bill, The Developing
Innovation and Growing the Internet of Things (DIGIT) Act, to direct the Federal
Communications Commission to assess the need for more spectrum to connect IoT devices.
Several standards for the IoT industry are actually being established relating to automobiles
because most concerns arising from use of connected cars apply to healthcare devices as well. In
fact, the National Highway Traffic Safety Administration (NHTSA) is preparing cybersecurity
guidelines and a database of best practices to make automotive computer systems more
secure.[148]
A recent report from the World Bank examines the challenges and opportunities in government
adoption of IoT.[149] These include –
Still early days for the IoT in government
Underdeveloped policy and regulatory frameworks
Unclear business models, despite strong value proposition
Clear institutional and capacity gap in government AND the private sector
Inconsistent data valuation and management
Infrastructure a major barrier
Government as an enabler
Most successful pilots share common characteristics (public-private partnership, local,
leadership)
Figure: GE Digital CEO William Ruh speaking about GE's attempts to gain a foothold in the
market for IoT services at the first IEEE Computer Society TechIgnite conference.
Lack of interoperability and unclear value propositions
Despite a shared belief in the potential of the IoT, industry leaders and consumers are facing
barriers to adopting IoT technology more widely. Mike Farley argued in Forbes that while IoT
solutions appeal to early adopters, they either lack interoperability or a clear use case for end-
users.[223] A study by Ericsson regarding the adoption of IoT among Danish companies suggests
that many struggle "to pinpoint exactly where the value of IoT lies for them". [224]
Privacy and security concerns
According to a recent study by Noura Aleisa and Karen Renaud at the University of Glasgow,
"the Internet of things' potential for major privacy invasion is a concern" [225] with much of
research "disproportionally focused on the security concerns of IoT."[225] Among the "proposed
solutions in terms of the techniques they deployed and the extent to which they satisfied core
privacy principles",[225] only very few turned out to be fully satisfactory. Louis Basenese,
investment director at Wall Street Daily, has criticized the industry's lack of attention to security
issues:
"Despite high-profile and alarming hacks, device manufacturers remain undeterred, focusing on
profitability over security. Consumers need to have ultimate control over collected data,
including the option to delete it if they choose...Without privacy assurances, wide-scale
consumer adoption simply won't happen."[226]
In a post-Snowden world of global surveillance disclosures, consumers take a more active
interest in protecting their privacy and demand IoT devices to be screened for potential security
vulnerabilities and privacy violations before purchasing them. According to the
2016 Accenture Digital Consumer Survey, in which 28000 consumers in 28 countries were
polled on their use of consumer technology, security "has moved from being a nagging problem
to a top barrier as consumers are now choosing to abandon IoT devices and services over
security concerns."[227] The survey revealed that "out of the consumers aware of hacker attacks
and owning or planning to own IoT devices in the next five years, 18 percent decided to
terminate the use of the services and related services until they get safety guarantees." [227] This
suggests that consumers increasingly perceive privacy risks and security concerns to outweigh
the value propositions of IoT devices and opt to postpone planned purchases or service
subscriptions.[227]
Traditional governance structures
Figure: Town of Internet of Things in Hangzhou, China.
A study issued by Ericsson regarding the adoption of Internet of things among Danish companies
identified a "clash between IoT and companies' traditional governance structures, as IoT still
presents both uncertainties and a lack of historical precedence."[224] Among the respondents
interviewed, 60 percent stated that they "do not believe they have the organizational capabilities,
and three of four do not believe they have the processes needed, to capture the IoT
opportunity."[224] This has led to a need to understand organizational culture in order to
facilitate organizational design processes and to test new innovation management practices. A
lack of digital leadership in the age of digital transformation has also stifled innovation and IoT
adoption to a degree that many companies, in the face of uncertainty, "were waiting for the
market dynamics to play out",[224] or further action in regards to IoT "was pending competitor
moves, customer pull, or regulatory requirements."[224] Some of these companies risk being
'kodaked' – "Kodak was a market leader until digital disruption eclipsed film photography with
digital photos"[228] – failing to "see the disruptive forces affecting their industry" [229] and "to truly
embrace the new business models the disruptive change opens up."[229] Scott Anthony has
written in Harvard Business Review that Kodak "created a digital camera, invested in the
technology, and even understood that photos would be shared online" [229] but ultimately failed to
realize that "online photo sharing was the new business, not just a way to expand the printing
business."[229]
Business planning and models
According to a 2018 study, 70–75% of IoT deployments were stuck in the pilot or prototype stage,
unable to reach scale due in part to a lack of business planning. [230][page needed]
Studies on IoT literature and projects show a disproportionate prominence of technology in the
IoT projects, which are often driven by technological interventions rather than business model
innovation.[231][232][improper synthesis?]
The Internet of Things (IoT) is defined as a paradigm in which objects equipped with
sensors, actuators, and processors communicate with each other to serve a meaningful
purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in
this new emerging area. This survey paper proposes a novel taxonomy for IoT
technologies, highlights some of the most important technologies, and profiles some
applications that have the potential to make a striking difference in human life, especially
for the differently abled and the elderly. As compared to similar survey papers in the
area, this paper is far more comprehensive in its coverage and exhaustively covers most
major technologies spanning from sensors to applications.
1. Introduction
Today the Internet has become ubiquitous, has touched almost every corner of the globe,
and is affecting human life in unimaginable ways. However, the journey is far from over.
We are now entering an era of even more pervasive connectivity where a very wide
variety of appliances will be connected to the web. We are entering an era of the “Internet
of Things” (abbreviated as IoT). This term has been defined by different authors in many
different ways. Let us look at two of the most popular definitions. Vermesan et al. [1]
define the Internet of Things as simply an interaction between the physical and digital
worlds. The digital world interacts with the physical world using a plethora of sensors
and actuators. Another definition by Peña-López et al. [2] defines the Internet of Things
as a paradigm in which computing and networking capabilities are embedded in any kind
of conceivable object. We use these capabilities to query the state of the object and to
change its state if possible. In common parlance, the Internet of Things refers to a new
kind of world where almost all the devices and appliances that we use are connected to a
network. We can use them collaboratively to achieve complex tasks that require a high
degree of intelligence.
For this intelligence and interconnection, IoT devices are equipped with embedded
sensors, actuators, processors, and transceivers. IoT is not a single technology; rather it is
an agglomeration of various technologies that work together in tandem.
Sensors and actuators are devices that help in interacting with the physical
environment. The data collected by the sensors has to be stored and processed
intelligently in order to derive useful inferences from it. Note that we broadly define the
term sensor; a mobile phone or even a microwave oven can count as a sensor as long as it
provides inputs about its current state (internal state + environment). An actuator is a
device that is used to effect a change in the environment such as the temperature
controller of an air conditioner.
The storage and processing of data can be done on the edge of the network itself or in a
remote server. If any preprocessing of data is possible, then it is typically done at either
the sensor or some other proximate device. The processed data is then typically sent to a
remote server. The storage and processing capabilities of an IoT object are also restricted
by the resources available, which are often very constrained due to limitations of size,
energy, power, and computational capability. As a result the main research challenge is to
ensure that we get the right kind of data at the desired level of accuracy. Along with the
challenges of data collection, and handling, there are challenges in communication as
well. The communication between IoT devices is mainly wireless because they are
generally installed at geographically dispersed locations. The wireless channels often
have high rates of distortion and are unreliable. In this scenario reliably communicating
data without too many retransmissions is an important problem and thus communication
technologies are integral to the study of IoT devices.
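The retransmission problem mentioned above can be made concrete with a toy stop-and-wait scheme over a simulated lossy link. The loss model, seed, and function names here are invented for illustration; real IoT stacks handle this at the protocol level (for example, CoAP's confirmable messages).

```python
import random

# Toy stop-and-wait retransmission over a simulated lossy channel,
# illustrating why unreliable wireless links force IoT protocols to
# budget for retransmissions.

def delivered(loss_rate, rng):
    """Simulate one transmission; True if the packet and its ack survive."""
    return rng.random() >= loss_rate

def send_reliably(packet, loss_rate, max_retries=5, rng=None):
    """Retransmit until acknowledged; return the number of attempts used."""
    rng = rng or random.Random(42)  # fixed seed keeps the sketch repeatable
    for attempt in range(1, max_retries + 1):
        if delivered(loss_rate, rng):
            return attempt
        # no ack received: fall through and retransmit
    raise TimeoutError("delivery failed after %d attempts" % max_retries)

attempts = send_reliably({"temp": 21.5}, loss_rate=0.4)
print(attempts)
```

The trade-off the text points to is visible here: a higher loss rate raises the expected number of transmissions, and with it the energy cost per delivered reading.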
Now, after processing the received data, some action needs to be taken on the basis of the
derived inferences. The nature of actions can be diverse. We can directly modify the
physical world through actuators. Or we may do something virtually. For example, we
can send some information to other smart things.
The process of effecting a change in the physical world is often dependent on its state at
that point of time. This is called context awareness. Each action is taken keeping in
consideration the context because an application can behave differently in different
contexts. For example, a person may not like messages from his office to interrupt him
when he is on vacation.
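A minimal sketch of this idea, assuming a hypothetical `notify` decision function and context flags that mirror the vacation example above:

```python
# Context awareness in miniature: the same event (a new office message)
# triggers different actions depending on the user's current context.

def notify(event, context):
    """Decide how to deliver an event given the current context."""
    if context.get("on_vacation"):
        return "suppress"      # do not interrupt a vacationing user
    if context.get("in_meeting"):
        return "silent_badge"  # defer non-urgent interruptions
    return "alert"             # default: interrupt immediately

print(notify("office_message", {"on_vacation": True}))  # suppress
print(notify("office_message", {"in_meeting": True}))   # silent_badge
print(notify("office_message", {}))                     # alert
```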
Sensors, actuators, compute servers, and the communication network form the core
infrastructure of an IoT framework. However, there are many software aspects that need
to be considered. First, we need a middleware that can be used to connect and manage all
of these heterogeneous components. We need a lot of standardization to connect many
different devices. We shall discuss methods to exchange information and prevailing
standards in Section 7.
The Internet of Things finds various applications in health care, fitness, education,
entertainment, social life, energy conservation, environment monitoring, home
automation, and transport systems. We shall focus on these application areas in Section 9.
We shall find that, in all these application areas, IoT technologies have significantly been
able to reduce human effort and improve the quality of life.
2. Architecture of IoT
There is no single, universally agreed-upon architecture for the IoT.
Different architectures have been proposed by different researchers.
2.1. Three- and Five-Layer Architectures
The three-layer architecture defines the main idea of the Internet of Things, but it is not
sufficient for research on IoT because research often focuses on finer aspects of the
Internet of Things. That is why, we have many more layered architectures proposed in the
literature. One is the five-layer architecture, which additionally includes the processing
and business layers [3–6]. The five layers are perception, transport, processing,
application, and business layers (see Figure 1). The role of the perception and application
layers is the same as in the three-layer architecture. We outline the function of the
remaining three layers.
(i) The transport layer transfers the sensor data from the perception layer to the processing
layer and vice versa through networks such as wireless, 3G, LAN, Bluetooth, RFID, and NFC.
(ii) The processing layer is also known as the middleware layer. It stores, analyzes, and
processes huge amounts of data that come from the transport layer. It can manage and provide
a diverse set of services to the lower layers. It employs many technologies such as databases,
cloud computing, and big data processing modules.
(iii) The business layer manages the whole IoT system, including applications, business and
profit models, and users' privacy. The business layer is out of the scope of this paper. Hence,
we do not discuss it further.
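The layered flow above can be sketched as a chain of functions, one per layer. The readings and function bodies are assumptions made for this example, and the business layer is omitted, as in the text.

```python
# Illustrative five-layer flow: perception -> transport -> processing
# -> application. Each function stands in for one layer.

def perception_layer():
    # sensors produce raw readings
    return [{"sensor": "t1", "value": 22.5}, {"sensor": "t1", "value": 23.1}]

def transport_layer(readings):
    # carries data upward unchanged (e.g. over Bluetooth, 3G, or LAN)
    return list(readings)

def processing_layer(readings):
    # middleware: store and analyze; here, a simple average per sensor
    values = [r["value"] for r in readings]
    return {"sensor": "t1", "avg": sum(values) / len(values)}

def application_layer(summary):
    # delivers a service built on the processed data
    return "sensor %s average: %.2f" % (summary["sensor"], summary["avg"])

result = application_layer(processing_layer(transport_layer(perception_layer())))
print(result)  # sensor t1 average: 22.80
```

The point of the layering is the same as in the text: each layer only consumes the output of the layer below it, so any one layer can be swapped (say, a different transport) without touching the others.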
Another architecture proposed by Ning and Wang [7] is inspired by the layers of
processing in the human brain. It is inspired by the intelligence and ability of human
beings to think, feel, remember, make decisions, and react to the physical environment. It
is constituted of three parts. First is the human brain, which is analogous to the
processing and data management unit or the data center. Second is the spinal cord, which
is analogous to the distributed network of data processing nodes and smart gateways.
Third is the network of nerves, which corresponds to the networking components and
sensors.
2.2. Cloud and Fog Based Architectures
Let us now discuss two kinds of systems architectures: cloud and fog computing (see the
reference architectures in [8]). Note that this classification is different from the
classification in Section 2.1, which was done on the basis of protocols.
In particular, we have been slightly vague about the nature of data generated by IoT
devices, and the nature of data processing. In some system architectures the data
processing is done in a large centralized fashion by cloud computers. Such a cloud centric
architecture keeps the cloud at the center, applications above it, and the network of smart
things below it [9]. Cloud computing is given primacy because it provides great
flexibility and scalability. It offers services such as the core infrastructure, platform,
software, and storage. Developers can provide their storage tools, software tools, data
mining, and machine learning tools, and visualization tools through the cloud.
Lately, there is a move towards another system architecture, namely, fog computing [10–
12], where the sensors and network gateways do a part of the data processing and
analytics. A fog architecture [13] presents a layered approach as shown in Figure 2,
which inserts monitoring, preprocessing, storage, and security layers between the
physical and transport layers. The monitoring layer monitors power, resources, responses,
and services. The preprocessing layer performs filtering, processing, and analytics of
sensor data. The temporary storage layer provides storage functionalities such as data
replication, distribution, and storage. Finally, the security layer performs
encryption/decryption and ensures data integrity and privacy. Monitoring and
preprocessing are done on the edge of the network before sending data to the cloud.
Figure 2: Fog architecture of a smart IoT gateway.
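A minimal sketch of the four inserted layers (monitoring, preprocessing, temporary storage, security), applied in order to a single sensor sample before it would leave the gateway. The stage functions are hypothetical stand-ins, and the "security" step only computes an integrity digest rather than real encryption.

```python
import hashlib
import json

# Fog-gateway layering in miniature: each function stands in for one of
# the layers inserted between the physical and transport layers.

def monitor(sample, log):
    log.append(("seen", sample["sensor"]))  # track resources/responses
    return sample

def preprocess(sample):
    # filtering/cleaning of raw sensor data
    return {**sample, "value": round(sample["value"], 1)}

def store_temporarily(sample, buffer):
    buffer.append(sample)  # buffering/replication before upload
    return sample

def secure(sample):
    payload = json.dumps(sample, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return {"payload": payload, "sha256": digest}  # integrity stub

log, buffer = [], []
packet = secure(store_temporarily(preprocess(monitor(
    {"sensor": "flow-3", "value": 7.1234}, log)), buffer))
print(packet["sha256"][:8])
```

Only `packet` would be sent onward; everything else stays at the edge, which is precisely the data-reduction role the fog layers play.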
Often the terms "fog computing" and "edge computing" are used interchangeably. The
latter term predates the former and is construed to be more generic. Fog computing, a term
originally coined by Cisco, refers to smart gateways and smart sensors, whereas edge
computing is slightly more penetrative in nature. This paradigm envisions
adding smart data preprocessing capabilities to physical devices such as motors, pumps,
or lights. The aim is to do as much of preprocessing of data as possible in these devices,
which are termed to be at the edge of the network. In terms of the system architecture, the
architectural diagram is not appreciably different from Figure 2. As a result, we do not
describe edge computing separately.
Finally, the distinction between protocol architectures and system architectures is not
very crisp. Often the protocols and the system are codesigned. We shall use the generic 5-
layer IoT protocol stack (architectural diagram presented in Figure 2) for both the fog and
cloud architectures.
2.3. Social IoT
Let us now discuss a new paradigm: social IoT (SIoT). Here, we consider social
relationships between objects the same way as humans form social relationships (see
[14]). Here are the three main facets of an SIoT system:
(i) The SIoT is navigable. We can start with one device and navigate through all the devices
that are connected to it. It is easy to discover new devices and services using such a social
network of IoT devices.
(ii) A need for trustworthiness (strength of the relationship) is present between devices
(similar to friends on Facebook).
(iii) We can use models similar to those for studying human social networks to also study the
social networks of IoT devices.
2.3.1. Basic Components
In a typical social IoT setting, we treat the devices and services as bots that can set
up relationships with one another and modify them over time. This allows the devices to
cooperate seamlessly with each other and achieve complex tasks.
To make such a model work, we need many interoperating components. Let us
look at some of the major components of such a system:
(1) ID: we need a unique method of object identification. An ID can be assigned to an object based on traditional parameters such as the MAC address, IPv6 address, a universal product code, or some other custom method.
(2) Metainformation: along with an ID, we need some metainformation about the device that describes its form and operation. This is required to establish appropriate relationships with the device and to place it appropriately in the universe of IoT devices.
(3) Security controls: these are similar to the “friend list” settings on Facebook. The owner of a device might place restrictions on the kinds of devices that can connect to it. These are typically referred to as owner controls.
(4) Service discovery: such a system is like a service cloud, where we need dedicated directories that store the details of devices providing certain kinds of services. It is very important to keep these directories up to date so that devices can learn about one another.
(5) Relationship management: this module manages relationships with other devices. It also stores the types of devices that a given device should try to connect with, based on the types of services provided. For example, it makes sense for a light controller to form a relationship with a light sensor.
(6) Service composition: this module takes the social IoT model to a new level. The ultimate goal of such a system is to provide better integrated services to users. For example, if a person has a power sensor attached to her air conditioner and this device establishes a relationship with an analytics engine, the ensemble can yield a lot of data about the air conditioner's usage patterns. If the social model is more expansive and there are many more devices, it is possible to compare this data with the usage patterns of other users and derive even more meaningful insights. For example, users can be told that they are the largest energy consumers in their community or among their Facebook friends.
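The interplay of components (2), (3), and (5) can be sketched as follows; the class, field names, and device types are illustrative assumptions, not a standard SIoT API.

```python
# Sketch of metainformation, owner controls, and relationship management.
class SocialDevice:
    def __init__(self, device_id, device_type, allowed_types):
        self.device_id = device_id          # unique ID, e.g. a MAC address
        self.device_type = device_type      # metainformation
        self.allowed_types = allowed_types  # owner controls: who may connect
        self.friends = []                   # managed relationships

    def request_friendship(self, other):
        """Accept the relationship only if owner controls permit it."""
        if other.device_type in self.allowed_types:
            self.friends.append(other)
            return True
        return False

controller = SocialDevice("ctrl-01", "light_controller", {"light_sensor"})
sensor = SocialDevice("sens-07", "light_sensor", {"light_controller"})
camera = SocialDevice("cam-02", "camera", set())

assert controller.request_friendship(sensor)      # sensible pairing accepted
assert not controller.request_friendship(camera)  # blocked by owner controls
```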
2.3.2. Representative Architecture
Most architectures proposed for the SIoT also include a server side. The
server connects to all the interconnected components, aggregates (composes) the services,
and acts as a single point of service for users.
The server side architecture typically has three layers. The first is the base layer, which
contains a database storing the details of all devices, their attributes, metainformation,
and their relationships. The second layer (the component layer) contains code to interact
with the devices, query their status, and use a subset of them to deliver a service. The
topmost layer is the application layer, which provides services to the users.
On the device (object) side, we broadly have two layers. The first is the object layer,
which allows a device to connect to other devices, talk to them (via standardized
protocols), and exchange information. The object layer passes information to
the social layer. The social layer manages the execution of users’ applications, executes
queries, and interacts with the application layer on the server.
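The three server-side layers described above can be sketched minimally; the device records, field names, and the energy-report service are invented for illustration.

```python
# Base layer: database of devices, their attributes, and metainformation.
base_layer = {
    "ac-power-meter": {"type": "power_sensor", "watts": 1450},
    "analytics": {"type": "analytics_engine"},
}

# Component layer: code that interacts with devices and queries their status.
def query_status(device_id):
    return base_layer[device_id]

# Application layer: composes components into a user-facing service.
def energy_report(device_id):
    reading = query_status(device_id)["watts"]
    return f"{device_id} is drawing {reading} W"

print(energy_report("ac-power-meter"))
```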
3. Taxonomy
Let us now propose a taxonomy for research in IoT technologies (see Figure 3). Our
taxonomy is based on the architectural elements of IoT presented in Section 2.
Figure 3: Taxonomy of research in IoT technologies.
The first architectural component of IoT is the perception layer. It collects data using
sensors, which are the most important drivers of the Internet of Things [15]. There are
various types of sensors used in diverse IoT applications. The most generic sensor
available today is the smartphone. The smartphone itself has many types of sensors
embedded in it [16] such as the location sensor (GPS), movement sensors (accelerometer,
gyroscope), camera, light sensor, microphone, proximity sensor, and magnetometer.
These are being heavily used in different IoT applications. Many other types of sensors
are beginning to be used such as sensors for measuring temperature, pressure, humidity,
medical parameters of the body, chemical and biochemical substances, and neural
signals. A class of sensors that stands out is infrared (IR) sensors, which predate smartphones.
They are now widely used in many IoT applications: IR cameras, motion detectors,
measuring the distance to nearby objects, detecting the presence of smoke and gases, and
sensing moisture. We shall discuss the different types of sensors used in IoT applications in
Section 5.
Subsequently, we shall discuss related work in data preprocessing. Such applications
(also known as fog computing applications) mainly filter and summarize data before
sending it on the network. Such units typically have a small amount of temporary storage,
a small processing unit, and some security features.
The next architectural component that we shall discuss is communication. We shall
discuss related work (in Section 7) on different communication technologies used for the
Internet of Things. Different entities communicate over the network [17–19] using a
diverse set of protocols and standards. The most common communication technologies
for short range low power communication protocols are RFID (Radio Frequency
Identification) and NFC (Near Field Communication). For the medium range, they are
Bluetooth, Zigbee, and WiFi. Communication in the IoT world requires special
networking protocols and mechanisms. Therefore, new mechanisms and protocols have
been proposed and implemented for each layer of the networking stack, according to the
requirements imposed by IoT devices.
We shall subsequently look at two kinds of software components: middleware and
applications. The middleware creates an abstraction for the programmer such that the
details of the hardware can be hidden. This enhances interoperability of smart things and
makes it easy to offer different kinds of services [20]. There are many commercial and
open source offerings for providing middleware services to IoT devices. Some examples
are OpenIoT [21], MiddleWhere [22], Hydra [23], FiWare [24], and Oracle Fusion
Middleware. Finally, we discuss the applications of IoT in Section 9. We primarily focus
on home automation, ambient assisted living, health and fitness, smart vehicular systems,
smart cities, smart environments, smart grids, social life, and entertainment.
Let us first consider our novel contributions. Our paper looks at each and every layer in
the IoT stack, and as a result the presentation is also far more balanced. A novel addition
in our survey is that we have discussed different IoT architectures. This has not been
discussed in prior surveys on the Internet of Things. The architecture section also
considers newer paradigms such as fog computing, which has hitherto not been
considered. Moreover, our survey categorizes technologies based on the
architectural layer that they belong to. We have also thoroughly categorized the network
layer and tried to consolidate almost all the technologies used in IoT systems.
To the best of our knowledge, such a thorough categorization and presentation of
technologies is novel.
Along with these novel contributions our survey is far more comprehensive, detailed, and
exhaustive as compared to other surveys in the area. Most of the other surveys look at
only one or two types of sensors, whereas we describe 9 types of sensors with many
examples. Other surveys are also fairly restricted when they discuss communication
technologies and applications. We have discussed many types of middleware
technologies as well. Prior works have not given middleware technologies this level of
attention. We cover 10 communication technologies in detail and consider a large variety
of applications encompassing smart homes, health care, logistics, transport, agriculture,
environment, smart cities, and green energy. No other survey in this area profiles so
many technologies, applications, and use cases.
First of all, let us look at the mobile phone, which is ubiquitous and has many types of
sensors embedded in it. Specifically, the smartphone is a very handy and user friendly
device with a host of built in communication and data processing features. With the
increasing popularity of smartphones, researchers are showing interest in
building smart IoT solutions using smartphones because of their embedded sensors
[16, 26]. Additional sensors can also be used depending upon the requirements.
Applications that use sensor data to produce meaningful results can be built on the
smartphone. Some of the sensors inside a modern smartphone are as follows:
(1) The accelerometer senses the motion and acceleration of a mobile phone. It typically measures changes in the velocity of the smartphone in three dimensions. There are many types of accelerometers [27]. In a mechanical accelerometer, a seismic mass in a housing is tied to the housing with a spring. The mass takes time to move and is left behind as the housing moves, so the force in the spring can be correlated with the acceleration. In a capacitive accelerometer, capacitive plates are used with the same setup. With a change in velocity, the mass pushes the capacitive plates together, thus changing the capacitance. The rate of change of capacitance is then converted into acceleration. In a piezoelectric accelerometer, piezoelectric crystals are used, which generate an electric voltage when squeezed. The changes in voltage can be translated into acceleration. The data patterns captured by the accelerometer can be used to detect physical activities of the user such as running, walking, and bicycling.
(2) The gyroscope detects the orientation of the phone very precisely. Orientation is measured using capacitive changes when a seismic mass moves in a particular direction.
(3) The camera and microphone are very powerful sensors since they capture visual and audio information, which can then be analyzed and processed to detect various types of contextual information. For example, we can infer a user's current environment and the interactions that she is having. To make sense of the audio data, technologies such as voice recognition and acoustic feature analysis can be exploited.
(4) The magnetometer detects magnetic fields. It can be used as a digital compass and in applications that detect the presence of metals.
(5) The GPS (Global Positioning System) receiver detects the location of the phone, which is one of the most important pieces of contextual information for smart applications. The location is detected using the principle of trilateration [28]: the distance is measured from three or more satellites (or mobile phone towers in the case of A-GPS) and the coordinates are computed.
(6) The light sensor detects the intensity of ambient light. It can be used for setting the brightness of the screen and in other applications in which some action is to be taken depending on the intensity of ambient light. For example, we can control the lights in a room.
(7) The proximity sensor uses an infrared (IR) LED, which emits IR rays. These rays bounce back when they strike an object. Based on the time difference, the distance to different objects from the phone can be measured. For example, it can determine when the phone is close to the face while talking. It can also be used in applications that have to trigger some event when an object approaches the phone.
(8) Some smartphones, such as Samsung's Galaxy S4, also have a thermometer, barometer, and humidity sensor to measure the temperature, atmospheric pressure, and humidity, respectively.
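The trilateration principle mentioned for GPS in item (5) can be illustrated with a small 2-D example: subtracting the pairwise circle equations yields a linear system in the receiver's coordinates. The anchor points and distances below are made-up test values.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for the 2-D point at distances r1, r2, r3 from points p1, p2, p3."""
    # Subtracting the circle equations pairwise gives two linear equations.
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    bx, by = 2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])
    c1 = r1**2 - r2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    c2 = r1**2 - r3**2 + p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2
    det = ax * by - ay * bx          # solve the 2x2 system by Cramer's rule
    x = (c1 * by - c2 * ay) / det
    y = (ax * c2 - bx * c1) / det
    return x, y

# Three anchors and their measured distances to a receiver at (3, 4).
x, y = trilaterate((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
print(round(x, 6), round(y, 6))  # recovers the position (3.0, 4.0)
```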
We have studied many smart applications that use sensor data collected from
smartphones. For example, activity detection [29] is achieved by applying machine
learning algorithms to the data collected by smartphone sensors. It detects activities such
as running, going up and down stairs, walking, driving, and cycling. The application is
trained with patterns of data using data sets recorded by sensors when these activities are
being performed.
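Activity detection of this kind can be sketched as a toy nearest-centroid classifier over accelerometer features; the centroids, feature choice, and labels below are invented for illustration, whereas real systems learn them from recorded sensor data sets.

```python
def features(window):
    """Mean and variance of a window of acceleration magnitudes."""
    mean = sum(window) / len(window)
    var = sum((v - mean) ** 2 for v in window) / len(window)
    return mean, var

# Hypothetical per-activity feature centroids, as if learned from training data.
centroids = {"walking": (1.2, 0.4), "running": (2.5, 1.8), "still": (1.0, 0.01)}

def classify(window):
    """Assign the window to the activity with the nearest feature centroid."""
    m, v = features(window)
    return min(centroids,
               key=lambda a: (centroids[a][0] - m) ** 2 + (centroids[a][1] - v) ** 2)

print(classify([1.0, 3.8, 1.2, 4.0, 2.5]))  # high mean and variance -> "running"
```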
Many health and fitness applications are being built to keep track of a person’s health
continuously using smartphones. They keep track of users’ physical activities, diet,
exercises, and lifestyle to determine the fitness level and give suggestions to the user
accordingly. Wang et al. [30] describe a mobile application that is based completely on a
smartphone. They use it to assess the overall mental health and performance of a college
student. To track the location and activities in which the student is involved, activity
recognition (accelerometer) and GPS data are used. To keep a check on how much the
student sleeps, the accelerometer and light sensors are used. For social life and
conversations, audio data from a microphone is used. The application also conducts quick
questionnaires with the students to know about their mood. All this data can be used to
assess the stress levels, social life, behavior, and exercise patterns of a student.
Another application by McClernon and Choudhury [31] detects when the user is going to
smoke using context information such as the presence of other smokers, location, and
associated activities. The sensors provide information related to the user’s movement,
location, visual images, and surrounding sounds. To summarize, smartphone sensors are
being used to study different kinds of human behavior (see [32]) and to improve the
quality of human life.
5.2. Medical Sensors
The Internet of Things can be very beneficial for health care applications. We can use
sensors that measure and monitor various medical parameters in the human body
[33]. These applications can monitor a patient's health when the patient is not in a
hospital or is alone. Subsequently, they can provide real time feedback to the
doctor, relatives, or the patient. McGrath and Scanaill [34] have described in detail the
different sensors that can be worn on the body for monitoring a person's health.
There are many wearable sensing devices available in the market. They are equipped with
medical sensors that are capable of measuring different parameters such as the heart rate,
pulse, blood pressure, body temperature, respiration rate, and blood glucose levels [35].
These wearables include smart watches, wristbands, monitoring patches, and smart
textiles.
Moreover, smart watches and fitness trackers are becoming fairly popular in the market
as companies such as Apple, Samsung, and Sony are coming up with very innovative
features. For example, a smart watch includes features such as connectivity with a
smartphone, sensors such as an accelerometer, and a heart rate monitor (see Figure 4).
Figure 4: Smart watches and fitness trackers
(source: https://www.pebble.com/ and http://www.fitbit.com/).
Another novel IoT device with a lot of promise is the monitoring patch, which is pasted
on the skin. Monitoring patches are like tattoos: they are stretchable,
disposable, and very cheap. These patches are meant to be worn by the patient for a
few days to monitor a vital health parameter continuously [15]. All the electronic
components are embedded in these rubbery structures. They can even transmit the sensed
data wirelessly. Just like a tattoo, these patches can be applied on the skin as shown in
Figure 5. One of the most common applications of such patches is to monitor blood
pressure.
Figure 5: Embedded skin patches (source: MC10 Electronics).
A very important consideration here is the context [34]. The data collected by the
medical sensors must be combined with contextual information such as physical activity.
For example, the heart rate depends on the context: it increases when we exercise, and in
that case an elevated heart rate cannot be deemed abnormal. Therefore, we need to combine data from
different sensors for making the correct inference.
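The point about context can be illustrated with a toy rule: the same heart-rate reading is judged differently depending on the activity context reported by other sensors. The thresholds below are illustrative, not clinical values.

```python
def heart_rate_alert(bpm, activity):
    """Flag a heart-rate reading as abnormal only relative to its context."""
    resting_limit = 100   # assumed upper bound at rest
    exercise_limit = 180  # assumed upper bound during exercise
    limit = exercise_limit if activity == "exercising" else resting_limit
    return bpm > limit

assert heart_rate_alert(130, "resting")        # abnormal at rest
assert not heart_rate_alert(130, "exercising") # expected while exercising
```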
5.3. Neural Sensors
Today, it is possible to understand neural signals in the brain, infer the state of the brain,
and train it for better attention and focus. This is known as neurofeedback [36] (see
Figure 6). The technology used for reading brain signals is called EEG
(electroencephalography), and systems built on it are often called brain computer
interfaces. The neurons inside the brain communicate electrically and create an electric
field, which can be measured from outside in terms of frequencies. Brain waves can be
categorized into alpha, beta, gamma, theta, and delta waves depending upon the frequency.
Figure 6: Brain sensing headband with embedded neurosensors
(source: http://www.choosemuse.com/).
Based on the type of wave, it can be inferred whether the brain is calm or wandering in
thoughts. This type of neurofeedback can be obtained in real time and can be used to train
the brain to focus, pay better attention towards things, manage stress, and have better
mental well-being.
5.4. Environmental and Chemical Sensors
Environmental sensors are used to sense parameters in the physical environment such as
temperature, humidity, pressure, water pollution, and air pollution. Parameters such as the
temperature and pressure can be measured with a thermometer and barometer. Air quality
can be measured with sensors, which sense the presence of gases and other particulate
matter in the air (refer to Sekhar et al. [37] for more details).
Chemical sensors are used to detect chemical and biochemical substances. These sensors
consist of a recognition element and a transducer. The electronic nose (e-nose) and
electronic tongue (e-tongue) are technologies that can be used to sense chemicals on the
basis of odor and taste, respectively [38]. The e-nose and e-tongue consist of an array of
chemical sensors coupled with advanced pattern recognition software. The sensors inside
the e-nose and e-tongue produce complex data, which is then analyzed through pattern
recognition to identify the stimulus.
These sensors can be used for monitoring pollution levels in smart cities [39], keeping a
check on food quality in smart kitchens, and testing food and agricultural products in
supply chain applications.
5.5. Radio Frequency Identification (RFID)
RFID is an identification technology in which an RFID tag (a small chip with an antenna)
carries data, which is read by an RFID reader. The tag transmits the data stored in it via
radio waves. It is similar to bar code technology, but unlike a traditional bar code, it does
not require line of sight communication between the tag and the reader, and it can identify
itself from a distance even without a human operator. The range of RFID varies with the
frequency and can go up to hundreds of meters.
RFID tags are of two types: active and passive. Active tags have their own power source,
whereas passive tags do not. Passive tags draw power from the
electromagnetic waves emitted by the reader and are thus cheap and have a long lifetime
[40, 41].
There are two types of RFID technologies: near and far [40]. A near RFID reader uses a
coil through which we pass alternating current and generate a magnetic field. The tag has
a smaller coil, which generates a potential due to the ambient changes in the magnetic
field. This voltage is then coupled with a capacitor to accumulate a charge, which then
powers up the tag chip. The tag can then produce a small magnetic field that encodes the
signal to be transmitted, and this can be picked up by the reader.
In far RFID, there is a dipole antenna in the reader, which propagates EM waves. The tag
also has a dipole antenna on which an alternating potential difference appears and it is
powered up. It can then use this power to transmit messages.
RFID technology is being used in various applications such as supply chain management,
access control, identity authentication, and object tracking. The RFID tag is attached to
the object to be tracked and the reader detects and records its presence when the object
passes by it. In this manner, object movement can be tracked and RFID can serve as a
search engine for smart things.
For access control, an RFID tag is attached to the authorized object. For example, small
chips are glued to the front of vehicles. When a car reaches a barricade with a
reader, the reader reads the tag data and decides whether the car is authorized. If it is,
the barricade opens automatically. Similarly, RFID cards can be issued to people, who are
then identified by an RFID reader and given access accordingly.
The low level data collected from the RFID tags can be transformed into higher level
insights in IoT applications [42]. There are many user level tools available, in which all
the data collected by particular RFID readers and data associated with the RFID tags can
be managed. The high level data can be used to draw inferences and take further action.
5.6. Actuators
Let us look at some examples of actuators that are used in the Internet of Things. An
actuator is a device that effects a change in the environment by converting
electrical energy into some other form of useful energy. Some examples are heating or
cooling elements, speakers, lights, displays, and motors.
The actuators, which induce motion, can be classified into three categories, namely,
electrical, hydraulic, and pneumatic actuators depending on their operation. Hydraulic
actuators facilitate mechanical motion using fluid or hydraulic power. Pneumatic
actuators use the pressure of compressed air and electrical ones use electrical energy.
As an example, we can consider a smart home system, which consists of many sensors
and actuators. The actuators are used to lock/unlock the doors, switch on/off the lights or
other electrical appliances, alert users of any threats through alarms or notifications, and
control the temperature of a home (via a thermostat).
A sophisticated example of an actuator used in IoT is a digital finger, which is controlled
wirelessly and used to turn switches on/off (or operate anything else that requires a small
motion).
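As a small illustration of actuation in a smart home, a hypothetical thermostat can switch a heating element around a set point; the class and method names are assumptions for this sketch, not a real device API.

```python
class Thermostat:
    """Toy thermostat: switches a heating actuator around a set point."""

    def __init__(self, target):
        self.target = target
        self.heating = False

    def actuate(self, measured_temp):
        # Turn the heating element on below the set point, off above it.
        self.heating = measured_temp < self.target
        return "heat on" if self.heating else "heat off"

t = Thermostat(target=21.0)
print(t.actuate(18.5))  # below set point -> heat on
print(t.actuate(22.0))  # above set point -> heat off
```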
6. Preprocessing
As smart things collect huge amounts of sensor data, compute and storage resources are
required to analyze, store, and process this data. The most common compute and storage
resources are cloud based, because the cloud offers massive data handling, scalability, and
flexibility. But the cloud alone is not sufficient to meet the requirements of many IoT
applications, for the following reasons [43]:
(1) Mobility: most smart devices are mobile. Their changing location makes it difficult to communicate with the cloud data center because of changing network conditions across different locations.
(2) Reliable and real time actuation: communicating with the cloud and getting back responses takes time. Latency sensitive applications, which need real time responses, may not be feasible with this model. Also, the communication may be lossy due to wireless links, which can lead to unreliable data.
(3) Scalability: more devices mean more requests to the cloud, thereby increasing the latency.
(4) Power constraints: communication consumes a lot of power, and IoT devices are battery powered. They thus cannot afford to communicate all the time.
To solve the problem of mobility, researchers have proposed mobile cloud computing
(MCC) [44]. But there are still problems associated with latency and power. MCC also
suffers from mobility problems such as frequently changing network conditions due to
which problems such as signal fading and service degradation arise.
As a solution to these problems, we can bring some compute and storage resources to the
edge of the network instead of relying on the cloud for everything. This concept is known
as fog computing [11, 45] (also see Section 2.2). The fog can be viewed as a cloud,
which is close to the ground. Data can be stored, processed, filtered, and analyzed on the
edge of the network before sending it to the cloud through expensive communication
media. The fog and cloud paradigms go together. Both of them are required for the
optimal performance of IoT applications. A smart gateway [13] can be employed
between underlying networks and the cloud to realize fog computing as shown in
Figure 7.
Figure 7: Smart gateway for preprocessing.
The features of fog computing [11] are as follows:
(1) Low latency: less time is required to access computing and storage resources on fog nodes (smart gateways).
(2) Location awareness: since the fog is located at the edge of the network, it is aware of the location and context of the applications. This is beneficial because context awareness is an important feature of IoT applications.
(3) Distributed nodes: fog nodes are distributed, unlike centralized cloud nodes. Multiple fog nodes need to be deployed in distributed geographical areas in order to provide services to mobile devices in those areas. For example, in vehicular networks, deploying fog nodes along highways can provide low latency data/video streaming to vehicles.
(4) Mobility: the fog supports mobility, as smart devices can directly communicate with smart gateways present in their proximity.
(5) Real time response: fog nodes can give an immediate response, unlike the cloud, which has a much greater latency.
(6) Interaction with the cloud: fog nodes can further interact with the cloud and communicate only the data that needs to be sent to the cloud.
The tasks performed by a smart gateway [46] are collecting sensor data, preprocessing
and filtering the collected data, providing compute, storage, and networking services to IoT
devices, communicating with the cloud and sending only necessary data, monitoring the
power consumption of IoT devices, monitoring the activities and services of IoT devices, and
ensuring the security and privacy of data. Some applications of fog computing are as follows
[10, 11]:
(1) Smart vehicular networks: smart traffic lights are deployed as smart gateways to locally detect pedestrians and vehicles through sensors, calculate their distance and speed, and finally infer traffic conditions. This is used to warn oncoming vehicles. These sensors also interact with neighboring smart traffic lights to perform traffic management tasks. For example, if the sensors detect an approaching ambulance, they can change the traffic lights to let the ambulance pass first and inform other lights to do the same. The data collected by these smart traffic lights is analyzed locally in real time to serve the real time needs of traffic management. Further, data from multiple gateways is combined and sent to the cloud for global analysis of traffic in the city.
(2) Smart grid: the smart electrical grid facilitates load balancing of energy on the basis of usage and availability, in order to switch automatically to alternative sources of energy such as solar and wind power. This balancing can be done at the edge of the network using smart meters or microgrids connected by smart gateways. These gateways can analyze and process data. They can then project future energy demand, calculate the availability and price of power, and supply power from both conventional and alternative sources to consumers.
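The filter-and-forward role of a smart gateway can be sketched as follows: implausible samples are dropped locally and only a compact summary (the format here is an assumption) is forwarded to the cloud.

```python
def gateway_summarize(readings, low, high):
    """Filter out-of-range (faulty) samples, then aggregate locally."""
    valid = [r for r in readings if low <= r <= high]
    return {
        "count": len(valid),
        "mean": round(sum(valid) / len(valid), 2),
        "max": max(valid),
    }

# Raw temperature samples with two implausible sensor spikes.
raw = [21.5, 22.0, -40.0, 21.5, 99.9, 23.0]
summary = gateway_summarize(raw, low=-10, high=60)
print(summary)  # only this summary, not all six samples, goes to the cloud
```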
7. Communication
As the Internet of Things grows very rapidly, a large number of heterogeneous smart
devices are connecting to the Internet. IoT devices are battery powered, with minimal
compute and storage resources. Because of their constrained nature, various
communication challenges are involved [19]:
(1) Addressing and identification: since millions of smart things will be connected to the Internet, they will have to be identified through a unique address, on the basis of which they communicate with each other. For this, we need a large addressing space and a unique address for each smart object.
(2) Low power communication: communicating data between devices is a power consuming task, especially wireless communication. Therefore, we need a solution that facilitates communication with low power consumption.
(3) Routing protocols with low memory requirements and efficient communication patterns.
(4) High speed and nonlossy communication.
(5) Mobility of smart things.
IoT devices typically connect to the Internet through the IP (Internet Protocol) stack. This
stack is very complex and demands a large amount of power and memory from the
connecting devices. The IoT devices can also connect locally through non-IP networks,
which consume less power, and connect to the Internet via a smart gateway. Non-IP
communication channels such as Bluetooth, RFID, and NFC are fairly popular but are
limited in their range (up to a few meters). Therefore, their applications are limited to
small personal area networks. Personal area networks (PAN) are being widely used in
IoT applications such as wearables connected to smartphones. To increase the range of
such local networks, the IP stack had to be modified to facilitate low power
communication. One such solution is 6LoWPAN, which incorporates
IPv6 with low power personal area networks. The range of a PAN with 6LoWPAN is
similar to that of a local area network, and the power consumption is much lower.
The leading communication technologies used in the IoT world are IEEE 802.15.4, low
power WiFi, 6LoWPAN, RFID, NFC, Sigfox, LoraWAN, and other proprietary protocols
for wireless networks.
7.1. Near Field Communication (NFC)
Many times, data from a single sensor is not useful in monitoring large areas and
complex activities. Different sensor nodes need to interact with each other wirelessly.
The disadvantage of non-IP technologies such as RFID, NFC, and Bluetooth is that their
range is very small. So, they cannot be used in many applications, where a large area
needs to be monitored through many sensor nodes deployed in diverse locations. A
wireless sensor network (WSN) consists of tens to thousands of sensor nodes connected
using wireless technologies. They collect data about the environment and communicate it
to gateway devices that relay the information to the cloud over the Internet. The
communication between nodes in a WSN may be direct or multihop. The sensor nodes
are of a constrained nature, but gateway nodes have sufficient power and processing
resources. The popular network topologies used in WSNs are star, mesh, and
hybrid networks. Most of the communication in a WSN is based on the IEEE 802.15.4
standard (discussed in Section 7.3). There are clearly a lot of protocols that can be used
in IoT scenarios. Let us discuss the design of a typical IoT network protocol stack with
the most popular alternatives.
7.3. IoT Network Protocol Stack
The Internet Engineering Task Force (IETF) has developed alternative protocols for
communication between IoT devices using IP because IP is a flexible and reliable
standard [50, 51]. The Internet Protocol for Smart Objects (IPSO) Alliance has published
various white papers describing alternative protocols and standards for the layers of the
IP stack, along with an additional adaptation layer, which is used for communication
between smart objects [51–54].
(1) Physical and MAC Layer (IEEE 802.15.4). The IEEE 802.15.4 protocol is designed
for enabling communication between compact and inexpensive low power embedded
devices that need a long battery life. It defines standards and protocols for the physical
and link (MAC) layer of the IP stack. It supports low power communication along with
low cost and short range communication. In the case of such resource constrained
environments, we need a small frame size, low bandwidth, and low transmit power.
Transmission requires very little power (maximum one milliwatt), which is only one
percent of that used in WiFi or cellular networks. This limits the range of communication.
Because of the limited range, the devices have to operate cooperatively in order to enable
multihop routing over longer distances. As a result, the packet size is limited to 127 bytes
only, and the rate of communication is limited to 250 kbps. The coding scheme in IEEE
802.15.4 has built in redundancy, which makes the communication robust, allows us to
detect losses, and enables the retransmission of lost packets. The protocol also supports
short 16-bit link addresses to decrease the size of the header, communication overheads,
and memory requirements [55].
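The arithmetic behind these limits can be illustrated with a short sketch. The MAC header sizes below are representative assumptions, since the actual overhead depends on the addressing mode and security options in use:

```python
# Illustrative payload budget for an IEEE 802.15.4 frame. The header sizes
# are representative assumptions; the real MAC overhead varies with the
# addressing mode and security options.
PHY_MAX_FRAME = 127        # maximum PHY payload (bytes)
MAC_HEADER_SHORT = 9       # typical MAC header with short 16-bit addresses
MAC_HEADER_LONG = 21       # typical MAC header with 64-bit extended addresses
MAC_FOOTER = 2             # frame check sequence (CRC)

def app_payload(mac_header: int) -> int:
    """Bytes left for the upper layers after MAC overhead."""
    return PHY_MAX_FRAME - mac_header - MAC_FOOTER

print(app_payload(MAC_HEADER_SHORT))  # short addressing leaves more room
print(app_payload(MAC_HEADER_LONG))
```

The difference between the two budgets shows why the standard's short 16-bit link addresses matter in a 127 byte frame.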
Readers can refer to the survey by Vasseur et al. [54] for more information on different
physical and link layer technologies for communication between smart objects.
(2) Adaptation Layer. IPv6 is considered the best protocol for communication in the IoT
domain because of its scalability and stability. Such bulky IP protocols were initially not
thought to be suitable for communication in scenarios with low power wireless links such
as IEEE 802.15.4.
6LoWPAN, an acronym for IPv6 over low power wireless personal area networks, is a
very popular standard for wireless communication. It enables communication using IPv6
over the IEEE 802.15.4 [52] protocol. This standard defines an adaptation layer between
the 802.15.4 link layer and the transport layer. 6LoWPAN devices can communicate with
all other IP based devices on the Internet. The choice of IPv6 is because of the large
addressing space available in IPv6. 6LoWPAN networks connect to the Internet via a
gateway (WiFi or Ethernet), which also has protocol support for conversion between
IPv4 and IPv6, as today’s deployed Internet is mostly IPv4. IPv6 headers are too large to fit within the small 127 byte MTU of the 802.15.4 standard. Hence, compressing and fragmenting packets to carry only the essential information is an optimization that the adaptation layer performs.
Specifically, the adaptation layer performs the following three optimizations in order to reduce communication overhead [55]:
(i) Header compression: 6LoWPAN defines header compression of IPv6 packets to decrease the overhead of IPv6. Some of the fields are deleted because they can be derived from link level information or can be shared across packets.
(ii) Fragmentation: the minimum MTU (maximum transmission unit) of IPv6 is 1280 bytes, whereas the maximum size of a frame in IEEE 802.15.4 is 127 bytes. Therefore, the adaptation layer fragments the IPv6 packet.
(iii) Link layer forwarding: 6LoWPAN also supports mesh-under routing, which is done at the link layer using link level short addresses instead of in the network layer. This feature can be used to communicate within a 6LoWPAN network.
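The fragmentation step can be sketched in a few lines. The 80-byte per-frame budget is an illustrative assumption for what remains of a 127 byte frame after the link layer and fragmentation headers:

```python
def fragment(payload: bytes, frame_capacity: int) -> list[bytes]:
    """Split an (already header-compressed) IPv6 packet into link fragments.

    frame_capacity is the number of payload bytes one 802.15.4 frame can
    carry after MAC and 6LoWPAN fragmentation headers (assumed here).
    """
    return [payload[i:i + frame_capacity]
            for i in range(0, len(payload), frame_capacity)]

# A minimum-MTU IPv6 packet (1280 bytes) versus an illustrative 80-byte
# per-frame budget: the adaptation layer must emit many link fragments.
packet = bytes(1280)
fragments = fragment(packet, 80)
print(len(fragments))  # 16 link layer fragments
```

Each lost fragment forces retransmission, which is one reason header compression, by shrinking the total payload, is so valuable on these links.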
(3) Network Layer. The network layer is responsible for routing the packets received
from the transport layer. The IETF Routing over Low Power and Lossy Networks
(ROLL) working group has developed a routing protocol (RPL) for Low Power and
Lossy Networks (LLNs) [53].
For such networks, RPL is an open routing protocol, based on distance vectors. It
describes how a destination oriented directed acyclic graph (DODAG) is built with the
nodes after they exchange distance vectors. A set of constraints and an objective function are used to build the graph with the best path [53]. The objective function and constraints may differ depending on the deployment's requirements. For example, a constraint can be to avoid battery powered nodes or to prefer encrypted links. The objective function can aim to minimize the latency or the expected number of packets that need to be sent.
The construction of this graph starts from the root node. The root sends messages to neighboring nodes, which process the message and decide whether or not to join depending upon the constraints and the objective function. Subsequently, they forward the message to their neighbors. In this manner, the message travels to the leaf nodes, and a graph is formed. Now all the nodes in the graph can send packets upwards, hop by hop, to the root. We can realize point to point routing as follows: we send packets to a common ancestor, from which they travel downwards (towards the leaves) to reach the destination.
To manage the memory requirements of nodes, nodes are classified into storing and
nonstoring nodes depending upon their ability to store routing information. When nodes
are in a nonstoring mode and a downward path is being constructed, the route
information is attached to the incoming message and forwarded further till the root. The
root receives the whole path in the message and sends a data packet along with the path
message to the destination hop by hop. But there is a trade-off here because nonstoring
nodes need more power and bandwidth to send additional route information as they do
not have the memory to store routing tables.
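The parent selection at the heart of DODAG construction can be sketched as a distance vector relaxation. This toy model (made-up topology and link costs, an ETX-like additive objective) omits RPL's actual control messages and timers:

```python
# Toy sketch of RPL-style DODAG construction: each node picks, among
# neighbors that have already joined, the parent minimizing
# rank = parent_rank + link_cost (an additive, ETX-like objective).
# The topology and costs below are made up for illustration.
INF = float("inf")

def build_dodag(root, links):
    """links: dict mapping (a, b) -> cost for bidirectional links."""
    rank = {root: 0}
    parent = {root: None}
    changed = True
    while changed:                      # relax until ranks stabilize
        changed = False
        for (a, b), cost in links.items():
            for u, v in ((a, b), (b, a)):
                if u in rank and rank[u] + cost < rank.get(v, INF):
                    rank[v] = rank[u] + cost
                    parent[v] = u       # v joins the DODAG via u
                    changed = True
    return rank, parent

links = {("root", "A"): 1, ("root", "B"): 4, ("A", "B"): 1, ("B", "C"): 2}
rank, parent = build_dodag("root", links)
print(parent["B"], rank["C"])  # B joins via A (rank 2), so C gets rank 4
```

Note how B rejects its direct but costly link to the root in favor of the cheaper two-hop path through A, exactly the kind of decision the objective function drives in RPL.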
(4) Transport Layer. TCP is not a good option for communication in low power
environments as it has a large overhead owing to the fact that it is a connection oriented
protocol. Therefore, UDP is preferred because it is a connectionless protocol and has low
overhead.
(5) Application Layer. The application layer is responsible for data formatting and
presentation. The application layer in the Internet is typically based on HTTP. However,
HTTP is not suitable in resource constrained environments because it is fairly verbose in
nature and thus incurs a large parsing overhead. Many alternate protocols have been
developed for IoT environments such as CoAP (Constrained Application Protocol) and
MQTT (Message Queue Telemetry Transport).
(a) Constrained Application Protocol: CoAP can be thought of as an alternative to HTTP. It is used in many IoT applications [56, 57]. Unlike HTTP, it incorporates optimizations for constrained application environments [50]. It uses the EXI (Efficient XML Interchange) data format, which is a binary format far more space efficient than plain text HTML/XML. Other supported features are built in header compression, resource discovery, autoconfiguration, asynchronous message exchange, congestion control, and support for multicast messages. There are four types of messages in CoAP: confirmable, nonconfirmable, acknowledgement, and reset (nack). For reliable transmission over UDP, confirmable messages are used [58]. The response can be piggybacked in the acknowledgement itself. Furthermore, CoAP uses DTLS (Datagram Transport Layer Security) for security.
(b) Message Queue Telemetry Transport: MQTT is a publish/subscribe protocol that runs over TCP. It was developed by IBM [59] primarily as a client/server protocol. The clients are publishers/subscribers, and the server acts as a broker to which clients connect over TCP. Clients can publish or subscribe to a topic. This communication takes place through the broker, whose job is to coordinate subscriptions and to authenticate clients for security. MQTT is a lightweight protocol, which makes it suitable for IoT applications. However, because it runs over TCP, it cannot be used with all types of IoT applications. Moreover, it uses plain text for topic names, which increases its overhead.
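The publish/subscribe pattern that MQTT implements can be sketched without any networking. This toy in-memory broker is purely illustrative and is not the MQTT wire protocol:

```python
from collections import defaultdict

# A toy in-memory broker illustrating the publish/subscribe pattern:
# publishers and subscribers never talk directly; the broker coordinates
# delivery by topic. No networking, QoS, or authentication is modeled.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, callback, topic):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(topic, message)           # deliver to each subscriber

received = []
broker = Broker()
broker.subscribe(lambda t, m: received.append((t, m)), "home/temperature")
broker.publish("home/temperature", "21.5")
broker.publish("home/humidity", "40")          # no subscriber: dropped
print(received)  # [('home/temperature', '21.5')]
```

The key property, visible even in this sketch, is decoupling: the publisher needs no knowledge of who, if anyone, is listening.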
MQTT-S/MQTT-SN is an extension of MQTT [60], which is designed for low power
and low cost devices. It is based on MQTT but has some optimizations for WSNs as
follows [61]. Topic names are replaced by short topic IDs, which reduce the transmission overhead. Frequently used topics can be preregistered so that they do not require registration at runtime. Messages are also
split so that only the necessary information is sent. Further, for power conservation, there
is an offline procedure for clients who are in a sleep state. Messages can be buffered and
later read by clients when they wake up. Clients connect to the broker through a gateway
device, which resides within the sensor network and connects to the broker.
7.4. Bluetooth Low Energy (BLE)
Bluetooth Low Energy, also known as “Bluetooth Smart,” was developed by the
Bluetooth Special Interest Group. It has a relatively shorter range and consumes lower
energy as compared to competing protocols. The BLE protocol stack is similar to the
stack used in classic Bluetooth technology. It has two parts: controller and host. The
physical and link layer are implemented in the controller. The controller is typically a
SOC (System on Chip) with a radio. The functionalities of upper layers are included in
the host [62]. BLE is not compatible with classic Bluetooth. Let us look at the differences
between classic Bluetooth and BLE [63, 64].
The main difference is that BLE does not support data streaming. Instead, it supports
quick transfer of small packets of data (packet size is small) with a data rate of 1 Mbps.
There are two types of devices in BLE: master and slave. The master acts as a central
device that can connect to various slaves. Let us consider an IoT scenario where a phone or PC serves as the master, and devices such as a thermostat, fitness tracker, smart watch, or any other monitoring device act as slaves. In such cases, slaves must be very power
efficient. Therefore, to save energy, slaves are by default in sleep mode and wake up
periodically to receive packets from the master.
In classic Bluetooth, the connection is on all the time even if no data transfer is going on.
Additionally, it supports 79 data channels (1 MHz channel bandwidth) and a data rate of 1 million symbols/s, whereas BLE supports 40 channels with 2 MHz channel bandwidth (double that of classic Bluetooth) and a data rate of 1 million symbols/s. BLE supports low duty cycle requirements as its packet size is small and the time taken to transmit the smallest packet is as small as 80 μs. The BLE protocol stack also supports IP based communication.
An experiment conducted by Siekkinen et al. [65] recorded the number of bytes
transferred per Joule to show that BLE consumes far less energy as compared to
competing protocols such as Zigbee. The energy efficiency of BLE is 2.5 times better
than Zigbee.
7.5. Low Power WiFi
The WiFi Alliance has recently developed “WiFi HaLow,” which is based on the IEEE
802.11ah standard. It consumes lower power than a traditional WiFi device and also has a
longer range. This is why this protocol is suitable for Internet of Things applications. The
range of WiFi HaLow is nearly twice that of traditional WiFi.
Like other WiFi devices, devices supporting WiFi HaLow also support IP connectivity,
which is important for IoT applications. Let us look at the specifications of the IEEE
802.11ah standard [66, 67]. This standard was developed to deal with wireless sensor
network scenarios, where devices are energy constrained and require relatively long
range communication. IEEE 802.11ah operates in the sub-gigahertz band (900 MHz).
Because of the relatively lower frequency, the range is longer since higher frequency
waves suffer from higher attenuation. We can extend the range (currently 1 km) by
lowering the frequency further; however, the data rate will also be lower and thus the
tradeoff is not justified. IEEE 802.11ah is also designed to support large star shaped
networks, where a lot of stations are connected to a single access point.
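The range advantage of the sub-gigahertz band can be quantified with the standard free-space path loss formula. The comparison below (900 MHz versus the 2.4 GHz band of classic WiFi, at an arbitrary 1 km) is illustrative; real deployments also face obstacles and fading:

```python
import math

# Free-space path loss in dB for distance d (km) and frequency f (MHz):
#   FSPL = 20*log10(d) + 20*log10(f) + 32.44
# This standard formula shows why the 900 MHz band used by IEEE 802.11ah
# attenuates less than the 2.4 GHz band of classic WiFi.
def fspl_db(d_km: float, f_mhz: float) -> float:
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

loss_900 = fspl_db(1.0, 900)     # 802.11ah band at 1 km
loss_2400 = fspl_db(1.0, 2400)   # classic WiFi band at 1 km
print(round(loss_2400 - loss_900, 1))  # about 8.5 dB less loss at 900 MHz
```

The gap depends only on the frequency ratio (20·log10(2400/900)), so the sub-gigahertz advantage holds at every distance.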
7.6. Zigbee
It is based on the IEEE 802.15.4 communication protocol standard and is used for
personal area networks or PANs [68]. The IEEE 802.15.4 standard has low power MAC
and physical layers and has already been explained in Section 7.3. Zigbee was developed
by the Zigbee Alliance, which develops reliable, low energy, and inexpensive communication solutions. The range of Zigbee communication is quite short (10–100 meters). The
details of the network and application layers are also specified by the Zigbee standard.
Unlike BLE, the network layer here provides for multihop routing.
There are three types of devices in a Zigbee network: FFD (Fully Functional Device),
RFD (Reduced Functional Device), and one Zigbee coordinator. An FFD node can additionally act as a router. Zigbee supports star, tree, and mesh topologies. The routing
scheme depends on the topology. Other features of Zigbee are discovery and maintenance
of routes, support for nodes joining/leaving the network, short 16-bit addresses, and
multihop routing.
The framework for communication and distributed application development is provided
by the application layer. The application layer consists of Application Objects (APO),
Application Sublayer (APS), and a Zigbee Device Object (ZDO). APOs are spread over
the network nodes. These are pieces of software that control some underlying device hardware (for example, a switch or a transducer). The device and network management
services are provided by the ZDO, which are then used by the APOs. Data transfer
services are provided by the Application Sublayer to the APOs and ZDO. It is responsible
for secure communication between the Application Objects. These features can be used
to create a large distributed application.
7.7. Integration of RFID and WSN
RFID and wireless sensor networks (WSN) are both important technologies in the IoT
domain. RFID can only be used for object identification, but WSNs serve a far greater
purpose. The two are very different but merging them has many advantages. The
following capabilities can be added to RFID to enhance its usability:
(a) Sensing capabilities
(b) Multihop communication
(c) Intelligence
RFID is inexpensive and uses very little power. That is why its integration with WSN is
very useful. The integration is possible in the following ways [69, 70]:
(a) Integration of RFID tags with sensors: RFID tags with sensing capabilities are called sensor tags. These sensor tags sense data from the environment, and the RFID reader can then read this sensed data from the tag. In such cases, simple RFID protocols are used, where there is only single hop communication. RFID sensing technologies can be further classified on the basis of the power requirement of sensor tags (active and passive), as explained earlier in the section on RFID (see Section 5.5).
(b) Integration of RFID tags with WSN nodes: the communication capabilities of sensor tags are limited to a single hop. To extend these capabilities, the sensor tag is equipped with a wireless transceiver, a small amount of flash memory, and computational capabilities such that it can initiate communication with other nodes and wireless devices. The nodes can in this fashion be used to form a wireless mesh network. In such networks, sensor tags can communicate with each other over a large range (via intermediate hops). With additional processing capabilities at a node, we can reduce the net amount of data communicated and thus increase the power efficiency of the WSN.
(c) Integration of RFID readers with WSN nodes: this type of integration is done to increase the range of RFID readers. The readers are equipped with wireless transceivers and microcontrollers so that they can communicate with each other; tag data can therefore reach a reader that is not in the range of the tag. This takes advantage of the multihop communication of wireless sensor network devices. The data from all the RFID readers in the network ultimately reaches a central gateway or base station that processes the data or sends it to a remote server.
These kinds of integrated solutions have many applications in a diverse set of domains
such as security, healthcare, and manufacturing.
7.8. Low Power Wide-Area-Networks (LPWAN)
Let us now discuss protocols for long range communication between power constrained devices. The LPWAN class of protocols comprises low bit-rate communication technologies for such IoT scenarios.
Let us now discuss some of the most common technologies in this area.
(a) Narrow band IoT: this is a technology made for a large number of energy constrained devices. It is thus necessary to reduce the bit rate. This protocol can be deployed in both the cellular GSM and LTE spectra. The downlink speeds vary between 40 kbps (LTE M2) and 10 Mbps (LTE category 1).
(b) Sigfox: this is another protocol that uses narrow band communication (10 MHz). It uses free sections of the radio spectrum (ISM band) to transmit its data. Instead of 4G networks, Sigfox focuses on using very long waves. Thus, the range can increase up to 1,000 km, and the energy for transmission is significantly lower (0.1%) than that of contemporary cell phones. Again, the cost is bandwidth: a device can transmit only 12 bytes per message and is limited to 140 messages per day. This is reasonable for many kinds of applications: submarine applications, sending control (emergency) codes, geolocation, monitoring remote locations, and medical applications.
(c) Weightless: it uses a differential binary phase shift keying based method to transmit narrow band signals. To avoid interference, the protocol hops across frequency bands (instead of using CSMA). It supports cryptographic encryption and mobility. Along with frequency hopping, two additional mechanisms are used to reduce collisions: the downlink service uses time division multiple access (TDMA), and the uplink service uses multiple subchannels that are first allocated to transmitting nodes by contacting a central server. Some applications include smart meters, vehicle tracking, health monitoring, and industrial machine monitoring.
(d) Neul: this protocol operates in the sub-1 GHz band. It uses small chunks of the TV whitespace spectrum to create low cost and low power networks with very high scalability. It has a 10 km range and uses the Weightless protocol for communication.
(e) LoRaWAN: this protocol is similar to Sigfox. It targets wide area network applications and is designed to be a low power protocol. Its data rates can vary from 0.3 kbps to 50 kbps, and it can be used within an urban or suburban environment (2–5 km range in a crowded urban area). It was designed to serve as a standard for long range IoT protocols. It thus has features to support multitenancy, enable multiple applications, and span several different network domains.
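To appreciate how tight the Sigfox limits above are, a quick back-of-the-envelope calculation based on the figures in the text:

```python
# Sigfox duty limits from the text: at most 12 bytes per message and
# 140 messages per device per day. The resulting daily uplink budget is
# tiny, which is why the protocol suits sparse telemetry (meter readings,
# geolocation fixes, emergency codes) rather than bulk data transfer.
BYTES_PER_MESSAGE = 12
MESSAGES_PER_DAY = 140

daily_budget = BYTES_PER_MESSAGE * MESSAGES_PER_DAY
print(daily_budget)  # 1680 bytes of uplink per device per day
```

For comparison, that whole daily budget is smaller than a single typical IPv6 packet (1280 byte minimum MTU).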
7.9. Lightweight Application Layer Protocols
Along with physical and MAC layer protocols, we also need application layer protocols
for IoT networks. These lightweight protocols need to be able to carry application
messages, while simultaneously reducing power as far as possible.
OMA Lightweight M2M (LWM2M) is one such protocol. It defines the communication
protocol between a server and a device. The devices often have limited capabilities and
are thus referred to as constrained devices. The main aims of the OMA protocol are as
follows:
(1) Remote device management.
(2) Transferring service data/information between different nodes in the LWM2M network.
All the protocols in this class treat all the network resources as objects. Such resources
can be created, deleted, and remotely configured. These devices have their unique
limitations and can use different kinds of protocols for internally representing
information. The LWM2M protocol abstracts all of this away and provides a convenient
interface to send messages between a generic LWM2M server and a distributed set of
LWM2M clients.
This protocol is often used along with CoAP (Constrained Application Protocol), an application layer protocol that allows constrained nodes such as sensor motes or small embedded devices to communicate across the Internet. CoAP integrates seamlessly with HTTP, yet it provides additional facilities such as support for multicast operations. It is ideally suited for small devices because of its low overhead, low parsing complexity, and reliance on UDP rather than TCP.
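As an illustration of how compact CoAP is, its fixed 4-byte header (RFC 7252: 2-bit version, 2-bit message type, 4-bit token length, 8-bit code, 16-bit message ID) can be encoded in a few lines. The helper function below is our own sketch, not part of any CoAP library:

```python
# Sketch of the fixed 4-byte CoAP header (RFC 7252): version (2 bits),
# message type (2 bits), token length (4 bits), code (class.detail, 8 bits),
# and a 16-bit message ID. This helper is illustrative, not a library API.
CON, NON, ACK, RST = 0, 1, 2, 3        # CoAP message types

def coap_header(msg_type: int, code_class: int, code_detail: int,
                message_id: int, token_len: int = 0) -> bytes:
    byte0 = (1 << 6) | (msg_type << 4) | token_len   # version is always 1
    byte1 = (code_class << 5) | code_detail          # e.g., GET is 0.01
    return bytes([byte0, byte1]) + message_id.to_bytes(2, "big")

# A confirmable GET request with message ID 0x1234 fits in 4 bytes:
header = coap_header(CON, 0, 1, 0x1234)
print(header.hex())  # '40011234'
```

Contrast this with a plain-text HTTP request line, which already exceeds four bytes before any headers are added; this is the parsing-overhead argument made above.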
8. Middleware
Ubiquitous computing is the core of the Internet of Things, which means incorporating
computing and connectivity in all the things around us. Interoperability of such
heterogeneous devices needs well-defined standards. But standardization is difficult
because of the varied requirements of different applications and devices. For such
heterogeneous applications, the solution is to have a middleware platform, which will
abstract the details of the things for applications. That is, it will hide the details of the
smart things. It should act as a software bridge between the things and the applications. It
needs to provide the required services to the application developers [20] so that they can
focus more on the requirements of applications rather than on interacting with the
baseline hardware. To summarize, the middleware abstracts the hardware and provides an
Application Programming Interface (API) for communication, data management,
computation, security, and privacy.
The challenges addressed by any IoT middleware are as follows [20, 71, 72].
(1) Interoperability and programming abstractions: middleware services enable different types of things to interact with each other easily, facilitating collaboration and information exchange between heterogeneous devices. Interoperability is of three types: network, syntactic, and semantic. Network interoperability deals with heterogeneous interface protocols for communication between devices; it insulates the applications from the intricacies of different protocols. Syntactic interoperability ensures that applications are oblivious to different formats, structures, and encodings of data. Semantic interoperability deals with abstracting the meaning of data within a particular domain; it is loosely inspired by the semantic web.
(2) Device discovery and management: this feature enables a device to be aware of all other devices in its neighborhood and the services they provide. In the Internet of Things, the infrastructure is mostly dynamic, so devices have to announce their presence and the services they provide. The solution needs to be scalable because the number of devices in an IoT network can grow. Most solutions in this domain are loosely inspired by semantic web technologies. The middleware provides APIs to list the IoT devices, their services, and their capabilities. In addition, APIs are typically provided to discover devices based on their capabilities. Finally, any IoT middleware needs to perform load balancing, manage devices based on their levels of battery power, and report device problems to the users.
(3) Scalability: a large number of devices are expected to communicate in an IoT setup. Moreover, IoT applications need to scale due to ever increasing requirements. The middleware should manage this by making the required changes when the infrastructure scales.
(4) Big data and analytics: IoT sensors typically collect a huge amount of data, which must be analyzed in great detail. As a result, many big data algorithms are used to analyze IoT data. Moreover, due to the unreliable nature of the network, some of the data collected might be incomplete. It is necessary to take this into account and extrapolate data by using sophisticated machine learning algorithms.
(5) Security and privacy: IoT applications mostly relate to someone’s personal life or to an industry. Security and privacy issues need to be addressed in all such environments. The middleware should have built in mechanisms to address these issues, along with user authentication and the implementation of access control.
(6) Cloud services: the cloud is an important part of an IoT deployment. Most of the sensor data is analyzed and stored in a centralized cloud. IoT middleware must run seamlessly on different types of clouds and enable users to leverage the cloud to get better insights from the data collected by the sensors.
(7) Context detection: the data collected from the sensors needs to be used to extract the context by applying various types of algorithms. The context can subsequently be used to provide sophisticated services to users.
There are many middleware solutions available for the Internet of Things, which address
one or more of the aforementioned issues. All of them support interoperability and
abstraction, which is the foremost requirement of middleware. Some examples are
Oracle’s Fusion Middleware, OpenIoT [21], MiddleWhere [22], and Hydra [23].
Middlewares can be classified as follows on the basis of their design [72]:
(1) Event based: here, all the components interact with each other through events. Each event has a type and some parameters. Events are generated by producers and received by consumers. This can be viewed as a publish/subscribe architecture, where entities can subscribe to some event types and get notified of those events.
(2) Service oriented: service oriented middlewares are based on Service Oriented Architectures (SOA), in which we have independent modules that provide services through accessible interfaces. A service oriented middleware views resources as service providers. It abstracts the underlying resources through a set of services that are used by applications. There is a service repository, where services are published by providers. Consumers can discover services from the repository and then bind with the provider to access the service. Service oriented middleware must have runtime support for the advertisement of services by providers and for the discovery and use of services by consumers. HYDRA [23] is a service oriented middleware. It incorporates many software components that handle the various tasks required for the development of intelligent applications. Hydra also provides semantic interoperability using semantic web technologies, and it supports dynamic reconfiguration and self-management.
(3) Database oriented: in this approach, the network of IoT devices is treated as a virtual relational database system. The database can then be queried by applications using a query language, through easy to use interfaces for extracting data. This approach has issues with scaling because of its centralized model.
(4) Semantic: semantic middleware focuses on the interoperation of different types of devices that communicate using different data formats. It incorporates devices with different data formats and ontologies and ties all of them together in a common framework, which is used for exchanging data between diverse types of devices. For a common semantic format, we need adapters: for each device, an adapter maps its standards to one abstract standard [73]. In such a semantic middleware [74], a semantic layer is introduced, in which each resource is mapped to a software layer for that resource. The software layers then communicate with each other using a mutually intelligible language (based on the semantic web). This technique allows multiple physical resources to communicate even though they do not implement or understand the same protocols.
(5) Application specific: this type of middleware is developed specifically for one application domain, and its whole architecture is fine-tuned to the requirements of that application. The application and middleware are tightly coupled. These are not general purpose solutions.
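The service oriented pattern above (publish to a repository, discover, bind, invoke) can be sketched minimally. The repository class and the "temperature" service below are hypothetical:

```python
# Minimal sketch of the service oriented middleware pattern: providers
# publish services in a repository; consumers discover a service by name
# and bind to its provider. Names and services here are hypothetical.
class ServiceRepository:
    def __init__(self):
        self.services = {}                 # service name -> provider callable

    def publish(self, name, provider):
        self.services[name] = provider     # a provider advertises a service

    def discover(self, name):
        return self.services.get(name)     # a consumer looks the service up

repo = ServiceRepository()
repo.publish("temperature", lambda: 21.5)  # e.g., a sensor node's service

service = repo.discover("temperature")     # discover, then bind and invoke
print(service())  # 21.5
```

The point of the pattern is that the consumer never names the provider directly; it depends only on the service interface, which is what lets the middleware swap or relocate the underlying resource.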
8.1. Popular IoT Middleware
8.1.1. FiWare
FiWare is a very popular IoT middleware framework that is promoted by the EU. It has
been designed keeping smart cities, logistics, and shop floor analytics in mind. FiWare
contains a large body of code, reusable modules, and APIs that have been contributed by
thousands of FiWare developers. An application developer can take a subset of these components and build an IoT application.
A typical IoT application has many producers of data (sensors), a set of servers to process
the data, and a set of actuators. FiWare refers to the information collected by sensors
as context information. It defines generic REST APIs to capture the context from
different scenarios. All the context information is sent to a dedicated service called a
context broker. FiWare provides APIs to store the context and also query it. Moreover,
any application can register itself as a context consumer, and it can request the context
broker for information. It also supports the publish-subscribe paradigm. Subsequently,
the context can be supplied to systems using context adapters whose main role is to
transform the data (the context) based on the requirements of the destination nodes.
Moreover, FiWare defines a set of SNMP APIs via which we can control the behavior of
IoT devices and also configure them.
The target applications are provided APIs to analyze, query, and mine the information
that is collected from the context broker. Additionally, with advanced visualization APIs,
it is possible to create and deploy feature rich applications very quickly.
8.1.2. OpenIoT
OpenIoT is another popular open source initiative. It has 7 different components. At the
lowest level, we have a physical plane. It collects data from IoT devices and also does
some preprocessing of data. It has different APIs to interface with different types of
physical nodes and get information from them.
The next plane is the virtualized plane, which has 3 components. We first have the
scheduler, which manages the streams of data generated by devices. It primarily assigns
them to resources and takes care of their QoS requirements. The data storage component
manages the storage and archival of data streams. Finally, the service delivery component
processes the streams. It has several roles. It combines data streams, preprocesses them,
and tracks some statistics associated with these streams such as the number of unique
requests or the size of each request.
The uppermost layer, that is, the application layer, also has 3 components: request
definition, request presentation, and configuration. The request definition component
helps us create requests to be sent to the IoT sensors and storage layers. It can be used to
fetch and query data. The request presentation component creates mashups of data by
issuing different queries to the storage layer, and finally the configuration component
helps us configure the IoT devices.
9. Applications of IoT
There is a diverse set of areas in which intelligent applications have been developed. Not all of these applications are readily available yet; however, preliminary research indicates the potential of IoT in improving the quality of life in our society. Some uses of
IoT applications are in home automation, fitness tracking, health monitoring,
environment protection, smart cities, and industrial settings.
9.1. Home Automation
Smart homes are becoming more popular today because of two reasons. First, the sensor
and actuation technologies along with wireless sensor networks have significantly
matured. Second, people today trust technology to address their concerns about their
quality of life and security of their homes (see Figure 8).
Figure 8: Block diagram of a smart home system.
In smart homes, various sensors are deployed, which provide intelligent and automated
services to the user. They help in automating daily tasks and help in maintaining a routine
for individuals who tend to be forgetful. They help in energy conservation by turning off
lights and electronic gadgets automatically. We typically use motion sensors for this
purpose. Motion sensors can additionally be used for security.
For example, the MavHome project [75] provides an intelligent agent that uses various prediction algorithms to perform automated tasks in response to user triggered events and adapts itself to the routines of the inhabitants. Prediction algorithms are used
to predict the sequence of events [76] in a home. A sequence matching algorithm
maintains sequences of events in a queue and also stores their frequency. Then a
prediction is made using the match length and frequency. Other algorithms used by
similar applications use compression based prediction and Markov models.
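The sequence matching idea can be sketched as follows. The event log, the order-2 history, and the function names are illustrative only, not MavHome's actual algorithm:

```python
from collections import Counter, defaultdict

# Toy sequence-matching predictor for a smart home: count how often each
# event follows a given recent history, then predict the most frequent
# successor. The event log below is made up for illustration.
def train(events, order=2):
    model = defaultdict(Counter)
    for i in range(len(events) - order):
        history = tuple(events[i:i + order])
        model[history][events[i + order]] += 1   # successor frequency
    return model

def predict(model, history):
    counts = model.get(tuple(history))
    return counts.most_common(1)[0][0] if counts else None

log = ["wake", "kitchen_light", "coffee", "wake", "kitchen_light", "coffee",
       "wake", "kitchen_light", "tv"]
model = train(log)
print(predict(model, ["wake", "kitchen_light"]))  # 'coffee' (seen 2x vs 1x)
```

Markov model approaches generalize this by weighting successors probabilistically instead of taking a simple majority.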
Energy conservation in smart homes [77] is typically achieved through sensors and
context awareness. The sensors collect data from the environment (light, temperature,
humidity, gas, and fire events). This data from heterogeneous sensors is fed to a context
aggregator, which forwards the collected data to the context aware service engine. This
engine selects services based on the context. For example, an application can
automatically turn on the AC when the humidity rises. Or, when there is a gas leak, it can
turn all the lights off.
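A context-aware service engine of this kind can be sketched as a small rule table; the thresholds and service names below are illustrative assumptions, not taken from [77]:

```python
def context_service_engine(readings):
    """Map aggregated sensor context to services; a minimal rule-based
    sketch (thresholds are illustrative, not from the cited system)."""
    actions = []
    if readings.get("fire", False):
        actions.append("sound_alarm")
    if readings.get("gas_leak", False):
        actions.append("lights_off")          # avoid ignition sources
    if readings.get("humidity", 0) > 70:
        actions.append("ac_on")
    if readings.get("light", 100) < 10 and readings.get("motion", False):
        actions.append("lights_on")
    return actions

print(context_service_engine({"humidity": 82, "gas_leak": True}))
# -> ['lights_off', 'ac_on']
```

A real engine would be driven by a configurable rule base rather than hard-coded conditions, but the structure — heterogeneous readings in, selected services out — is the same.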
Smart home applications are particularly beneficial for the elderly and the differently
abled. Their health is monitored, and relatives are informed immediately in case of
emergencies. Floors can be equipped with pressure sensors that track the movement of an
individual across the smart home and also help detect whether a person has fallen down.
In smart homes, CCTV cameras can record events of interest, which can then be used
for feature extraction and activity recognition.
Specifically, fall detection applications in smart environments [78–80] are useful for
detecting if elderly people have fallen down. Yu et al. [80] use computer vision based
techniques for analyzing postures of the human body. Sixsmith et al. [79] used low cost
infrared sensor array technology, which can provide information such as the location,
size, and velocity of a target object. It detects dynamics of a fall by analyzing the motion
patterns and also detects inactivity and compares it with activity in the past. Neural
networks are employed and sample data is provided to the system for various types of
falls. Many smartphone based applications are also available, which detect a fall on the
basis of readings from the accelerometer and gyroscope data.
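A minimal accelerometer-based fall detector along these lines (an impact spike followed by inactivity) might look like this; the thresholds in g are illustrative assumptions:

```python
import math

def detect_fall(samples, impact_g=2.5, still_g=0.3):
    """Flag a fall when a large acceleration spike (impact) is followed
    by near-stillness (magnitude close to 1 g, i.e. gravity only)."""
    mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m > impact_g:
            after = mags[i + 1:]
            # inactivity after the impact suggests the person is down
            if after and all(abs(a - 1.0) < still_g for a in after):
                return True
    return False

walking = [(0.1, 0.2, 1.0), (0.0, 0.3, 1.1), (0.2, 0.1, 0.9)]
fall = [(0.1, 0.2, 1.0), (2.0, 1.5, 2.2), (0.0, 0.1, 1.0), (0.05, 0.0, 0.95)]
print(detect_fall(walking), detect_fall(fall))  # -> False True
```

Production systems combine this with gyroscope orientation and learned models to cut down false alarms from dropped phones or vigorous movement.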
There are many challenges and issues with regard to smart home applications [81]. The
most important is security and privacy [82] since all the data about the events taking
place in the home is being recorded. If the security and trustworthiness of the system are
not guaranteed, an intruder may attack the system and make it behave
maliciously. Smart home systems are supposed to notify the owners in case they detect
such abnormalities. This is possible using AI and machine learning algorithms, and
researchers have already started working in this direction [83]. Reliability is also an issue
since there is no system administrator to monitor the system.
9.2. Smart Cities
Smart transport applications can manage daily traffic in cities using sensors and
intelligent information processing systems. The main aim of intelligent transport systems
is to minimize traffic congestion, ensure easy and hassle-free parking, and avoid
accidents by properly routing traffic and spotting drunk drivers. The sensor technologies
governing these types of applications are GPS sensors for location, accelerometers for
speed, gyroscopes for direction, RFIDs for vehicle identification, infrared sensors for
counting passengers and vehicles, and cameras for recording vehicle movement and
traffic. There are many types of applications in this area (refer to [84]):

(1) Traffic surveillance and management applications: vehicles are connected by a network to each other, to the cloud, and to a host of IoT devices such as GPS sensors, RFID devices, and cameras. These devices can estimate traffic conditions in different parts of the city, and custom applications can analyze traffic patterns so that future conditions can be estimated. Yu et al. [85] implement a vehicle tracking system for traffic surveillance using video sequences captured on the roads. Traffic congestion detection can also be implemented using smartphone sensors such as accelerometers [86] and GPS sensors; such applications detect the movement patterns of the vehicle while the user is driving. This kind of information is already collected by Google Maps, and users rely on it to route around potentially congested areas of the city.

(2) Applications to ensure safety: smart transport does not only imply managing traffic conditions. It also includes the safety of people travelling in their vehicles, which until now was mainly in the hands of drivers. Many IoT applications have been developed to help people drive more safely: they monitor driving behavior, detect when drivers are drowsy or tired, and help them cope or suggest rest [87, 88]. Technologies used in such applications are face detection, eye-movement detection, and pressure detection on the steering wheel (to measure the grip of the driver's hands). A smartphone application that estimates driving behavior using smartphone sensors such as the accelerometer, gyroscope, GPS, and camera has been proposed by Eren et al. [89]; it can decide whether the driving is safe or rash by analyzing the sensor data.

(3) Intelligent parking management (see Figure 9): in a smart transportation system, parking is completely hassle free because one can easily check on the Internet to find out which parking lot has free spaces. Such lots use sensors to detect whether slots are free or occupied by vehicles, and this data is uploaded to a central server.

(4) Smart traffic lights: traffic lights equipped with sensing, processing, and communication capabilities are called smart traffic lights. They sense the congestion at the intersection and the amount of traffic going each way; this information can be analyzed and then sent to neighboring traffic lights or a central controller. The information can also be used creatively: in an emergency, the lights can preferentially give way to an ambulance. When a smart traffic light senses an ambulance coming, it clears the path for it and informs neighboring lights. Technologies used in these lights are cameras, communication technologies, and data-analysis modules. Such systems have already been deployed in Rio de Janeiro.

(5) Accident detection applications: a smartphone application designed by White et al. [90] detects the occurrence of an accident with the help of accelerometer and acoustic data. It immediately sends this information, along with the location, to the nearest hospital. Additional situational information, such as on-site photographs, is also sent so that first responders know the whole scenario and the degree of medical help required.
Figure 9: Block diagram of a smart parking system.
Barcelona and Stockholm stand out in the list of smart cities. Barcelona has
a CityOS project, where it aims to create a single virtualized OS for all the smart devices
and services offered within the city. Barcelona has mainly focused on smart
transportation (as discussed above) and smart water. Smart transportation is
implemented using a network of sensors, centralized analysis, and smart traffic lights. On
similar lines Barcelona has sensors on most of its storm drains, water storage tanks, and
water supply lines. This information is integrated with weather and usage information.
The result of all of this is a centralized water planning strategy. The city is able to
estimate the water requirements in terms of domestic usage and industrial usage and for
activities such as landscaping and gardening.
Stockholm's efforts date back to 1994, when its first step in this direction was to install an
extensive fiber-optic network. Subsequently, the city added thousands of sensors for smart traffic and smart
water management applications. Stockholm was one of the first cities to
implement congestion charging. Users were charged money, when they drove into congested
areas. This was enabled by smart traffic technologies. Since the city has a solid network
backbone, it is very easy to deploy sensors and applications. For example, recently the city
created a smart parking system, where it is possible to easily locate parking spots nearby.
Parking lots have sensors, which let a server know about their usage. Once a driver queries the
server with her/his GPS location, she/he is guided to the nearest parking lot with free slots.
Similar innovations have taken place in the city’s smart buildings, snow clearance, and political
announcement systems.
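The parking query described for Stockholm can be sketched as a nearest-free-lot lookup over server-reported occupancy; the coordinates and lot data below are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
    return 2 * 6371 * math.asin(math.sqrt(a))

def nearest_free_lot(user_lat, user_lon, lots):
    """Return the closest lot that still reports free slots, or None."""
    free = [l for l in lots if l["free_slots"] > 0]
    return min(free,
               key=lambda l: haversine_km(user_lat, user_lon,
                                          l["lat"], l["lon"]),
               default=None)

lots = [
    {"name": "Central", "lat": 59.332, "lon": 18.064, "free_slots": 0},
    {"name": "Harbour", "lat": 59.320, "lon": 18.072, "free_slots": 4},
]
print(nearest_free_lot(59.330, 18.060, lots)["name"])  # -> Harbour
```

The server side simply keeps `free_slots` updated from the per-slot sensors; the driver's query then reduces to this distance-minimizing filter.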
9.3. Social Life and Entertainment
Social life and entertainment play an important role in an individual’s life. Many applications
have been developed, which keep track of such human activities. The term “opportunistic IoT”
[92] refers to information sharing among opportunistic devices (devices that seek to make
contact with other devices) based on movement and availability of contacts in the vicinity.
Personal devices such as tablets, wearables, and mobile phones have sensing and short range
communication capabilities. People can find and interact with each other when there is a
common purpose.
Circle Sense [93] is an application, which detects social activities of a person with the help of
various types of sensor data. It identifies the social circle of a person by analyzing the patterns of
social activities and the people present in those activities. Various types of social activities and
the set of people participating in those activities are identified. It uses location sensors to find out
where the person is and uses Bluetooth for searching people around her. The system has built in
machine learning algorithms, and it gradually improves its behavior with learning.
Affective computing [94] is a technology that recognizes, understands, simulates, and
responds to the emotions of human beings. Many parameters are considered
while dealing with human affects such as facial expressions, speech, body gestures, hand
movements, and sleep patterns. These are analyzed to figure out how a person is feeling. The
utterance of emotional keywords is identified by voice recognition and the quality of voice by
looking at acoustic features of speech.
One of the applications of affective computing is Camy, an artificial pet dog [95], which is
designed to interact with human beings and show affection and emotions. Many sensors and
actuators are embedded in it. It provides emotional support to the owner, encourages playful and
active behavior, recognizes its owner, and shows love for her and increases the owner’s
communication with other people. Based on the owner’s mood, Camy interacts with the owner
and gives her suggestions.
Logmusic [96] is an entertainment application, which recommends music on the basis of the
context, such as the weather, temperature, time, and location.
9.4. Health and Fitness
IoT appliances have proven highly beneficial in the health and wellness domains. Many wearable
devices are being developed, which monitor a person’s health condition (see Figure 10).
Figure 10: Block diagram of a smart healthcare system.
Health applications make independent living possible for the elderly and patients with serious
health conditions. Currently, IoT sensors are being used to continuously monitor and record their
health conditions and transmit warnings in case any abnormal indicators are found. If there is a
minor problem, the IoT application itself may suggest a prescription to the patient.
IoT applications can be used to create an Electronic Health Record (EHR), a record of
all the medical details of a person maintained by the health system. An EHR can record
allergies and surges in blood sugar and blood pressure.
Stress recognition applications are also fairly popular [97]. They can be realized using
smartphone sensors. Wang et al. describe an application [30], which measures the stress level of
a college student and is installed on the student’s smartphone. It senses the locations the student
visits during the whole day, the amount of physical activity, amount of sleep and rest, and her/his
interaction and relationships with other people (audio data and calls). In addition, it also conducts
surveys with the student by randomly popping up a question in the smartphone. Using all of this
data and analyzing it intelligently, the level of stress and academic performance can be
measured.
In the fitness sector, we have applications that monitor how fit we are based on our daily activity
level. Smartphone accelerometer data can be used for activity detection by applying complex
algorithms. For example, we can measure the number of steps taken and the amount of exercise
done by using fitness trackers. Fitness trackers are available in the market as wearables to
monitor the fitness level of an individual. In addition, gym apparatus can be fitted with sensors to
count the number of times an exercise is performed. For example, a smart mat [98] can count the
number of exercise steps performed on it. This is implemented using pressure sensors on the mat
and then by analyzing the patterns of pressure, and the shape of the contact area.
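Step counting from accelerometer magnitudes can be sketched as simple peak detection; the threshold and the sample trace below are illustrative:

```python
def count_steps(magnitudes, threshold=1.2):
    """Count steps as local peaks in acceleration magnitude (in g)
    that rise above a threshold; a simplified peak-detection sketch."""
    steps = 0
    for i in range(1, len(magnitudes) - 1):
        prev, cur, nxt = magnitudes[i-1:i+2]
        if cur > threshold and cur > prev and cur >= nxt:
            steps += 1
    return steps

# each value is the total acceleration magnitude of one sample
trace = [1.0, 1.4, 1.0, 0.9, 1.5, 1.1, 1.0, 1.3, 1.0]
print(count_steps(trace))  # -> 3
```

Commercial trackers add band-pass filtering and cadence checks on top of this basic idea to reject non-walking motion.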
9.5. Smart Environment and Agriculture
Environmental parameters such as temperature and humidity are important for agricultural
production. Sensors are used by farmers in the field to measure such parameters and this data can
be used for efficient production. One application is automated irrigation according to weather
conditions.
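A weather-aware irrigation rule of this kind might be sketched as follows; all thresholds and the watering formula are illustrative assumptions:

```python
def irrigation_minutes(soil_moisture, rain_forecast_mm, temp_c):
    """Decide watering time from sensor readings and a weather
    forecast; thresholds and scaling are illustrative only."""
    if rain_forecast_mm > 5:           # expected rain will do the job
        return 0
    if soil_moisture >= 40:            # soil moisture (%) already adequate
        return 0
    minutes = (40 - soil_moisture) * 0.5
    if temp_c > 30:                    # hot days evaporate more water
        minutes *= 1.5
    return round(minutes)

print(irrigation_minutes(soil_moisture=22, rain_forecast_mm=0, temp_c=33))
# -> 14
```

The same skeleton generalizes: sensors feed a decision rule, and the actuator (here, a valve timer) receives the result.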
Production using greenhouses [99] is one of the main applications of IoT in agriculture.
Environmental parameters measured in terms of temperature, soil information, and humidity are
measured in real time and sent to a server for analysis. The results are then used to improve crop
quality and yield.
Pesticide residues in crop production are detected using an Acetylcholinesterase biosensor [100].
This data is saved and analyzed for extracting useful information such as the sample size, time,
location, and amount of residues. We can thus maintain the quality of the crop. Moreover, a QR
code can be used to uniquely identify a carton of farm produce. Consumers can scan the QR code
and check the amount of pesticides in it (via a centralized database) online before buying.
Air pollution is an important concern today because it is changing the climate of the earth and
degrading air quality. Vehicles cause a lot of air pollution. An IoT application proposed by
Manna et al. [39] monitors air pollution on the roads. It also tracks vehicles that cause an undue
amount of pollution. Electrochemical toxic gas sensors can also be used to measure air pollution.
Vehicles are identified by RFID tags. RFID readers are placed on both sides of the road along
with the gas sensors. With this approach it is possible to identify and take action against
polluting vehicles.
9.6. Supply Chain and Logistics
IoT tries to simplify real world processes in business and information systems [101]. The goods
in the supply chain can be tracked easily from the place of manufacture to the final places of
distribution using sensor technologies such as RFID and NFC. Real time information is recorded
and analyzed for tracking. Information about the quality and usability of the product can also be
saved in RFID tags attached with the shipments.
Bo and Guangwen [102] explain an information transmission system for supply chain
management, which is based on the Internet of Things. RFID tags uniquely identify a product
automatically and a product information network is created to transmit this information in real
time along with location information. This system helps in automatic collection and analysis of
all the information related to supply chain management, which may help examine past demand
and come up with a forecast of future demand. Supply chain components can get access to real
time data and all of this information can be analyzed to get useful insights. This will in the long
run improve the performance of supply chain systems.
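The demand-forecasting idea can be illustrated with the simplest possible model, a moving average over past RFID-tracked shipment counts (real supply-chain systems use far richer models):

```python
def forecast_demand(history, window=3):
    """Forecast next-period demand as a simple moving average of the
    last `window` periods; a deliberately basic sketch."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# hypothetical monthly shipment counts collected from RFID scans
monthly_units = [120, 130, 125, 140, 150]
print(forecast_demand(monthly_units))
```

Because the supply chain records every movement in real time, the `history` series is always current, which is the main advantage over periodic manual stock-taking.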
9.7. Energy Conservation
The smart grid is a modern electricity generation, transmission, distribution, and
consumption system enabled by information and communication technology [103].
To make electric power generation, transmission, and distribution smart, the concept of smart
grids adds intelligence at each step and also allows the two-way flow of power (back from the
consumer to the supplier). This can save a lot of energy and help consumers better understand
the flow of power and dynamic pricing. In a smart grid, power generation is distributed. There
are sensors deployed throughout the system to monitor everything. It is actually a distributed
network of microgrids [104]. Microgrids generate power to meet demands of local sites and
transmit back the surplus energy to the central grid. Microgrids can also demand energy from the
central grid in case of a shortfall.
Two-way flow of power also benefits consumers, who are also using their own generated energy
occasionally (say, solar, or wind power); the surplus power can be transmitted back so that it is
not wasted. The user will also get paid for that power.
Some of the IoT applications in a smart grid are online monitoring of transmission lines for
disaster prevention and efficient use of power in smart homes by having a smart meter for
monitoring energy consumption [105].
Smart meters read and analyze consumption patterns of power at regular and peak load times.
This information is then sent to the server and also made available to the user. The generation is
then set according to the consumption patterns. In addition, the user can adjust her/his use so as
to reduce costs. Smart power appliances can leverage this information and operate when the
prices are low.
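A price-aware smart appliance can be sketched as picking the cheapest contiguous window in a day-ahead tariff published by the smart meter; the tariff values are hypothetical:

```python
def cheapest_start_hour(prices, run_hours):
    """Pick the start hour minimizing total cost for a cycle lasting
    `run_hours`, given per-hour tariffs; returns (hour, cost)."""
    best_hour, best_cost = None, float("inf")
    for start in range(len(prices) - run_hours + 1):
        cost = sum(prices[start:start + run_hours])
        if cost < best_cost:
            best_hour, best_cost = start, cost
    return best_hour, best_cost

# hypothetical day-ahead tariff (cents/kWh) for hours 0..7
tariff = [30, 28, 12, 10, 11, 25, 33, 35]
print(cheapest_start_hour(tariff, run_hours=2))  # -> (3, 21)
```

A washing machine or water heater using this rule automatically shifts its load into the off-peak trough, which is exactly the demand-shaping behavior the smart grid aims for.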
Now that we have profiled most of the IoT technologies, let us look at some of the
considerations that go into designing a practical IoT network.
The first consideration is the design of the sensors. Even though there might not be much of a
choice regarding the sensors, there is definitely a lot of choice regarding the processing and
networking capabilities that are bundled along with the sensors. Our choices range from small
sub-mW boards meant for sensor motes to Arduino or Atom boards that consume 300–500 mW
of power. This choice depends on the degree of analytics and data preprocessing that we want to
perform at the sensor itself. Second, there is also an issue of logistics: creating a sub-mW
board requires board-design expertise, which might not be readily available. Hence, it might
be advisable to bundle a sensor with commercially available embedded processor kits.
The next important consideration is communication. In IoT nodes, power is the most dominant
issue. The power required to transmit and receive messages is a major fraction of the overall
power, and as a result a choice of the networking technology is vital. The important factors that
we need to consider are the distance between the sender and the receiver, the nature of obstacles,
signal distortion, ambient noise, and governmental regulations. Based on these key factors, we
need to choose a given wireless networking protocol. For example, if we just need to
communicate inside a small building, we can use Zigbee, whereas if we need communication in
a smart city, we should choose Sigfox or LoRaWAN. In addition, there are often significant
constraints on the frequency and the power that can be spent in transmission. These limitations
are mainly imposed by government agencies. An apt decision needs to be made by taking all of
these factors into account.
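The selection logic described above can be sketched as a coarse heuristic; the range and data-rate cutoffs below are rough, illustrative figures rather than exact specifications:

```python
def pick_radio(range_m, data_rate_kbps, battery_years):
    """A coarse heuristic mapping requirements to a radio technology;
    the numeric cutoffs are rough illustrations, not spec limits."""
    if range_m <= 100 and data_rate_kbps <= 250:
        return "Zigbee"                  # short-range mesh, low power
    if range_m <= 100:
        return "Wi-Fi"                   # short range, high throughput
    if battery_years >= 5 and data_rate_kbps < 1:
        return "Sigfox"                  # tiny payloads, city scale
    if battery_years >= 5:
        return "LoRaWAN"                 # long range, low power
    return "Cellular (LTE-M/NB-IoT)"     # wide area, larger power budget

print(pick_radio(range_m=50, data_rate_kbps=20, battery_years=2))    # -> Zigbee
print(pick_radio(range_m=5000, data_rate_kbps=0.1, battery_years=8)) # -> Sigfox
```

A real selection would also weigh obstacles, ambient noise, and the regional regulatory limits on frequency and transmit power mentioned above.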
Let us then come to the middleware. The first choice that needs to be made is to choose between
an open source middleware such as FiWare or a proprietary solution. There are pros and cons of
both. It is true that open source middleware is in theory more flexible; however, they may have
limited support for IoT devices. We ideally want a middleware solution to interoperate with all
kinds of communication protocols and devices; however, that might not be the case. Hence, if we
need strict compatibility with certain devices and protocols, a proprietary solution is better.
Nevertheless, open source offerings have cost advantages and are sometimes easier to deploy.
We also need to choose the communication protocol and ensure that it is compatible with the
firewalls in the organizations involved. In general, a protocol based on HTTP is the best
choice from this point of view. We also need to choose between TCP and UDP; UDP is generally
better in terms of power consumption because it avoids connection setup and acknowledgement
overhead. Along with these considerations, we also need to
look at options to store sensor data streams, querying languages, and support for generating
dynamic alerts.
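The power argument for UDP can be seen directly in code: a reading goes out as a single fire-and-forget datagram, with no connection handshake or acknowledgements. A minimal sketch (the host and port are placeholders):

```python
import json
import socket

def send_reading_udp(reading, host="127.0.0.1", port=9999):
    """Send one sensor reading as a single UDP datagram; returns the
    payload size. No handshake or ACKs, hence the lower energy cost,
    but also no delivery guarantee."""
    payload = json.dumps(reading).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return len(payload)

sent = send_reading_udp({"sensor": "temp-01", "celsius": 21.4})
print(sent, "bytes sent")
```

The trade-off is reliability: a TCP-based protocol (or an application-level retry scheme) is needed when every reading must arrive.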
Finally, let us consider the application layer. Most IoT frameworks provide a significant amount of
support for creating the application layer. This includes data mining, data processing, and
visualization APIs. Creating mashups and dashboards of data is nowadays very easy to do given
the extensive support provided by IoT frameworks. Nevertheless, here the tradeoff is between
the features provided and the resources that are required. We do not need a very heavy
framework if we do not desire a lot of features. This decision needs to be made by the
application developers.
Smart manufacturing and the IoT is driving the next industrial revolution
According to Markets and Markets Research, the smart factory market is projected to reach
205.42 billion USD by 2022, growing at a CAGR of 9.3% between 2017 and 2022.
In this fiercely competitive market, IoT-enabled smart manufacturing provides full visibility of
assets, processes, resources, and products. This, in turn, supports streamlined business
operations, optimized productivity and improved ROI. The key to success is connecting
equipment, integrating diverse industrial data, and securing industrial systems for the entire
lifespan of equipment.
For two decades, Gemalto has been a trusted partner, helping customers Connect, Secure and
Monetize their enterprise operations with IoT technology. In this web dossier, we'd like to share
some of the best practices we've gathered to help companies interested in making the leap to
"Industry 4.0."
Smart manufacturing allows factory managers to automatically collect and analyze data to make
better-informed decisions and optimize production. The data from sensors and machines is
communicated to the Cloud by IoT connectivity solutions deployed in the factory. That data is
analyzed and combined with contextual information and then shared with authorized
stakeholders. IoT technology, leveraging both wired and wireless connectivity, enables this flow
of data, providing the ability to remotely monitor and manage processes and change production
plans quickly, in real time when needed. It greatly improves manufacturing outcomes by
reducing waste, speeding production, and improving the yield and quality of goods produced.
Replacing the hierarchical structure that has historically defined the "shop floor" with an open,
flatter, fully-interconnected model that links R&D processes with supply chain management has
many benefits, including the optimization of global manufacturing processes related to
performance, quality, cost, and resource management. It also enables the manufactured products
themselves to play a key role in development and design of the manufacturing process. This is
because connected smart products are able to feed information back to the factory so that quality
issues can be detected and fixed during the manufacturing stage by adjusting product design
and/or the manufacturing processes. Smart products can also provide insights on how they are
actually used by consumers, providing the opportunity to adapt features to better meet the real
needs of the marketplace.
The manufacturing sector is being fundamentally reshaped by the unstoppable progress of the
4th Industrial Revolution, powered by the IoT. The changes to this segment are made possible by
technological breakthroughs that are occurring at an unprecedented pace. Just as the steam
engine ushered in massive changes in the late 18th century and the advent of the digital age
rocked the world in the second half of the 20th century, today's technological innovations are
forcing decision makers to reimagine how products are designed and produced. In addition to the
IoT, consider how Artificial Intelligence (AI), machine learning, and Virtual Reality (VR) will
impact manufacturing.
This IoT revolution is expected to profoundly increase productivity and value. This is why the
world's largest manufacturing economies (China, the US, and Europe) have launched dedicated initiatives to
bolster their own manufacturing sector. In essence, these manufacturing leaders are engaged in a
global battle for smart manufacturing competitiveness.
The expectation is that all types of manufacturing have something to gain from the 4th industrial
revolution and the IoT. For instance, discrete manufacturing is the production of distinct items
that can be individually touched and counted and are typically associated with assembly lines.
This includes items such as cars, furniture, and airplanes, that are increasingly connected. Smart
processes will play a prominent role in balancing supply and demand, improving product design,
optimizing manufacturing efficiency and greatly reducing waste. Similarly process
manufacturing where goods are produced in bulk using carefully crafted recipes, gains from the
IoT revolution in terms of improved plant monitoring, a streamlined supply chain, and quality
improvements in track and trace and distribution processes.
Today, the manufacturing sector is among the leading targets of infrastructure cybercrime.
Security challenges have also slowed the pace of adoption of new IoT technologies,
organizational changes, and business models that could immensely improve processes, enhance
competitiveness and bring new services to customers. Unfortunately, enterprises that do not keep
pace will find it more difficult to compete with their more forward-thinking counterparts who are
tackling the challenge head-on.
What should industry players consider as they transform manufacturing to smart manufacturing?
To stay competitive, manufacturers need to partner with manufacturing automation vendors and
systems integrators that provide solutions to upgrade factories or build new systems from
scratch. Strong automation partners are adding security architecture to the value chain since they
know this is a major concern for manufacturers and a key to competitive advantage in the
marketplace.
Manufacturers should work with experienced integrators, developers and technology partners
who have already exhibited excellence and longevity
in connecting, securing and monetizing smart manufacturing systems. Experienced partners can
provide the direction needed to develop the best system to meet business needs.
For instance, manufacturing processes can be connected by hard wiring, WiFi, Bluetooth, RFID,
Low-Power Wide-Area Networks including LoRa and LTE-M, and even IoT terminals that
work out of the box and connect via flexible industrial interfaces. Each has different strengths
and ideal use cases and a strong partner with experience in connecting smart manufacturing
systems can help decide which solution is best for individual use cases.
They must also consider security and how to protect smart manufacturing systems from intrusion
or error. For instance, Gemalto Secure Elements and Hardware Security Modules (HSM) are
used to secure product manufacturing systems. SEs and HSMs allow manufacturers to generate
and distribute IDs and certificates for devices and they authenticate devices, users, and
applications that interact with devices. They also help secure communication and protect data at
rest.
Similarly, Trusted Key Manager (TKM) plays a key role in security. TKM manages credentials
for LoRa devices and networks as well as IoT devices not connected to a cellular network,
historically a challenge for manufacturers. The TKM solution allows manufacturers to decouple
these credentials from the production process, making the business scalable and preserving trust
between manufacturers and customers.
Another area for industry players to consider is successful software monetization. Licensing and
IP protection are important components in manufacturing industrial devices, which increasingly
include complex software, trade secrets, and pricing models based on usage and
variable feature sets.
UNIT V
Industry 4.0
Industry 4.0 (the ‘fourth industrial revolution’) refers to the current trend of improved
automation, machine-to-machine and human-to-machine communication, artificial intelligence,
continued technological improvements and digitalisation in manufacturing.
In a Smart Factory, machinery and equipment will have the ability to improve processes through
self-optimisation and autonomous decision-making. This is in stark contrast to running fixed
program operations, as is the case today.
A pilot facility, developed by The German Research Centre for Artificial Intelligence (DFKI) in
Kaiserslautern, Germany, is demonstrating how a “smart” factory can operate.
This pilot facility uses soap bottles to show how products and manufacturing machines can
communicate with one another. Empty soap bottles have RFID tags attached to them, and these
tags inform machines whether the bottles should be given a black or a white cap. A product that
is in the process of being manufactured carries a digital product memory with it from the
beginning and can communicate with its environment via radio signals. This product becomes a
cyber-physical system that enables the real world and the virtual world to merge.
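The tag-driven dispatch in this pilot can be sketched as follows; the tag fields and station names are illustrative assumptions, not DFKI's actual interface:

```python
def route_bottle(tag):
    """Read a bottle's RFID product memory and tell the capping
    machine which cap to fit; field names are hypothetical."""
    cap = tag.get("cap_color", "white")      # the order stored on the tag
    station = "capper_black" if cap == "black" else "capper_white"
    return {"bottle_id": tag["id"], "station": station}

# hypothetical tag contents read as bottles pass the reader
orders = [{"id": "B-001", "cap_color": "black"},
          {"id": "B-002", "cap_color": "white"}]
for tag in orders:
    print(route_bottle(tag))
```

The key inversion is that the product, not a central program, carries the instruction: the machine only reads the tag and reacts.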
Industry 4.0 – Why should you care about all this?
Within the manufacturing industry today, there are said to be two groups: the traditional, first
generation who may be struggling in the Australian market due to a lack of desire to invest in
technology, and the innovators, who are finding more success in a tough climate
because they are open to adopting new ways.
Annaliese Kloe, Managing Director of Headland Machinery explains that “Industry 4.0 is being
spoken about everywhere. In particular, it was widely reflected at EuroBLECH 2016. It will
widely change the approach to the way that manufacturers work, so if you aren’t looking into
this now then you’ll be left behind. It will revolutionise your business, so it is vital to get on
board.”
With an increasingly digital future ahead of us, this new era for manufacturing looks set to
transform businesses worldwide. It is imperative for manufacturers to consider new technologies
arising and explore how they can adapt their processes to comply with the expectations of the
modern world.
1 Introduction
This text provides you with basic information about Cloud Computing, a new and fast-growing
field. It is structured into seven chapters for better orientation and easy understanding.
The first chapter covers the very basics, such as the definition, key attributes, and history.
1.1 Definition
Cloud Computing was a buzzword of 2010, and many experts disagree on its exact definition. The
most widely used and agreed-upon definitions include the notion of web-based services that are
available on demand from an optimized and highly scalable service provider. Given this
disagreement, a definition is provided here to better explain the notion. The
cloud is IT as a service, delivered by IT resources that are independent of location. It is a style of
computing in which dynamically scalable and often virtualized resources are provided as a
service over the Internet where end‐users have no knowledge of, expertise in, or control over the
technology infrastructure (the cloud) that supports them. [1]
1.2 Attributes
Before some of the attributes are defined, the term cloud itself should be explained. A cloud has
long been used in IT, particularly in network diagrams, to represent a sort of black box where
the interfaces are well known but the internal routing and processing are not visible to the
network users. Key attributes of cloud computing:
Shared: Services share a pool of resources to build economies of scale and IT resources
are used with maximum efficiency. The underlying infrastructure, software or platforms
are shared among the consumers of the service (usually unknown to the consumers). This
enables unused resources to serve multiple needs for multiple consumers, all working at
the same time.
Metered by Use: Services are tracked with usage metrics to enable multiple payment models. The service provider has a usage-accounting model for measuring the use of the services, which can then be used to create different pricing plans and models. These may include pay-as-you-go plans, subscriptions, fixed plans, and even free plans. The implied payment plans are based on usage, not on the cost of the equipment: they reflect the amount of service consumed, which may be measured in hours, data transfers, or other use-based attributes delivered.
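The metered-by-use idea can be sketched in a few lines. This is a hypothetical illustration: the plan rates, the free-hours allowance, and the usage figures are made-up assumptions, not any real provider's price list.

```python
# Hypothetical usage-metered billing sketch: the bill is driven by what
# was consumed (hours, data transferred), not by the cost of equipment.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    compute_hours: float   # metered machine time
    gb_transferred: float  # metered data transfer

def monthly_charge(usage: UsageRecord,
                   rate_per_hour: float = 0.10,
                   rate_per_gb: float = 0.05,
                   free_hours: float = 0.0) -> float:
    """Pay-as-you-go: charge only for billable usage above any free tier."""
    billable_hours = max(0.0, usage.compute_hours - free_hours)
    return round(billable_hours * rate_per_hour
                 + usage.gb_transferred * rate_per_gb, 2)

bill = monthly_charge(UsageRecord(compute_hours=120, gb_transferred=40))
print(bill)  # 120 * 0.10 + 40 * 0.05 = 14.0
```

Swapping the rate parameters is enough to model a subscription or a free plan, which is why usage accounting is the foundation for all of the pricing models listed above.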
Uses Internet Technologies: The service is delivered using Internet identifiers, formats, and protocols, such as URLs, HTTP, IP, and representational state transfer (REST) Web-oriented architecture. Many examples of Web technology exist as the foundation for Internet-based services: Google's Gmail, Amazon.com's book buying, and eBay's auctions all exhibit the use of Internet and Web technologies and protocols. More detailed examples appear in chapter four, Integration. [2]
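To make the "Internet identifiers" point concrete, the sketch below composes and decomposes a REST-style URL with standard library tools. The endpoint and resource path are hypothetical, chosen only for illustration.

```python
# A REST-style URL names one resource; URLs, schemes, and query strings
# are the Internet identifiers cloud services are built on.
from urllib.parse import urlsplit, urlencode, urlunsplit

def build_resource_url(host: str, resource_path: str, **query) -> str:
    """Compose an HTTPS URL addressing a single resource, REST-style."""
    return urlunsplit(("https", host, resource_path, urlencode(query), ""))

url = build_resource_url("api.example.com", "/v1/mailboxes/42/messages",
                         limit="10")
print(url)  # https://api.example.com/v1/mailboxes/42/messages?limit=10

# The same identifier can be decomposed back into its parts:
parts = urlsplit(url)
print(parts.scheme, parts.netloc, parts.path)
```

An HTTP GET against such a URL is then all a browser-based client needs, which is what lets Gmail-style services run on nothing more than Web protocols.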
1.3 History
The history of Cloud Computing began, surprisingly, almost 50 years ago. The father of the idea is considered to be John McCarthy, a professor at MIT in the US, who in 1961 first presented the idea of sharing computing technology in the same way as, for example, electricity is shared. Many households and firms possess a variety of electrical appliances but no power plant of their own; one power plant serves many customers. In this analogy, the power plant is the service provider, the distribution network is the Internet, and the households and firms are the computers. [3]
Since that time, Cloud computing has evolved through a number of phases which include grid
and utility computing, application service provision (ASP), and Software as a Service (SaaS).
One of the first milestones was the arrival of Salesforce.com in 1999, which pioneered the
concept of delivering enterprise applications via a simple website. The next development was
Amazon Web Services in 2002, which provided a suite of cloud‐based services including
storage, computation and even human intelligence. Another big milestone came in 2009 as Google and others started to offer browser-based enterprise applications, through services such as Google Apps. [4]
2 Architecture
Basic information about the architecture is provided in this chapter, together with explanations of relevant terms such as virtualization, front end/back end, and middleware.
The Cloud Computing architecture can be divided into two sections, the front end and the back end, connected through a network, usually the Internet. The Front End includes the client's
computer and the application required to access the cloud computing system. Not all cloud
computing systems have the same user interface. Services like Web‐based e‐mail programs
leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique
applications that provide network access to clients.
The Back End of the system is represented by the various computers, servers, and data storage systems that create the "cloud" of computing services. In practice, a cloud computing system could include almost any program, from data processing to video games, and each application will have its own dedicated server.
A central server administers the system, monitoring traffic and client demands to ensure
everything runs smoothly. It follows a set of rules called protocols and uses a special kind of
software called Middleware. Middleware allows networked computers to communicate with
each other. [6]
Public Cloud (external cloud) is a model where services are available from a provider over the
Internet, such as applications and storage. There are free Public Cloud Services available, as well
as pay-per-usage or other monetized models. Private Cloud (Internal Cloud/Corporate Cloud) is a computing architecture providing hosted services to a limited number of people behind a company's protective firewall. It sometimes attracts criticism because firms still have to buy, build, and manage some resources, and thus do not benefit from the lower up-front capital costs and less hands-on management that are the core concept of Cloud Computing. [7]
Private/Public cloud
Source:
http://www.technologyevaluation.com/login.aspx?returnURL=http://www.technologyevaluation.
com%2fresearch%2farticles%2fi-want-my-private-cloud-21964%2f
There are three main categories in Cloud Computing: Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). All of them are described below in more detail.
4 Integration
Once the definition, categories, and components needed for the user's solution have been identified, the next challenge is to determine how to put them all together. This chapter provides information about Cloud Computing design and integrability, and gives some examples.
Technical design – in its simplest form, the end‐to‐end design will include the end‐user
device, user connectivity, Internet, cloud connectivity, and the cloud itself.
At a minimum, most organizations will have users who connect to the cloud service remotely
(from home or while travelling) and through the internal network. In addition to connectivity at
the network level, the interfaces at the application layer need to be compatible and it will be
necessary to ensure this connectivity is reliable and secure.
Devices – cloud services should be device agnostic. They should work with traditional desktops, mobile devices, and thin clients. Unfortunately, this is much easier said than done: regression testing on five or ten client platforms can be challenging. A good start is to bundle the sets of supported devices into separate services. With Microsoft Exchange 2007, for example, you have the option of supporting Windows platforms through HTTP (Outlook Web Access) and using RPC over HTTP; you can also support Windows Mobile, as well as Symbian, iPhone, and BlackBerry devices, using ActiveSync. That list of platforms is just the beginning. You would also want to take an inventory of existing systems to determine the actual operating platforms, which might range from Mac OS and Linux to Google Chrome, Android, Symbian, RIM BlackBerry, and iPhone.
Connectivity – in order to assess the connectivity demands, you need to identify all required connections. At a high level, the connections will include categories such as:
o Enterprise to cloud
o Remote to cloud
o Remote to enterprise
o Cloud to cloud
o Cloud to enterprise
Once you put these together into a high-level connectivity diagram, you can proceed to the next step of identifying and selecting connectivity options. Unless the systems are connected they cannot operate, at least not for any extended period of time. In the case of cloud computing, data and processing are both highly distributed, making reliable, efficient, and secure connectivity all the more critical.
Management – generally, for each component in the design we need to investigate how we will manage it. This includes all the end-user devices, the connectivity, the legacy infrastructure, and all the applications involved. The challenge of splitting management components is that you may have policies that need to be kept synchronized. Imagine, for example, that you have a minimum password length of 8 characters, which is increased to 10. If you have only two management servers and this is not a frequent type of occurrence, you can easily apply the change manually. However, if you are dealing with hundreds of management servers and you receive minor policy changes on a weekly basis, you can imagine how cumbersome and error-prone the task becomes.
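The password-length scenario above is easy to sketch. The server names and the dictionary-shaped policy store are invented for illustration; a real deployment would push changes through each server's management API.

```python
# Hypothetical sketch of synchronizing one policy change across a fleet
# of management servers, as in the minimum-password-length example.
def apply_policy_change(servers: dict, key: str, value) -> list:
    """Push a single policy setting to every server; report which
    servers were out of sync before the update."""
    out_of_sync = [name for name, policy in servers.items()
                   if policy.get(key) != value]
    for policy in servers.values():
        policy[key] = value   # with hundreds of servers, doing this
    return out_of_sync        # manually becomes error-prone

fleet = {"mgmt-01": {"min_password_length": 8},
         "mgmt-02": {"min_password_length": 8}}
stale = apply_policy_change(fleet, "min_password_length", 10)
print(stale)                  # ['mgmt-01', 'mgmt-02']
print(fleet["mgmt-02"])       # {'min_password_length': 10}
```

Automating the push is what keeps weekly minor policy changes tractable once the fleet grows beyond a handful of servers.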
Security – the impact of Cloud Computing on security is profound. There are some benefits and, unfortunately, some hurdles to overcome. One challenge in evaluating security is that it tends to touch all aspects of IT, and Cloud Computing's impact is similarly pervasive. Relevant security domains include:
Cryptography ‐ presents various methods for taking legible, readable data, and
transforming it into unreadable data for the purpose of secure transmission, and then
using a key to transform it back into readable data when it reaches its destination. [11]
Operations security – includes procedures for back‐ups and change control
management.
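The cryptography idea above (a key-based transform to unreadable data and back) can be shown with a toy example. This is strictly illustrative: real systems should use a vetted cipher such as AES, not this hash-derived XOR keystream.

```python
# Toy demonstration only: XOR data against a key-derived byte stream.
# Applying the same transform twice with the same key restores the data.
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless byte stream from the key (illustrative only)."""
    for counter in count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def transform(data: bytes, key: bytes) -> bytes:
    """Legible data in, unreadable data out; same call reverses it."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

secret = transform(b"meet at noon", b"shared-key")
print(secret != b"meet at noon")          # True: unreadable in transit
print(transform(secret, b"shared-key"))   # b'meet at noon'
```

The point mirrors the definition above: without the key the transmitted bytes are meaningless, and with it the destination recovers the readable data.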
4.2 Examples
The most commonly known public example of a cloud is Google Apps. This service provides a number of online applications, including a word processor, an application for creating and editing presentations, document storage and sharing, email functions that connect to MS Outlook or MS Exchange services, account and contact sharing, instant-messenger functions, and so on, all provided by Google. Other cloud examples include CloudX Technology Group, Yahoo, eBay, Facebook, Citrix XenApp, AJAX, etc.
Devices using CC
Chromebook – a mobile device running Google Chrome OS. The first two devices for sale were made by Samsung and Acer Inc. and were slated for release on June 15, 2011. [14] The Chromebook CR-48 was Google's prototype model. These machines boot up very quickly and offer basic tools for Internet communication, such as 3G/4G and Wi-Fi connectivity, a webcam and microphone, a mobile processor, and enough RAM for web browsing. They work online only, and a basic hard drive is optional.
Chromebook by Acer
Source: http://gearburn.com/2011/05/chromebook-awesome-if-it-wasn%E2%80%99t-from-
google/
5 Pros and Cons
Cloud Computing is no exception and has both pros and cons. Some of them are listed and described in more detail in this chapter.
5.1 Pros
Lower costs – the principle of sharing resources (hardware, software, infrastructure, etc.) also gives the customer the benefit of sharing their costs. The customer does not have to buy expensive hardware, such as powerful workstations, large server solutions, or software applications. The customer needs only an Internet connection and a basic PC with modest requirements; a simple laptop, netbook, or mobile phone is enough. The customer also pays only for actual usage, whether of services, hardware resources, infrastructure, or a combination of these.
Instant access anywhere – one of the most important benefits is the availability of the service anywhere. All that is needed to access the service is a computer connected to the Internet; there is no dependence on platform (PC, Mac, mobile phone, car, etc.).
Security – a much-discussed issue in Cloud Computing service provision, and one that can be placed under both pros and cons, as you will see shortly. The service is protected by authentication and authorization: users identify themselves with an ID (username) and password, or with more sophisticated methods such as a chip card, fingerprint, or face detection. Communication between the client and the provider's servers is secured, and the data centre is protected by firewalls and housed in secured buildings. Generally, there is a very low risk of harm caused by third-party attacks. On the other hand, a problem could be that the client keeps all the data outside their own computer, solely on the provider's servers. This means the client entrusts the data to the provider and has, in fact, no physical control over it.
Requirements – the technology the customer needs is very simple. All that matters is a terminal, such as a laptop, desktop, mobile phone, or netbook, with a web browser, an Internet connection, and usually an account created with the service provider.
5.2 Cons
Dependence on provider – if a company starts using a Cloud Computing service and replaces its previous information system or changes its IT structure, it becomes dependent on its service provider. Risks connected with such dependency include sudden changes of prices or contract conditions. The provider could go bankrupt and end its business activities; functions and applications might be changed against the customer's will; and if the provider suffers technical problems, all customers are out of service, which means they are without their data.
Reputation – Cloud Computing is a very new type of service. Not many companies have experience with this kind of service and application outsourcing, and many users are still worried about the security of data transmitted over the Internet.
Migration costs – in some cases there can be higher start-up costs. A company may have to invest in user training and in any modifications that allow the provider's service to communicate with the company's existing software; in some cases, switching to Cloud Computing could also lead to a change of business processes.
Fewer functions – solutions targeted at a wide range of companies often cannot provide company-specific functions and are therefore less flexible.
Dependence on Internet connection – all Cloud Computing applications can be used online only, so any connection failure could be fatal.
6 Operation
After reading this chapter you will understand terms such as administration, support, and monitoring.
Service strategy relates very closely to strategic impact. Service providers have only limited resources and usually face more requests for services and functionality than they can deliver within their budget. In order to maximize their impact, they must therefore prioritize these services, so the IT organization must determine the value of potential internal and external services.
Service design covers all elements relevant to the service delivery including service
catalogue management, service level management, capacity management, availability
management, IT service continuity management, information security management and
supplier management. A key aspect of this design is the definition of service levels in
terms of key performance indicators (KPIs). The key challenge is not to derive a number
of KPIs, but to select a few that are critical to the overall strategy.
Example of KPIs
Source: http://mkhairul.sembangprogramming.com/2008/04/24/key-performance-indicators-kpi-
for-software-development/
Service transition represents the intersection between project and service management. In a cloud-based solution this covers not only the initial implementation of cloud services but also any updates to them, launches of new services, and the retirement and migration of existing services.
Service operation is the core of the ITIL model. Its focus is on the day-to-day operations
that are required in order to deliver service to its users at the agreed levels of availability,
reliability and performance. It includes concepts such as event management, incident
management, problem management, access management, request fulfillment and service
desk.
6.2 Administration
Since Cloud Computing is primarily web-based, the logical interface for administering it is a portal. A portal can offer facilities such as billing, analytics, account management, service management, package installation, configuration, instance flexing, and the tracing of problems and incidents.
The line between a service request and more extensive change management is not always obvious and depends to a large extent on the organization involved. However, in all companies there are likely to be services that are too critical for automated change requests.
One major recurring change is the need to perform upgrades to increase functionality, solve problems, and sometimes improve performance. New versions can disrupt services because they may drop functions, implement them differently, or contain undiscovered bugs. It is therefore important to understand whether they will have any impact on business processes before rolling them out live. One approach is to stage all services locally and test them with on-premise equipment before overwriting the production services.
Long-term capacity management is less critical for on-demand services. Elasticity of resources means that enterprises can scale up and down as demand dictates without the need for extensive planning. It is also a good idea to verify that your service provider will actually be in a position to deliver all the resource requirements that you anticipate. Several aspects of capacity planning have to be evaluated in parallel.
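The scale-up/scale-down decision that elasticity enables can be sketched as a simple rule. The utilization thresholds and node limits below are illustrative assumptions, not any provider's defaults.

```python
# Minimal sketch of an elastic scaling decision: grow the pool when
# busy, shrink it when idle, and stay within the provider's limits.
def target_nodes(current: int, avg_utilization: float,
                 high: float = 0.75, low: float = 0.30,
                 min_nodes: int = 1, max_nodes: int = 100) -> int:
    """Return the desired node count for the next interval."""
    if avg_utilization > high:
        current += 1          # demand is high: add capacity
    elif avg_utilization < low:
        current -= 1          # demand dropped: release capacity
    return max(min_nodes, min(max_nodes, current))

print(target_nodes(4, 0.90))  # 5: scale up
print(target_nodes(4, 0.10))  # 3: scale down
print(target_nodes(1, 0.05))  # 1: never below the floor
```

The `max_nodes` cap is where verifying your provider's actual capacity matters: a rule like this silently stops scaling once the provider cannot deliver more resources.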
Managing identities and access control for enterprise applications remains one of the greatest
challenges facing IT today. While an enterprise may be able to leverage several Cloud
Computing services without a good identity and access management strategy, in the long run
extending an organization’s identity services into the cloud is a necessary precursor towards
strategic use of on-demand computing services. Supporting today’s aggressive adoption of an
admittedly immature cloud ecosystem requires an honest assessment of an organization’s
readiness to conduct cloud-based Identity and Access Management (IAM), as well as
understanding the capabilities of that organization’s Cloud Computing providers.
Identity and Access Management Model
Source: http://radio-
weblogs.com/0100367/stories/2002/05/11/enterpriseIdentityAndAccessManagement.html
We will discuss the following major IAM functions that are essential for successful and effective
management of identities in the cloud:
Identity provisioning/deprovisioning
Authentication
Federation
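The first two IAM functions listed above can be sketched in a few lines. The in-memory user store and the hashing parameters are illustrative assumptions, not a production IAM design, and federation (delegating authentication to another identity provider) is omitted for brevity.

```python
# Hedged sketch of identity provisioning/deprovisioning and
# authentication against a salted, hashed credential store.
import hashlib
import hmac
import os

_users: dict = {}   # stand-in for a real identity store

def provision(user: str, password: str) -> None:
    """Create an identity: store a salt and a derived key, never the password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[user] = (salt, key)

def deprovision(user: str) -> None:
    """Revoking access promptly is as important as granting it."""
    _users.pop(user, None)

def authenticate(user: str, password: str) -> bool:
    record = _users.get(user)
    if record is None:
        return False
    salt, stored = record
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(stored, attempt)   # constant-time comparison

provision("alice", "s3cret")
print(authenticate("alice", "s3cret"))   # True
print(authenticate("alice", "wrong"))    # False
deprovision("alice")
print(authenticate("alice", "s3cret"))   # False
```

Extending exactly this lifecycle (create, verify, revoke) into the cloud provider's own identity services is what the strategy discussion above is about.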
6.3 Monitoring
Part of the incentive for moving to a public cloud is to reduce the amount of internal operational activity. Much of the internal infrastructure, such as printers, scanners, and local equipment, remains local, and end-user desktops and mobile devices are also closer to on-site operations personnel.
One area that is of particular concern to business continuity is backup. Backups are required for a
variety of reasons including:
Problem management refers to tracking and resolving unknown causes of incidents. It is closely
related to Incident management but focuses on solving root causes for a set of incidents rather
than applying what may be a temporary fix to an incident.
6.4 Support
There is some diversity in the user roles that may require assistance in a cloud solution. There are two types: end-user and IT support. End-user support should progress in tiers that successively address more difficult and less common problems. It begins with simple documentation and online help to orient the user and clarify any obvious points of confusion; a self-service portal can then help trigger automatic processes to fulfill common requests.
In addition to end users, there is also a requirement for IT and business users to receive assistance from the service providers. There must be mechanisms in place for obtaining and sharing documentation and training on all cloud services and technologies; vendor architecture diagrams and specifications for all technical interfaces can help IT staff.
6.5 Control
Most of the legal provisions that relate to cloud computing fall into one of three categories:
Data privacy
Electronic discovery
Notification
There are also associated threats, such as data leakage, data loss, non-compliance, loss of service, and impairment of service.
7 Conclusion
From the text and information above, you should now have a basic understanding of what Cloud Computing is, along with its history, features, and architecture. To summarize, Cloud Computing is a very new and modern technology based on sharing resources (especially software, hardware, and infrastructure). It helps companies, and individuals too, save on the costs of IT resources. All data are stored off-site at the provider's premises, which brings both advantages and disadvantages, especially the problematic issues of security and data privacy. The most common cloud service you as a user may come across is Google Apps.
A File System in the Cloud vs. A Cloud File System
There are two types of cloud-based file systems. One is designed to extend cloud storage into the organization; the other is designed to allow organizations to run applications in the cloud while still using more traditional file protocols like NFS and SMB. Both have their roles, but organizations need to make sure they are picking the right tool for the job.
A cloud file system is a file system that creates a hub and spoke method of distributing data. The
“hub” is the central storage area, typically located at a public cloud provider like Amazon AWS,
Microsoft Azure or Google Cloud. The “spokes” are the organization’s local locations (data
centers, branch offices, remote offices). At each spoke a software or hardware appliance is
installed and it acts as a cache for that location’s most active data.
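The spoke appliance's job, caching each location's most active data, can be sketched with a least-recently-used cache. The hub here is simulated by a plain dictionary, and the file paths are invented for illustration; real products use far more sophisticated caching algorithms.

```python
# Illustrative hub-and-spoke sketch: the hub holds everything, and each
# spoke keeps only its most recently used files locally.
from collections import OrderedDict

HUB = {"/plans/q1.doc": b"q1 data", "/plans/q2.doc": b"q2 data",
       "/logo.png": b"image bytes"}   # stand-in for cloud object storage

class SpokeCache:
    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.local: OrderedDict = OrderedDict()

    def read(self, path: str) -> bytes:
        if path in self.local:                   # hit: served locally
            self.local.move_to_end(path)
        else:                                    # miss: fetch from the hub
            self.local[path] = HUB[path]
            if len(self.local) > self.capacity:  # evict least recently used
                self.local.popitem(last=False)
        return self.local[path]

office = SpokeCache(capacity=2)
office.read("/plans/q1.doc")
office.read("/plans/q2.doc")
office.read("/logo.png")                 # evicts /plans/q1.doc
print(list(office.local))                # ['/plans/q2.doc', '/logo.png']
```

How accurately a vendor predicts which files to keep at the spoke, and how efficiently misses are fetched from the hub, is exactly what the next paragraph says buyers should examine.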
It is important to note that not all vendors claiming to have a cloud file system are created equal. The distribution of data, while critical, is just the first step. Organizations need to examine how accurate the vendor's caching algorithms are and how efficiently they can transfer data to and from the cloud. Beyond that critical first step of data distribution, organizations need to look for solutions that provide high-performance access, global file locking, granular scaling of capacity, and intelligent archiving.
A file system in the cloud is exactly what it sounds like. The vendor creates a file system that offers traditional file protocols like NFS or SMB to cloud-hosted applications. Essentially, the vendor provides an instance of their file system and the organization implements it in the cloud provider of their choice, then allocates the appropriate compute performance and storage IO.
The goal with these file systems is to speed the migration of applications to the cloud. By using a
file system in the cloud the organization does not need to re-write the storage IO components of
the application.
AWS Vs Azure Vs Google: Cloud Services Comparison [Latest]
The most defining cloud battle of the present time is AWS vs Azure vs Google. Choosing one public cloud from AWS, Azure, and Google is a difficult task for anyone who wants to enter and grow in the cloud world. This blog will help you make the right decision!
With the growing importance of Cloud Computing, public cloud services are nowadays in huge demand. This increasing demand is opening the doors to more growth and opportunities for cloud service providers. In order to grow in the cloud market, cloud companies are focused on increasing their services while reducing prices so as to lead the public cloud market.
According to a Gartner survey report, the public cloud market is predicted to grow from $260 billion in 2017 to around $411 billion in 2020.
AWS, Google, and Azure are multi-tenant cloud services based on the cloud computing model in which the cloud service provider supplies resources such as databases, applications, and storage over the Internet.
Public Cloud consists of a wide range of cloud services and products including
The Public Cloud market is governed by the top three public clouds – AWS, Google, and Azure. There is strong competition between these three that is unlikely to be matched by any additional public cloud provider in the near future.
Amazon Web Services has dominated the public cloud over MS Azure and Google since 2006, when it started offering services. Microsoft Azure and Google trail in the race but are growing continuously toward the top.
On the basis of features and solutions, the AWS vs Azure vs Google feature comparison is as follows:
AWS vs Azure vs Google Features Comparison
Cloud Service Providers (CSPs) offer high-quality services with multiple capabilities, excellent
availability, good performance, high security, and customer support. The cloud market is
governed by top three cloud services providers – Amazon Web Services (AWS), Microsoft
Azure, and Google Cloud platform.
Each cloud services provider supplies multiple products according to the user requirements over
the internet. Most commonly used cloud services include –
Compute
Storage
Database
Networking and Content Delivery
Management tools
Development Tools
Security
Let's study the AWS vs Azure vs Google cloud services comparison on the basis of the services offered.
AWS Vs Azure Vs Google: Compute
Compute is a computer's fundamental role, and this domain contains services related to compute workloads. An effective cloud provider has the ability to scale to thousands of nodes in just a few minutes. Amazon EC2 provides core computing services for configuring virtual machines (VMs) using custom or pre-configured AMIs, while Azure provides a VHD (Virtual Hard Disk), similar to Amazon's AMI, for configuring VMs. Google provides the Google Compute Engine for its cloud computing services.
On the basis of services provided by the compute domain, the difference between the top three
public clouds is given by the following table.
Storage is one of the key functions of cloud services; the services offered by the storage domain are related to data storage. AWS provides long-running storage services, while the storage services provided by Microsoft Azure and Google Cloud Platform are also reliable and respectable options. On the basis of services provided by the storage domain, the difference between the top three public clouds is given by the following table.
Both Amazon and Microsoft have been named leaders in Gartner's Infrastructure as a Service Magic Quadrant 2017; Google and IBM are among those following the leaders.
The database domain provides services related to database workloads. It is worth noting here that Azure supports big data as well as both NoSQL and relational databases. On the basis of services provided by the database domain, the difference between the top three public clouds is given by the following table.
AWS vs Azure vs Google: Database Services
Each cloud service provider offers different networks. Amazon’s network is the Virtual Private
Cloud, Azure’s network is the Virtual Network, and Google’s network is the Subnet. On the
basis of services provided by networking & content delivery domain, the difference between the
top three public clouds is given by the table below.
Each of the top three public cloud providers provides a range of monitoring and management
services. These services support performance, infrastructure, workloads, visibility into health,
and utilization of the applications. On the basis of services provided by management &
monitoring domain, the difference between the top three public clouds is given by the following
table.
Development tools are used to build, diagnose, debug, deploy, and manage multiplatform scalable applications and services. On the basis of services provided by the development tools domain, the difference between the top three public clouds is given in the following table.
AWS Vs Azure Vs Google: Development Tools
Amazon provides top-rated cloud security services. Fortinet provides security features for Amazon Virtual Private Cloud (VPC) in many availability zones on demand, while in Microsoft Azure, Fortinet supplies optimized security for data and applications and removes extra security expenditure during migration. The FortiGate Next-Generation Firewall provides advanced security and critical firewalling for Google Cloud Platform (GCP). On the basis of services provided by the security domain, the difference between the top three public clouds is given by the following table.
In revenue terms, Amazon Web Services (AWS) leads the cloud market with a 62% market share, which is more than three times the market share of Azure and five times that of Google Cloud Platform; Microsoft Azure and Google Cloud Platform hold only 20% and 12% market shares respectively.
The revenue, and thus the market share, of Azure and Google is showing considerable growth over time, making Google and Microsoft Azure the other two giants of the cloud market after AWS. Both have the technology, power, wealth, and marketing needed to attract enterprises and individual customers to their services.
According to KeyBanc, Amazon lost 6% of its share, while Microsoft Azure moved from 16% to 20% and Google grew its share from 10% to 12% in the cloud business.
Whether the customer is an individual or an enterprise, all of these cloud providers structure their offerings around a pricing model. This is one of their best characteristics, as you do not need to buy a cloud solution outright. The pricing model these clouds follow is pay as you go, meaning you pay on the basis of usage. Comparing AWS vs Azure vs Google, Amazon charges on an hourly basis, while Azure and Google charge by the minute. One can choose to make advance payments (prepaid) or monthly payments (postpaid).
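The difference between hourly and per-minute billing granularity can be shown with a short sketch. The $0.60/hour rate and the 61-minute job are made-up figures for illustration, not any provider's actual pricing.

```python
# Sketch of how billing granularity changes cost for the same workload.
import math

def hourly_billed(runtime_minutes: int, rate_per_hour: float) -> float:
    """Hourly billing: usage is rounded up to whole hours."""
    return math.ceil(runtime_minutes / 60) * rate_per_hour

def per_minute_billed(runtime_minutes: int, rate_per_hour: float) -> float:
    """Per-minute billing: charge for exactly the minutes used."""
    return round(runtime_minutes * rate_per_hour / 60, 4)

# A 61-minute job at a hypothetical $0.60/hour:
print(hourly_billed(61, 0.60))      # 1.2  (billed as two full hours)
print(per_minute_billed(61, 0.60))  # 0.61 (just over one hour's cost)
```

For short or oddly sized workloads, finer billing granularity can therefore make a meaningful cost difference even when the headline hourly rates are identical.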