
SMART MANUFACTURING

Ms. ROJA RAMANI, M.Tech, (Ph.D.),

Assistant Professor

Department of Information Technology,


Sethu Institute of Technology.
UNIT I INDUSTRY 4.0
Introduction to Industrial revolutions - Industry 4.0 environment - Drivers of industry 4.0 -
Digital integration in smart factory - Cyber Physical System, Internet of Things and Services -
New technologies for future manufacturing - Benefits and Challenges of Industry 4.0

The History Behind Industry 4.0

The First Industrial Revolution

The first industrial revolution, which began in Britain, introduced machines into production toward the end of the 18th century (1760-1840). It marked the shift from manual production to the use of water power and steam-powered engines.

This greatly helped agriculture, and the term "factory" gained currency. The textile industry benefited most from these changes and was the first to adopt such methods. It also constituted a huge part of the British economy at the time.

The Second Industrial Revolution


The second industrial revolution dates to between 1870 and 1914 (although some of its characteristics date back to the 1850s) and brought pre-existing systems such as telegraphs and railroads into industry. Perhaps the defining characteristic of the period was the introduction of mass production as the primary means of production.

The electrification of factories contributed hugely to production rates. The mass production of steel helped introduce railways into the system, which in turn contributed to mass production. Innovations in chemistry, such as the invention of synthetic dye, also mark the period, as chemistry was in a rather primitive state at the time.

However, such revolutionary approaches to industry were brought to a halt by the start of World War I. Mass production, of course, did not end, but subsequent developments stayed within the same context, and none of them can be called an industrial revolution.

The Third Industrial Revolution

Perhaps the third revolution is much more familiar to us than the rest, as most people living today are familiar with industries leaning on digital technologies in production. However, the third industrial revolution is dated to between 1950 and 1970.

It is often referred to as the Digital Revolution, and it brought about the change from analog and mechanical systems to digital ones.

Others call it the Information Age too. The third revolution was, and still is, a direct result of the
huge development in computers and information and communication technology.

The Definition of the Fourth Industrial Revolution and How It Is Different From the Third

The fourth industrial revolution takes the automation of manufacturing processes to a new level by introducing customized and flexible mass-production technologies.

This means that machines will operate independently, or cooperate with humans, in creating a customer-oriented production field that constantly works on maintaining itself. The machine becomes an independent entity that is able to collect data, analyze them, and advise upon them.

This becomes possible by introducing self-optimization, self-cognition, and self-customization into the industry. The manufacturers will be able to communicate with computers rather than operate them.
How will machines communicate?

Rapid changes in information and communication technologies (ICT) have broken the boundaries between virtual reality and the real world. The idea behind Industry 4.0 is to create a social network where machines can communicate with each other, called the Internet of Things (IoT), and with people, called the Internet of People (IoP).

This way, machines can communicate with each other and with the manufacturers to
create what we now call a cyber-physical production system (CPPS). All of this helps industries
integrate the real world into a virtual one and enable machines to collect live data, analyze them,
and even make decisions based upon them.

The fourth Industrial Revolution and the third industrial innovation wave of the
Industrial Internet

As mentioned, in the US, GE and a range of other industrial players (including non-
American ones who are also members of the “Plattform Industrie 4.0”) launched the
Industrial Internet Consortium.

By 2018, only 30 percent of manufacturers investing in digital transformation will be able to maximize the outcome; the rest are held back by outdated business models and technology (IDC).

The Industrial Internet, a term coined (as we wrote previously) by the American industrial giant GE, looked pretty much like Industry 4.0, although in the Boston Consulting Group image above it is mentioned as one of the enabling industrial technologies in the networked machines, products and objects communication sphere of the IIoT.

The difference between Industry 4.0 and the Industrial Internet, however, is that,
originally, the Industrial Internet was seen as the third industrial innovation wave. So, a third
wave of innovation instead of a fourth revolution in the industry.

It only shows how relative revolutionary terms are, as the three industrial Internet innovation waves respectively were:

 The Industrial Revolution. The real one and more or less a combination of the first and
second revolution in the Industry 4.0 view.
 The Internet Revolution: ‘computing power and the rise of distributed information
networks’.
 The Industrial Internet: what is called the fourth industrial revolution in Industry 4.0.

Industry 4.0 challenges and risks

And it is here, in Industry 4.0 as elsewhere, that we find the eternal hurdles. The Boston Consulting Group, among others, identified:

 The definition of a strategy (for Industry 4.0), challenge number one.
 The rethinking of the organization and processes to maximize outcomes.
 Understanding the business case.
 Conducting successful pilots.
 Making the organization realize action is needed.
 Change management, so often overlooked.
 Company culture.
 A true interconnection of departments.
 Talent.
Cyber-physical systems (CPS) in the Industry 4.0 vision

This might still seem complex but, then again, cyber-physical systems are complex.
Moreover, the term isn’t new and is better known in an engineering and industry context.

It fits more in the Operational Technology (OT) side of the converging IT/OT world
which is typical in Industry 4.0 and the Industrial Internet. So, if you want to understand Industry
4.0 or the Industrial Internet, you’ll need an understanding of some essential operational,
production and mechanics terms.

Cyber-physical systems in the Industry 4.0 view are based on the latest control systems, embedded software systems and an IP address (the link with the Internet of Things becomes clearer; although strictly both are not the same, they certainly are twins, as we see in the next ‘chapter’).

In the Industry 4.0 context of mechanics, engineering and so forth, cyber-physical systems are seen as the next stage in an ongoing evolution of enhancement and functions integration.
Looking at Industry 4.0 as the next new stage in the organization and control of the value
chain across the lifecycle of products, this ongoing improvement in which CPS fits started from
mechanical systems, moved to mechatronics (where we use controllers, sensors and actuators,
more terms that are familiar in IoT) and adaptronics, and is now entering this stage of the rise of
cyber-physical systems.

Cyber-physical systems essentially enable us to make industrial systems capable of communicating and networking, which adds to existing manufacturing possibilities.

They result in new possibilities in areas such as structural health monitoring, track and trace, remote diagnosis, remote services, remote control, condition monitoring, systems health monitoring and so forth.

And it’s with these possibilities, enabled by networked and communicating cyber-
physical modules and systems, that realities such as the connected or smart factory, smart health,
smart cities, smart logistics etc. are possible as mentioned previously.

Cyber-physical systems before Industry 4.0

In the original definitions, going back over a decade, IP addresses were not specifically mentioned in cyber-physical systems.

In 2008, Professor Edward A. Lee from the University of California, Berkeley, defined
Cyber-Physical Systems as follows: “Cyber-Physical Systems (CPS) are integrations of
computation and physical processes. Embedded computers and networks monitor and control the
physical processes, usually with feedback loops where physical processes affect computations
and vice versa”.
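Lee's feedback-loop definition can be sketched in a few lines of Python. This is a minimal illustration, not code from the text; the thermostat scenario, names and constants are all assumed:

```python
# Minimal CPS feedback loop: a computation monitors a physical value
# (temperature) and actuates a heater, which in turn changes that value.
# All names and constants here are illustrative assumptions.

def run_loop(temp, setpoint=70.0, steps=20):
    """Simulate a thermostat-style feedback loop and return the trace."""
    trace = []
    for _ in range(steps):
        heater_on = temp < setpoint   # computation reads the sensed value
        if heater_on:
            temp += 2.0               # actuation affects the physical process
        temp -= 0.5                   # ambient heat loss (physical process)
        trace.append(round(temp, 2))
    return trace

trace = run_loop(60.0)
```

The loop settles around the setpoint because, exactly as in the definition, the physical process affects the computation and vice versa on every step.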

On his page on the Berkeley website, Professor Lee links to cyberphysicalsystems.org where you
find his definition and a CPS concept map in the form of a mind map where you can click the
various components to read more. For the German Industrie 4.0 academia and industry people,
CPS (and that bridging of cyber/digital and physical) was key in Industry 4.0.

Cyber-physical systems also include dimensions of simulation and twin models, smart analytics, self-awareness (self-configuration) and more. We’ve tackled some of these topics, including digital twins, previously.

Hopefully, the essence of the concept, context and reality of the evolution towards cyber-physical systems has become a bit clearer now. Note: there is a difference between cyber-physical systems and cyber-physical manufacturing systems or cyber-physical production systems (CPPS), where we move from the technological component to the far more important process and application dimension.

Cyber-Physical Systems
As mentioned above, a cyber-physical system aims at the integration of computation and physical processes. This means that computers and networks are able to monitor the physical process of manufacturing at a given stage. The development of such a system consists of three phases:

 Identification: Unique identification is essential in manufacturing. This is the very basic language by which a machine can communicate. RFID (radio-frequency identification) is a great example: RFID uses an electromagnetic field to identify a tag that is often attached to an object. Although such technology has been around since 1999, it still serves as a great example of how Industry 4.0 operated initially.
 The Integration of Sensors and Actuators: This is essential for a machine to operate. The integration of sensors and actuators simply means that a machine’s movement can be controlled and that it can sense changes in the environment. However, even with sensors and actuators integrated, their use was limited and did not allow machines to communicate with each other.
 The Development of Sensors and Actuators: Such development allowed machines to
store and analyze data. A CPS now is equipped with multiple sensors and actuators that
can be networked for the exchange of information.
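The identification phase above can be sketched as a lookup from a unique tag ID to the object it is attached to. The tag IDs and registry contents below are hypothetical, chosen only to illustrate the idea:

```python
# Phase 1 sketch: unique identification via RFID-style tag lookup.
# Tag IDs and the object registry are made-up examples.

TAG_REGISTRY = {
    "E200-3412-0001": {"object": "gearbox housing", "line": 3},
    "E200-3412-0002": {"object": "motor shaft", "line": 1},
}

def identify(tag_id):
    """Resolve a scanned tag to the object it is attached to."""
    record = TAG_REGISTRY.get(tag_id)
    if record is None:
        raise KeyError(f"unknown tag {tag_id!r}")
    return record

part = identify("E200-3412-0001")
```

Unique identification is the precondition for the later phases: once every object resolves to exactly one record, networked sensors and actuators can exchange information about it unambiguously.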

The Internet of Things (IoT)


A cyber-physical system still sounds familiar to us today. Machines can exchange data
and, in a lot of applications, can sense the changes in the environment around them. Fire alarms
are a good example of that. The Internet of Things, however, is thought to be what truly has
initiated Industry 4.0.

The Internet of Things is what enables objects and machines such as mobile phones and
sensors to “communicate” with each other as well as human beings to work out solutions. The
integration of such technology allows objects to work and solve problems independently. Of
course, this is not entirely true as human beings are also allowed to intervene.

However, in cases of conflicting goals, the issue is usually escalated to a higher level. According to Hermann, Pentek, and Otto, “‘things’ and ‘objects’ can be understood as CPS. Therefore, the IoT can be defined as a network in which CPS cooperate with each other through unique addressing schemas.”

The Internet of Services (IoS)

It is easy to see that in today’s world every electronic device is likely to be connected either to another device or to the internet. With the huge development and diversity in electronic and smart devices, obtaining more and more of them creates complexities and undermines the utility of each added device.

Smart phones, tablets, laptops, TVs and even watches are becoming more and more interconnected, but the more you buy, the less recognizable the added value of the last device becomes. The Internet of Services aims at creating a wrapper that simplifies all connected devices to make the most out of them by simplifying the process. It is the customer’s gateway to the manufacturer.
Internet of Things and cyber-physical systems: similar characteristics

The presence of an IP address by definition means that cyber-physical systems, as objects, are connected to the Internet (of Things). An IP address also means that the cyber-physical system can be uniquely identified within the network. This is a key characteristic of the Internet of Things as well.

The main Internet of Things use case in manufacturing, from a spending perspective, concerns manufacturing operations.

Cyber-physical systems are also equipped with sensors, actuators and all the other elements which are part of the Internet of Things. Cyber-physical systems, just like the Internet of Things, need connectivity. The exact connectivity technologies needed depend on the context (in both).

The Internet of Things consists of objects with embedded or attached technologies that enable them to sense data, collect them and send them for a specific purpose. Depending on the object and goal, this could be capturing data regarding movement, location, presence of gases, temperature or ‘health’ conditions of devices; the list is endless. These data as such are just the beginning; the real value starts when analyzing and acting upon them, within the scope of the IoT project goal.

IoT devices can also receive data and instructions, again depending on the ‘use case’. All this applies to cyber-physical systems as well, which are essentially connected objects. There are more similar characteristics, but you see how much there is in common already.
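The sense-collect-act pattern described above can be sketched as follows; the device names, fields and the temperature limit are illustrative assumptions, not values from the text:

```python
# Sketch of the IoT pattern above: devices sense data, the data is
# collected centrally, and the real value comes from acting on it.
# Device IDs, fields and the 80-degree limit are illustrative.

readings = [
    {"device": "press-01", "metric": "temperature", "value": 72.5},
    {"device": "press-02", "metric": "temperature", "value": 91.0},
    {"device": "agv-07",  "metric": "location",    "value": "dock-B"},
]

def overheating(readings, limit=80.0):
    """Acting on collected data: flag devices above a temperature limit."""
    return [r["device"] for r in readings
            if r["metric"] == "temperature" and r["value"] > limit]

alerts = overheating(readings)
```

The collected readings are just the beginning; the `overheating` check stands in for the analysis-and-action step that gives the data its value.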

CPS-enabled capabilities and Internet of Things use cases

Moreover, the new capabilities which are enabled by cyber-physical systems, such
as structural health monitoring, track and trace and so forth are essentially what we call
Internet of Things use cases.

In other words: what you can do with the Internet of Things. Some of them are used in a
cross-industry way, beyond manufacturing.

Below are two examples of CPS-enabled capabilities we tackled previously and how they really are IoT use cases.

Track and trace possibilities in practice lead to multiple IoT use cases in, among others,
healthcare, logistics, warehousing, shipping, mining and even in consumer-oriented Internet of
Things use cases. There are ample applications of the latter with numerous solutions and
technologies. You can track and trace your skateboard, your pets, anything really, using IoT.

Structural health monitoring is also omnipresent, mainly across industries such as engineering, building maintenance, facility management, etc. With the right sensors and systems you can monitor the structural health of all kinds of objects, from bridges and objects in buildings to the production assets and cyber-physical assets in manufacturing and Industry 4.0.
Smart Factory

Smart factories are a key feature of Industry 4.0. A smart factory adopts a so-called calm system. A calm system is one that is able to deal with both the physical world and the virtual one. Such systems are called “background systems” and in a way operate behind the scenes. A calm system is aware of the surrounding environment and the objects around it.

It also can be fed with soft information regarding the object being manufactured, such as drawings and models, according to Hermann, Pentek, and Otto.


WHAT IS THE SMART FACTORY?

The Smart Factory also goes by other names, which basically mean the same thing. The
more popular ones include “Smart Manufacturing” and “Intelligent Factory”. The names that are
more in use, however, are “Smart Factory” and “Factory of the Future”.

There are several ways to define and describe the Smart Factory. Some describe it as a
factory that will be comprised of systems that are significantly more intelligent and dynamic than
the systems currently in use in manufacturing processes. Others say it is a factory that is certainly
more flexible, and operates under the concept that the processes will be interlinked across
networks.
When we speak of factories getting smart, it means that these manufacturing concerns will focus more on utilizing their best talent and on building industrial infrastructure that is designed to handle increased connectivity among all the sensors and devices involved in a complete production line. This connected factory design, which has become associated with the phrase “Smart Factory”, is expected to increase growth and add value to the entire production chain.

Factories of today follow a certain system, depending on the nature of the operations and functions, and they are organized accordingly. However, in the Smart Factory, which is designed to be more intelligent and flexible, the organization will be done in a different manner. The difference lies in the use of networking. It also has a broader application, since the organization is not done on a per-process basis. Instead, entire production chains are connected with each other.

The usual system would have, for example, suppliers and logistics working separately from the product development process. In the same manner, factory and production planning is treated as independent of enterprise resource planning. Within the Smart Factory, these functions will affect each other.

The present setup of factories also follows a fixed program of operations, which can be very limiting and restrictive. As such, they are not allowed to deviate from what has been previously planned or programmed, even if doing so would give the production process improved efficiencies. In a setting that adopts the Smart Factory, processes can be easily improved as they, as well as the machinery and various equipment used within the process, are designed for self-optimization. Decision-making will also be quick, since autonomy is granted within the processes.

Benefits of Smart Manufacturing

There are many reasons why manufacturers would feel compelled to make that transition
to smart manufacturing and set up their own “factory of the future”.

Real-time capability and results

Smart manufacturing enables real-time collection of data, real-time analysis of the same,
and consequently real-time decision-making.

This is considered to be the ultimate reason why anyone would want to have a Smart Factory. Taking advantage of the benefits of connectivity, networks and communication sensors, to name a few, cuts through the usual “long-way-around” operations. In any manufacturing process, time is money, and the more time spent unnecessarily on a single phase of the process, the more money is wasted.

Streamlining operations

How many times have we come across manufacturers hitting themselves over the head
once they realize they are spending a lot more than they should on production processes that are
not really necessary, or could be simplified? Smart manufacturing makes streamlined operations
possible, so the company will have more savings.
Efficiency

The overall efficiency of the production process – and the organization as a whole – will greatly benefit from the application of smart manufacturing concepts. This is closely related to the results of having streamlined operations. Perhaps this is most apparent in energy consumption. For example, the energy consumption of the old production process can be cut almost in half once the whole process has been re-evaluated and shortened considerably by doing away with stages that are not required or by combining complementary phases.

Employee safety and comfort

It is a fact that not all factories are really employee- or worker-friendly. It could be that
the workers directly involved in the production process find their work environment unsafe or
uncomfortable. With the streamlining of operations, efficiencies are improved and management
will have more time to focus on the welfare of its employees, particularly their safety and
comfort. Thanks to automation, tasks that were usually performed manually will also be more
manageable.

Similarly, the tools and machines that will be used in the production process are also
designed to reduce worker fatigue, as well as pollution. Stress levels will be lowered
considerably, if only for the simple reason that the workplace is more conducive to working,
especially during prolonged periods.

And it is not just the lower-level workers that will benefit from this. Top management, or those that have to make the decisions, will also be aided by information that is up to date and accurate.

Key features or characteristics of a Smart Factory

Out of the many – often confusing – descriptions or definitions of Smart Factory, we can
glean several of the key features or characteristics that define it.

 Automation: Automation is said to be the key component of the Smart Factory. Through
automation, particularly connected automation, factory efficiency will be vastly
improved. This is thanks to labor costs and overhead being reduced, as the operations
have become more streamlined. A smart factory will utilize an infrastructure that is better
equipped to handle and manage a larger number of sensors and connected devices across
the production line, using industrial Ethernet protocols. Lately, we see more automation
options being developed, purposely to be integrated into factory machinery and
equipment, allowing them to communicate with other devices.
 Industrial internet: Smart Factories are supported by the infrastructure known as
Industrial Internet, which is comprised of hundreds and thousands of sensors and devices
that are managed by one central command or operator. This serves as the link that
connects everything together.
 Interconnection of systems within a system: In a Smart Factory, we are not talking of
just one system working within the manufacturing process. More often than not, you will
be looking at one system, which happens to be one among the several other systems that
are interconnected or interlinked within a bigger, broader system. Perhaps the most
identifiable systems within a Smart Factory are the cyber-physical systems and, thanks to
adopting smart manufacturing concepts, each cyber-physical system has the autonomy to
make decisions on its own. In short, the Factory of the Future is pretty much a system of
systems that involves manufacturing systems, work piece carriers, assembly lines, and
automated workstations, all interconnected via the Internet of Things.
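The "system of systems" idea above can be sketched as a central coordinator that registers the interconnected subsystems; all names here are hypothetical:

```python
# Sketch of the "system of systems" view: one central command keeps
# track of the interconnected subsystems. All names are illustrative.

class Coordinator:
    def __init__(self):
        self.systems = {}

    def register(self, name, kind):
        """Connect a subsystem (line, carrier, workstation) to the hub."""
        self.systems[name] = {"kind": kind, "online": True}

    def online(self):
        """List the currently connected subsystems."""
        return sorted(n for n, s in self.systems.items() if s["online"])

hub = Coordinator()
hub.register("assembly-line-1", "assembly line")
hub.register("carrier-12", "work piece carrier")
hub.register("station-A", "automated workstation")
```

The single registry stands in for the central command of the Industrial Internet: each subsystem remains its own system, but all are visible and addressable through one hub.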

INDUSTRY 4.0 DESIGN PRINCIPLES

The design principles allow manufacturers to investigate a potential transformation to Industry 4.0 technologies. Based on the components above, the following are the design principles:

 Interoperability: Objects, machines and people need to be able to communicate through the Internet of Things and the Internet of People. This is the most essential principle that truly makes a factory a smart one.
 Virtualization: CPSs must be able to simulate and create a virtual copy of the real world.
CPSs must also be able to monitor objects existing in the surrounding environment.
Simply put, there must be a virtual copy of everything.
 Decentralization: The ability of CPSs to work independently. This gives room for customized products and problem solving, and creates a more flexible environment for production. In cases of failure or of conflicting goals, the issue is delegated to a higher level. However, even with such technologies implemented, the need for quality assurance remains across the entire process.
 Real-Time Capability: A smart factory needs to be able to collect real-time data, store or analyze it, and make decisions according to new findings. This is not limited to market research but also applies to internal processes, such as the failure of a machine in the production line. Smart objects must be able to identify the defect and re-delegate tasks to other operating machines. This also contributes greatly to the flexibility and optimization of production.
 Service-Orientation: Production must be customer-oriented. People and smart
objects/devices must be able to connect efficiently through the Internet of Services to
create products based on the customer’s specifications. This is where the Internet of
Services becomes essential.
 Modularity: In a dynamic market, a Smart Factory’s ability to adapt to a new market is
essential. In a typical case, it would probably take a week for an average company to
study the market and change its production accordingly. On the other hand, smart
factories must be able to adapt fast and smoothly to seasonal changes and market trends.
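The Real-Time Capability and Decentralization principles can be sketched together: when a machine reports a failure, its queued tasks are re-delegated to the remaining machines. The machine names and the round-robin policy below are assumptions for illustration:

```python
# Sketch of real-time re-delegation: when a machine fails, its pending
# tasks are redistributed round-robin over the healthy machines.
# Machine names and the round-robin policy are illustrative assumptions.

from itertools import cycle

def redelegate(queues, failed):
    """Move the failed machine's tasks onto the remaining machines."""
    orphaned = queues.pop(failed)          # defect identified, machine removed
    targets = cycle(sorted(queues))        # remaining machines, round-robin
    for task in orphaned:
        queues[next(targets)].append(task)
    return queues

queues = {"mill-1": ["t1"], "mill-2": ["t2"], "mill-3": ["t3", "t4"]}
queues = redelegate(queues, "mill-3")
```

No central re-planning step is needed: the decision is taken locally from the current queue state, which is the kind of autonomy the Decentralization principle describes.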

Advantages of Industry 4.0

 Optimization: Optimizing production is a key advantage of Industry 4.0. A Smart Factory containing hundreds or even thousands of smart devices able to self-optimize production will lead to almost zero downtime in production. This is extremely important for industries that use high-end, expensive manufacturing equipment, such as the semiconductor industry. Being able to utilize production constantly and consistently will profit the company.
 Customization: Creating a flexible, customer-oriented market will help meet the population’s needs quickly and smoothly. It will also close the gap between the manufacturer and the customer, as communication will take place between the two directly. Manufacturers won’t have to relay communication internally (within companies and factories) and externally (to customers) separately. This speeds up the production and delivery processes.

 Pushing Research: The adoption of Industry 4.0 technologies will push research in various fields, such as IT security, and will affect education in particular. A new industry will require a new set of skills. Consequently, education and training will take a new shape that provides such an industry with the required skilled labor.

Challenges facing Industry 4.0

 Security: Perhaps the most challenging aspect of implementing Industry 4.0 techniques is the IT security risk. This online integration will open the door to security breaches and data leaks. Cyber theft must also be taken into consideration. In this case, the problem is not individual; it can, and probably will, cost producers money and might even hurt their reputation. Therefore, research in security is crucial.
 Capital: Such a transformation will require a huge investment in new technology that doesn’t come cheap. The decision to make such a transformation will have to be taken at CEO level. Even then, the risks must be calculated and taken seriously. In addition, such a transformation will require huge capital, which alienates smaller businesses and might cost them their market share in the future.
 Employment: While it remains too early to speculate on employment conditions with the global adoption of Industry 4.0, it is safe to say that workers will need to acquire a different or an entirely new set of skills. This may help employment rates go up, but it will also alienate a big sector of workers. Workers whose jobs are repetitive will face a challenge in keeping up with the industry. Different forms of education must be introduced, but they still don’t solve the problem for the older portion of the workforce. This is an issue that might take longer to solve and will be further analyzed later in this report.
 Privacy: This is not only the customer’s concern, but also the producer’s. In such an interconnected industry, producers need to collect and analyze data. To the customer, this might look like a threat to their privacy, and it is not exclusive to consumers: small or large companies that haven’t shared their data in the past will have to work their way toward a more transparent environment. Bridging the gap between the consumer and the producer will be a huge challenge for both parties.

THE FUTURE WORKFORCE

Industry 4.0 has a lot to promise when it comes to revenues, investment, and
technological advancements, but employment still remains one of the most mysterious
aspects of the new industrial revolution. It’s even harder to quantify or estimate the
potential employment rates.

What kind of new jobs will it introduce? What does a Smart Factory worker need to have to be able to compete in an ever-changing environment such as this? Will such changes lay off many workers? All of these are valid questions for the average worker.

Industry 4.0 might be the peak of technological advancement in manufacturing, but it still sounds as if machines are taking over the industry. Consequently, it is important to further analyze this approach in order to draw conclusions on the demographics of labor in the future. This will help today’s workers prepare for a not-so-far future. Given the nature of the industry, it will introduce new jobs for big data analysts, robotics experts, and a large number of mechanical engineers. In an attempt to determine the types of jobs that Industry 4.0 will introduce or need more labor in, BCG has published a report based on interviews with 20 of the industry’s experts to showcase how 10 of the most essential use cases for the foundation of the industry will be affected.

The following are some of the important changes that will affect the demographics of
employment:

 Big-Data-Driven Quality Control: In engineering terms, quality control aims at reducing the inevitable variation between products. Quality control depends to a large extent on statistical methods to show whether a specific feature of a product (such as size or weight) is changing in a way that can be considered a pattern. Such a process depends largely on collecting real-time or historical data regarding the product. However, since Industry 4.0 will rely on big data for this, the need for quality control workers will decrease, while the demand for big data scientists will increase.
 Robot-Assisted Production: The entire basis of the new industry relies of the smart
devices being able to interact with the surrounding environment. This means that workers
who assist in production (such as packaging) will be laid off and be replaced with smart
devices equipped with cameras, sensors, and actuators that are able to identify the
product and then deliver the necessary changes for it. Consequently, the demand for such
workers will drop and will be replaced with “robot coordinators”.
 Self-Driving Logistics Vehicles: One of the most important focuses of optimization is
transportation. Engineers use linear programming methods (such as the Transportation
Model) to utilize the use of transportation. However, with self-driven vehicles, and with
the assistance of big data, so many drivers will be laid off. In addition, having self-driven
vehicles allows for restriction-free working hours and higher utility.
 Production Line Simulation: While the need for optimization for transportation
declines, the need for industrial engineers (who typically work on optimization and
simulation) to simulate productions lines will increase. Having the technology to simulate
production lines before establishment will open up jobs for mechanical engineers
specializing in the industrial field.
 Predictive Maintenance: Having smart devices will allow manufacturers to predict
failures. Smart machines will be able to also independently maintain themselves.
Consequently, the number of traditional maintenance technicians will drop, and they’ll be
replaced with more technically informed ones.
 Machines as a Service: The new industry will also allow manufactures to sell a machine
as a service. This means that instead of selling the entire machine to the client, the
machine will be set-up and maintained by the manufacturer while the client takes
advantage of the services it provides. This will open up jobs in maintenance and will
require an expansion in sales.
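The statistical idea behind big-data-driven quality control above can be sketched in a few lines. This is a minimal, illustrative Shewhart-style check (control limits at the mean plus or minus k standard deviations); the function names and the sample data are hypothetical, not taken from any specific Industry 4.0 system.

```python
from statistics import mean, stdev

def control_limits(samples, k=3.0):
    """Simple control limits (mean +/- k sigma) computed from
    historical measurements of a product feature (e.g., size or weight)."""
    m = mean(samples)
    s = stdev(samples)
    return m - k * s, m + k * s

def out_of_control(samples, new_value, k=3.0):
    """Flag a new measurement outside the control limits, i.e.,
    variation that may indicate a pattern rather than random noise."""
    lo, hi = control_limits(samples, k)
    return not (lo <= new_value <= hi)

# Hypothetical historical weights (grams), then a suspect new reading
history = [100.2, 99.8, 100.1, 99.9, 100.0, 100.3, 99.7, 100.1]
print(out_of_control(history, 103.5))  # reading far outside mean +/- 3 sigma
```

In a big-data setting the same test would run continuously over streaming sensor data rather than a small fixed sample, which is exactly where the demand shifts from manual inspectors to data scientists.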

FINAL THOUGHTS

Industry 4.0 is definitely a revolutionary approach to manufacturing techniques. The
concept will push global manufacturers to a new level of optimization and productivity.
Not only that, but customers will also enjoy a new level of personally customized
products that may have never been available before. As mentioned above, the economic
rewards are immense.
However, there are still many challenges that need to be tackled systematically to ensure
a smooth transition. This needs to be the focus of large corporations and governments
alike. Pushing research and experimentation in such fields is essential.

While speculations regarding privacy, security, and employment need more study, the
overall picture is promising. Such an approach to manufacturing industries is truly
revolutionary.
UNIT II

ADDITIVE MANUFACTURING

2.1 Introduction

Additive Manufacturing (AM) is an appropriate name to describe the technologies that
build 3D objects by adding layer-upon-layer of material, whether the material is plastic, metal,
concrete or, one day… human tissue.

Common to AM technologies is the use of a computer, 3D modeling software (Computer-Aided
Design, or CAD), machine equipment and layering material. Once a CAD sketch is produced,
the AM equipment reads in data from the CAD file and lays down or adds successive layers of
liquid, powder, sheet material or other material, in a layer-upon-layer fashion to fabricate a 3D object.

The term AM encompasses many technologies including subsets like 3D Printing, Rapid
Prototyping (RP), Direct Digital Manufacturing (DDM), layered manufacturing and additive
fabrication.

AM applications are limitless. Early use of AM in the form of Rapid Prototyping focused on
preproduction visualization models. More recently, AM is being used to fabricate end-use
products in aircraft, dental restorations, medical implants, automobiles, and even fashion
products.

While the adding of layer-upon-layer approach is simple, there are many applications of AM
technology with degrees of sophistication to meet diverse needs including:

+ a visualization tool in design
+ a means to create highly customized products for consumers and professionals alike
+ as industrial tooling
+ to produce small lots of production parts
+ one day… production of human organs

2.1.1 Examples of Additive Manufacturing (AM)


SLA (Stereolithography)
A very high-end technology utilizing laser technology to cure layer-upon-layer of
photopolymer resin (a polymer that changes properties when exposed to light).
The build occurs in a pool of resin. A laser beam, directed into the pool of resin, traces the cross-
section pattern of the model for that particular layer and cures it. During the build cycle, the
platform on which the build rests is repositioned, lowering by a single layer thickness. The process
repeats until the build or model is completed, and it is fascinating to watch. Specialized material may
be needed to add support to some model features. Models can be machined and used as patterns
for injection molding, thermoforming or other casting processes.

FDM (Fused Deposition Modeling)
A process involving the use of thermoplastic (a polymer that changes to a liquid upon the
application of heat and solidifies when cooled) materials injected through indexing
nozzles onto a platform. The nozzles trace the cross-section pattern for each particular layer, with
the thermoplastic material hardening prior to the application of the next layer. The process
repeats until the build or model is completed. Specialized material may
be needed to add support to some model features. Similar to SLA, the models can be machined or
used as patterns. Very easy to use and cool.

MJM (Multi-Jet Modeling)


Multi-Jet Modeling is similar to an inkjet printer in that a head, capable of shuttling back
and forth in three dimensions (x, y, z), incorporates hundreds of small jets to apply a layer of
thermopolymer material, layer-by-layer.

3DP
This involves building a model in a container filled with powder of either starch- or plaster-
based material. An inkjet printhead shuttles back and forth, applying a small amount of binder to form
a layer. Upon application of the binder, a new layer of powder is swept over the prior layer, followed
by the application of more binder. The process repeats until the model is complete. As the model is
supported by loose powder, there is no need for support structures. Additionally, this is the only
process that builds in colors.

SLS (Selective Laser Sintering)


Somewhat like SLA technology, Selective Laser Sintering (SLS) utilizes a high-powered laser to
fuse small particles of plastic, metal, ceramic or glass. During the build cycle, the platform on
which the build rests is repositioned, lowering by a single layer thickness. The process repeats until
the build or model is completed. Unlike SLA technology, support material is not needed as the
build is supported by unsintered material.

2.2 Additive Manufacturing Processes Classification

Additive manufacturing processes are classified into seven areas on the basis of:

 Type of materials used


 Deposition technique, and
 The way the material is fused or solidified

These classifications have been developed by the ASTM International Technical Committee F42
on additive manufacturing technologies. The work of this Committee focuses on the promotion
of knowledge, stimulation of research, and implementation of technology through the
development of standards.

Classification of Additive Manufacturing Systems


A better way is to classify AM systems broadly by the initial form of the build material. On this
basis, all AM systems can be easily categorized into:
1) Liquid Based
2) Solid Based
3) Powder Based

Liquid Based Additive Manufacturing Systems:


Building material is in the liquid state.
The following AM Systems fall into this category:
1) Stereolithography Apparatus (SLA)
2) PolyJet 3D Printing
3) MultiJet Printing (MJP)
4) Solid Object Ultraviolet-Laser Printer (SOUP)
5) Rapid Freeze Prototyping

Solid Based Additive Manufacturing Systems


Building material is in the solid state (except powder). The solid form can include wire, rolls,
laminates and pellets.

The following AM Systems fall into this category:


1) Fused Deposition Modeling (FDM)
2) Selective Deposition Lamination (SDL)
3) Laminated Object Manufacturing (LOM)
4) Ultrasonic Consolidation

Powder Based Additive Manufacturing Systems


Building material is powder (grain-like form). All powder-based AM systems employ the
joining/binding method.
The following AM Systems fall into this category:
1) Selective Laser Sintering (SLS)
2) ColorJet Printing (CJP)
3) Laser Engineered Net Shaping (LENS)
4) Electron Beam Melting (EBM), etc.

The Technologies of Additive Manufacturing Processes

1. Vat Photopolymerization
2. Material Jetting
3. Binder Jetting
4. Material Extrusion
5. Powder Bed Fusion
6. Sheet Lamination
7. Directed Energy Deposition

Material Extrusion

Fused Deposition Modeling (FDM) is one of the most common and widely used
additive manufacturing technologies. FDM was trademarked by Stratasys Inc., and hence the
separate name Fused Filament Fabrication (FFF) is used to avoid infringement issues.
Above: Material Extrusion type FDM 3D Printer

In this method, material in a filament form is drawn through a nozzle, is heated and then
extruded and deposited onto the build platform in a layer-by-layer process. FFF 3D Printers are
most commonly Cartesian type where the nozzle moves in X & Y-direction whereas the build
platform moves in the Z-direction.

Extrusion 3D printers are inexpensive and offer quick prototyping of simple parts. It is
usually used for printing household items, characters, toys, games and other similar products.
The parts have a rough surface finish, as the maximum resolution is around 100 microns.

Vat Photopolymerization

Vat Photopolymerization uses a vat of liquid photosensitive polymer resin. This resin
hardens on exposure to UV light. This property is used to build objects layer-by-layer.

The Carbon CLIP Technology by Carbon3D


A vat filled with the liquid photopolymer resin is exposed to UV light in a controlled
environment and the geometry of the object to be printed is traced out. The exposed resin
hardens (called curing) and a solid layer is formed. This process continues till the complete
object is printed.

The resins are polymer compounds with additives for specific applications (Tough,
Flexible, Dental, etc.). These processes impart a high-quality surface finish to the object. The most
common processes in this category are Stereolithography (SLA), Digital Light Processing
(DLP) and Carbon’s trademarked Digital Light Synthesis (DLS), based on its patented
technology called Continuous Liquid Interface Production (CLIP).

Powder Bed Fusion

Powder-bed fusion (PBF) is an additive manufacturing technology which fuses powdered
material to additively create/build 3D objects. Technologies which operate on this principle
include Selective Laser Sintering (SLS), Direct Metal Laser Sintering (DMLS), Selective Laser
Melting (SLM), Electron Beam Melting (EBM), and Selective Heat Sintering (SHS).

The Powder-bed fusion process uses a laser or an electron beam to sinter, melt and fuse
the powder particles together while it traces the cross-section of the object to be created. On
completion of the first layer, the powder dispensing unit spreads a new layer of powder onto the
build platform and the printing continues for the next layer. This process continues till the
complete object is built.

Material Jetting

The Material Jetting process operates in a similar way to a regular two-dimensional inkjet printer.
Material in the form of liquid droplets is dispensed from multiple printheads similar to those in
an inkjet printer. The material is a photosensitive polymer which hardens on exposure to UV light,
thereby building the part layer-by-layer.
Above: Half Glossy, Half Matte finish

This additive manufacturing technology is used for building parts with high dimensional
accuracy and smooth surface finish. In fact, parts can be printed in glossy as well as matte finish
with equal accuracy. It is a multi-material technology which enables full-color printing. The
technology is also called Drop-on-Demand (DOD), as it uses printheads which dispense liquid
material to create wax-like parts. It is mostly used for creating investment casting patterns.

Binder Jetting

Binder Jetting is similar to material jetting, but it uses two materials in place of one: a
powdered base material and a binder material. The binder is dispensed onto
the powdered material in the build chamber and acts as the binding agent for adhesion of
individual layers.

Above: Inkjet print heads dispensing binder

The binder is usually liquid and is dispensed from printheads which move in the X & Y-
directions. The printhead deposits the binder as per the geometry of the object to be built. After each
layer of printing, the build chamber drops down, a new layer of powder is spread on top of
the previous layer, and the printhead again traces the cross-section of the object, binding the
previous and current layers together.

This process is relatively fast, but binder jetting parts are not recommended for use in structural
applications. The unused powder acts as a support to the object, so no separate
support structure is needed.

Sheet Lamination
The Sheet Lamination process includes two types of manufacturing techniques,
Ultrasonic Additive Manufacturing (UAM) and Laminated Object Manufacturing (LOM).

In Ultrasonic Additive Manufacturing (UAM), sheets or ribbons of metal are bound
together using ultrasonic welding. After the welding, the part does not require any additional step
of machining or removal of material. Different metals like aluminum, copper, steel, and titanium
can be joined together, which allows for greater flexibility in the strength requirements of the part. It
requires relatively less energy as the metals are not melted.

Laminated Object Manufacturing (LOM) uses sheets of paper as the base material and
adhesive in place of welding. The paper is fed with the help of rollers and a laser traces the cross-
section of the object. It uses a cross-hatch method during the printing process so the completed
part is easy to remove. Objects manufactured using LOM are not fit for structural use and can
only be used for aesthetic purposes.

Directed Energy Deposition

Directed Energy Deposition (DED) is an additive manufacturing technology used for 3D
printing of metals and alloys. It can be used for polymers, glass and ceramics but is not popularly
used for those materials.

In DED, a nozzle mounted on a multi-axis arm deposits material in wire form (known as the
feed), while a focused energy source (a laser or an electron beam) melts the feed as the nozzle
moves across, tracing the object geometry. Laser-based DED is also called Laser Engineered Net
Shaping (LENS), 3D laser cladding, directed light fabrication or direct metal deposition.

In the DED process the nozzle supplying the material is not restricted to any specific axis but
can be moved at various angles thanks to 4- and 5-axis machines. This method is not only used to
build new objects but is also used for adding material to existing models for repairs.

This additive manufacturing technology uses materials like Titanium, Cobalt Chrome and
Tantalum (a rare metal). The most common applications of the DED method are in the aerospace
and automotive industries.
Important Technologies of AM
In the next few sections, we are going to discuss these three AM technologies:
1) Stereolithography Apparatus (SLA)
2) Fused Deposition Modeling (FDM)
3) Selective Laser Sintering (SLS)

Stereolithography
One of the most important additive manufacturing technologies currently available. The
first ever commercial RP systems were resin-based systems commonly called stereolithography
or SLA. The resin is a liquid photosensitive polymer that cures or hardens when exposed to
ultraviolet radiation.
This technique involves the curing or solidification of a liquid photosensitive polymer
through the use of an irradiating light source.
The source supplies the energy that is needed to induce a chemical reaction (curing reaction),
bonding a large number of small molecules and forming a highly cross-linked polymer.
Fused deposition modeling
 Fused Deposition Modeling (FDM) was produced and developed by Stratasys, USA.
 FDM uses a heating chamber to liquefy polymer that is fed into the system as a filament.
 The filament is pushed into the chamber by a tractor wheel arrangement, and it is this
pushing that generates the extrusion pressure.
 The major strength of FDM is in the range of materials and the effective mechanical
properties of resulting parts made using this technology.
 Parts made using FDM are amongst the strongest for any polymer-based additive
manufacturing process.
Materials for FDM
The most popular material is the ABSplus material, which can be used on all current Stratasys
FDM machines.
Some machines also have an option for ABS blended with polycarbonate.

Note that FDM works best with polymers that are amorphous
in nature rather than highly crystalline polymers.
This is because the polymers that work best are those that are extruded as a viscous paste rather
than in a lower-viscosity form.
In amorphous polymers, there is no distinct melting point; the material increasingly softens
and its viscosity lowers with increasing temperature.
The viscosity at which these amorphous polymers can be extruded under pressure is high enough
that their shape will be largely maintained after extrusion, maintaining the extrusion shape and
enabling them to solidify quickly and easily.
Limitations of FDM
 Sharp features or corners are not possible to obtain;
 Part strength is weak perpendicular to the build axis;
 More area in slices requires longer build times;
 Temperature fluctuations during production could lead to delamination.

Selective Laser Sintering

 Layer thickness: nearly 0.1 mm;
 The part building takes place inside an enclosed chamber filled with nitrogen gas to
minimize oxidation and degradation of the powdered material;
 The powder in the build platform is maintained at an elevated temperature just below
the melting point and/or glass transition temperature of the powdered material;
 Infrared heaters are used to maintain an elevated temperature around the part being
formed;
 A focused CO2 laser beam is moved on the bed in such a way that it thermally fuses the
material to form the slice cross-section;
 Surrounding powders remain loose and serve as support for subsequent layers.

Advantages
1) A distinct advantage of the SLS process is that it is fully self-supporting, so no support
structures are needed
2) Parts possess high strength and stiffness
3) Good chemical resistance
4) Various finishing possibilities (e.g., metallization, stove enameling, vibratory grinding, tub
coloring, bonding, powder coating, flocking)
5) Complex parts with interior components and channels can be built without trapping the material
inside
6) Fastest additive manufacturing process
Disadvantages
SLS printed parts have surface porosity. Such porosity can be sealed by applying a sealant such as
cyanoacrylate.
Introduction to Reverse Engineering
Reverse engineering is the process of extracting knowledge or design information from anything
man-made, and reproducing it or reproducing anything based on the extracted information.
The process often involves disassembling something (a mechanical device, electronic component,
computer program, or biological, chemical, or organic matter) and analyzing its components and
workings in detail.
Motivation for Reverse Engineering
Interfacing: Reverse engineering can be used when a system is required to interface to another
system.
Military or commercial espionage: Learning about an enemy’s or competitor’s latest research by
stealing or capturing a prototype and dismantling it.
Product security analysis: To examine how a product works, what the specifications of its
components are, estimate costs and identify potential patent infringement.
Academic/learning purposes: Reverse engineering for learning purposes may be used to understand
the key issues of an unsuccessful design and subsequently improve the design.
Saving money: When one finds out what a piece of electronics is capable of, it can spare a user
from the purchase of a separate product.

Generic AM Processes:
Additive Manufacturing Process Chain
A series of steps goes into the process chain required to generate a useful physical part
from the concept of that part using additive manufacturing processes.
Depending on the technology and, at times, the machines and components, the process chain is
mainly made up of six steps:
• Generation of CAD model of the design;
• Conversion of CAD model into AM machine acceptable format;
• CAD model preparation;
• Machine setup;
• Part removal;
• Post-processing.
These steps can be grouped or broken down and can look different from case to case, but
overall the process chain of one technology remains similar to that of a different technology.
The process chain is also constantly evolving and can change as the existing technologies develop
and new technologies surface. In this text, the focus will be on the powder bed metal technology.
Therefore, the process chain for this technology will be discussed in detail, while others will be
mentioned only briefly.
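The six-step chain above can be sketched as a simple pipeline. Every stage name below is an illustrative placeholder invented for this sketch; the real steps involve CAD software, pre-process tools and machine hardware rather than Python functions.

```python
# A minimal sketch of the six-step AM process chain as a pipeline of
# placeholder stages (all names here are hypothetical, not a real API).
def generate_cad_model(concept):   return {"design": concept}
def convert_to_stl(model):         return {**model, "format": "STL"}
def prepare_model(stl):            return {**stl, "supports": True, "sliced": True}
def setup_machine(build_file):     return {**build_file, "machine_ready": True}
def remove_part(build):            return {**build, "removed": True}
def post_process(part):            return {**part, "finished": True}

STAGES = [generate_cad_model, convert_to_stl, prepare_model,
          setup_machine, remove_part, post_process]

def run_process_chain(concept):
    """Pass the part through each stage in order; each stage's output
    is the next stage's input, mirroring the hand-offs in the text."""
    state = concept
    for stage in STAGES:
        state = stage(state)
    return state

print(run_process_chain("bracket"))
```

The point of the sketch is only the hand-off structure: each step consumes the previous step's output, which is why errors introduced early (e.g., a bad STL) propagate into every later stage.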

Generation of Computer-Aided Design


In any product design process the first step is to imagine and conceptualize the function
and appearance of the product. This can take the form of textual descriptions, sketches, or 3-
dimensional computer models. In terms of the process chain, the first enabler of AM technologies
is the 3D digital Computer-Aided Design (CAD) model, where the conceptualized product exists
in a “computer” space and the values of its geometry, material, and properties are stored in digital
form and are readily retrievable.
In general the AM process chain starts with 3D CAD modeling. The process of producing
a 3D CAD model from an idea in the designer’s mind can take on many forms, but all require
CAD software programs. The details of these programs and the technology behind them are
outside of the scope of this text, but these programs are a critical enabler of a designer’s ability to
generate a 3D CAD model that can serve as the start of an AM process chain. There are a large
number of CAD programs with different modeling principles, capabilities, accessibility, and cost.
Some examples include Autodesk Inventor, SolidWorks, Creo, NX, etc.
2.2 Conversion of CAD Model into AM Machine Acceptable Format
Almost all AM technology available today uses the StereoLithography (STL) file format.
The STL format of a 3D CAD model captures all surfaces of the 3D model by means of
stitching triangles of various sizes onto its surfaces. The spatial locations of the vertices of each
triangle and the vector normal to each triangle, when combined, allow AM pre-process programs
to determine the spatial locations of surfaces of the part in a build envelope, and on which side of
a surface the interior of the part lies.
Although the STL format has been considered the de facto standard, it has limitations
intrinsic to the fact that only geometry information is stored in these files, while all
other information that a CAD model can contain is eliminated.
Information such as units, color, material, etc. that can play a critical role in the functionality of
the built part is lost through the file translation process. As such, the format places limitations on
the functionality of the finished parts. The “AMF” format was developed specifically to address
these issues and limitations, and is now the ASTM/ISO standard format. Beyond geometry
information, it also contains dimensions, color, material, and additional information. Though the
predominant format currently used by AM systems and supported by CAD modeling programs is
still the STL format, an increasing number of CAD program companies, including several major
programs, have included support for the AMF file format. Currently, actual use of the information
stored in the AMF file is still limited due to the capabilities of current AM systems and the state
of current technology development.
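The triangle-plus-normal representation described above can be illustrated with a short sketch. This is not a full STL writer, just a minimal assumed example showing how a facet's normal follows from its vertices and how little information (geometry only, no units, color or material) the ASCII STL syntax carries.

```python
def facet_normal(v0, v1, v2):
    """Unit normal of a triangle facet via the cross product of two edges.
    STL files store this normal alongside the three vertex coordinates."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

def ascii_stl_facet(v0, v1, v2):
    """Render one triangle in ASCII STL syntax: only geometry survives;
    units, color and material (the gap the AMF format fills) are absent."""
    n = facet_normal(v0, v1, v2)
    lines = ["facet normal %g %g %g" % n, "  outer loop"]
    for v in (v0, v1, v2):
        lines.append("    vertex %g %g %g" % tuple(v))
    lines += ["  endloop", "endfacet"]
    return "\n".join(lines)

# A single facet lying in the XY plane; its normal points along +Z
print(ascii_stl_facet((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```

The vertex ordering matters: reversing two vertices flips the computed normal, which is exactly the "inverted normal" error class discussed in the STL repair step later in this chapter.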

2.3 CAD Model Preparation


Once a correct STL file is available, a series of steps is required to generate the
information an AM system needs to start the build process. The needed information varies
depending on the technology, but in general these steps start with repairing any errors within the
STL file. Typical errors can be gaps between surface triangle facets, or inverted normals where
the “wrong side” of a triangle facet is identified as the interior of the part. Once the errors have
been repaired, a proper orientation of the 3D model with respect to the build platform/envelope is
decided. Following the orientation, the geometry, density, and placement of support structures are
decided, generated in 3D model space and assigned to the part model. The process then
progresses to slicing the 3D model defined by the STL file, as well as the support structure, into a
given number of layers of a desired height, each representing a slice of the part and support
models. Within each slice the cross-sectional geometry is kept constant.
Once 2D slices are obtained, a “unit” area representing the smallest material placement is
then used, combined with a set of strategies, to fill the area within each layer that is enclosed by
the surface of the part and the support. For the SLA process, the unit area is related to the laser
spot size and its intensity distribution as well as absorption by the monomer, and the strategy for
filling the enclosed area in one layer is the path in which the laser rasters the resin surface.
At this point the surface data originally in the STL file have been processed into machine-specific
information that allows placement of the material unit into the desired location in a controlled
manner to construct the physical model layer by layer (Fig. 2.2).
Specifically for metal powder bed processes, an STL file is first imported into software that
allows repairing and manipulating of the file, as well as the generation of support, and the slicing
of the part and support models. The sliced data are then transferred into the AM system machine
for build preparation and the start of the building process. There are a number of software
programs that allow these tasks to be carried out; Magics by Materialise, for example, is one such
software program that is capable of integrating all CAD model preparation steps into one
program and generating data files directly accepted by powder bed machine systems. In the
sections below, additional details of model preparation specific to the powder bed metal process
are described.
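The slicing step described above, cutting the 3D model into layers of constant cross-section, can be sketched for the simplest case: intersecting each surface triangle with a horizontal plane. This is an illustrative toy slicer under the assumption of a well-formed mesh; production slicers (such as those integrated in Magics) handle many more cases, including vertices lying exactly on the slicing plane, contour stitching and support geometry.

```python
def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane at
    height z, returning the 2D segment of the cross-section, or None.
    Simplification: edges with a vertex exactly on the plane are ignored."""
    pts = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (a[2] - z) * (b[2] - z) < 0:        # edge crosses the plane
            t = (z - a[2]) / (b[2] - a[2])     # linear interpolation factor
            pts.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return tuple(pts) if len(pts) == 2 else None

def slice_model(triangles, layer_height, total_height):
    """Slice a triangle list into layers, sampling mid-layer heights so the
    cross-section is constant within each layer, as described in the text."""
    layers = []
    z = layer_height / 2
    while z < total_height:
        segs = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append(segs)
        z += layer_height
    return layers

# One upright triangle spanning z = 0 to 10 mm, sliced at 0.1 mm layers
tri = ((0, 0, 0), (10, 0, 0), (0, 0, 10))
print(len(slice_model([tri], 0.1, 10.0)))  # 100 layers
```

Each layer's list of segments is the raw input to the fill strategy discussed next: the machine still needs a path plan (contour plus in-fill) to cover the enclosed area with the material "unit".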
Fig. 2.2 Support structure generated on the model

2.3.1 STL File Preparation


The CAD model preparation starts with importing an STL file, or other compatible file
formats, into the pre-process software program (e.g., Magics). Once imported, the dimensions
can be modified if needed. Once the model is at the desired dimensions, a series of steps is carried
out to correct possible errors in the model file. These errors can include missing triangles,
inverted or double triangles, transverse triangles, open edges and contours, and shells. Each type
of error can cause issues in the building process or result in incorrect parts and geometries. While
some errors such as shells and double triangles are non-critical and can sometimes be tolerated,
errors such as inverted triangles and open contours can cause critical issues in the building
process and need to be resolved during STL preparation.

2.3.2 Support Generation


For the metal powder bed process, the primary function of the “support” structure is to
extract heat from the model and to provide anchor surfaces and features to the build plate to avoid
warpage due to thermal stresses during and after the build. It does not “support” the part against
gravity, which would cause overhanging or angled features to fail to build.
Generation of support structures in powder bed processes can be accomplished in a few
different ways. As with any other AM process, the first way is to generate the support
structures during CAD modeling and design the supports to be features of the geometry of the part.
Alternatively, the support structures can be generated in the STL pre-process software program.
This second approach provides much more flexibility in terms of being able to tailor the
structures based on detailed needs. For example, since the support structure is only used
during the build process and is removed during post-processing, the amount of material that goes
into it needs to be minimized. However, since the primary function of the support is to conduct
heat and provide a mechanical anchor, a minimum amount of cross-sectional area in the support
is needed for it to be functional. Optimization of the volume, geometry, location, and the part-
support interface geometry is important and part dependent. Therefore, careful design of the
support structure plays a critical role in the success of a build process. Figure 2.2 shows an
example of support generated in CAD model space.

2.3.3 Build File Preparation


Once the CAD models of the part and support are generated and prepared in the pre-process
software program, a slicer program is used to divide the models into layers in the build direction
based on the desired layer thickness. For typical metal powder bed systems the layer thickness
can be anywhere from 25 microns to close to 100 microns. Typical thicknesses used are 25
microns for high-resolution builds and 50–75 microns or higher for high-rate applications. Layer
thickness is also correlated to the powder dimension and distribution. Ideally the layer thickness
would be slightly larger than the mean diameter of the powder to achieve high coupling of laser
energy input into the absorption, heating and melting of the powder, and re-melting of the
previous layer. At larger thicknesses, scattering of the optical energy input may not be sufficient
to allow uniform optical energy absorption and heating, resulting in partial melting.
Within each slice, the slicer software also determines the rastering path that the energy
beam takes to fully melt the entire layer. How a melt pool created by an optical beam can move in
an enclosed area to ensure every part of it is covered is analogous to how an end-milling tool does
a pocket-milling operation to create an enclosed 2D area with a depth on a surface. Figure 2.3
shows how the use of a contour scan coupled with in-fill fully covers an area. As seen in the
illustration, an important factor is the beam diameter, which is captured by the beam
compensation parameter. This value is typically chosen such that sufficient overlapping of
adjacent paths occurs and partial melting is avoided.
Other important parameters include the “islands” within each layer, the shifting of scan
directions, and the offset of scan lines from layer to layer as the build progresses. The use of
islands is important because rapid heat generation and cooling occur in the build part as a result
of absorbing a fast-moving optical energy input source (~1000 mm/s).
Heat input into small “islands” at randomized locations within a layer allows the heat to be
evenly introduced spatially, and reduces the chance of deformation and failures caused by uneven
warpage during a build. (Fig. 2.3 illustrates beam compensation used to determine correct beam
paths.) Changing scan directions and offsetting scan lines from layer to layer allows for uniform
distribution of scan line-wise thermal and microstructural anisotropy in the entire 3D space
within a build part. It allows for overall isotropy of material properties.
In addition to the raster strategy, the critical parameters that determine energy input into the powder bed to achieve controlled melting are beam power (laser or electron beam), scan speed, and beam focus (if the hardware of the system allows adjusting it). Overall, these parameters determine the amount of energy incident onto the powder bed per unit time, and directly relate to the heating, melting, and cooling of the material.
Each factor, however, also plays a role in different characteristics of the build. High power at a faster scan rate produces a different thermal history compared with lower power at slower scan rates, even if the total energy input is the same. As a result of the differences in thermal history, the microstructures and properties of parts can differ. In addition, the advective flow of molten material in the melt pool can behave differently when the melt pool travels at different rates, resulting also in differences in the solidification process.
Once the slice information is generated, it is transferred to the interface program that runs on the AM system. The interface program serves as the interface between the build information and the machine controls that carry out the actual build process.
2.4 Machine Setup
Following the software preparation steps in the AM process chain, machine preparation is the next step before a build can start. Machine preparation can roughly be divided into two groups of tasks: machine hardware setup, and process control.
Hardware setup entails cleaning the build chamber after the previous build, loading of powder material, and a routine check of all critical build settings and process controls such as gas pressure, flow rate, oxygen sensors, etc. Details of how each task in this group is carried out can vary from one system to another, but overall, once the machine hardware setup is complete, the AM system is ready to accept the build files (slices generated in the previous step) and start the build.
The tasks in the process control group allow an AM system to accept and process the build files, start the build, interrupt the build at any given time if desired or required, and prepare the machine for finished part extraction and unloading of material. The first task is usually importing and positioning of build parts in the area defined by the build plate. In this step, some capabilities for scaling and basic manipulation of the build part are usually provided to account for changes needed at this stage. Once the physical locations of parts are decided upon, this is followed by a series of steps defining the (1) build process parameters, (2) material parameters, and (3) part parameters.
The build process parameters control machine-level settings that are applied to the entire build. Examples of these parameters include gas injection processes, material recoater motions, and ventilation processes. These parameters define the basic level of machine operation needed to enable a build environment. Material parameters typically control powder dosing behaviors and chamber environment control through inert gas injection. A critical parameter, the "dose factor," determines the amount by which the dose chamber plate rises compared with the drop of the build plate in the build chamber. A 100% dose factor means that the rise of the dose chamber plate is the same as the drop of the build chamber plate. Typically, higher values (150–200%) are desired at the beginning of the process to ensure full coverage of a new layer with powder material. This factor needs to be greater than 100% because, as the powder is melted and fused into a solid layer, the volume occupied by the material in solid form is smaller than in powder form, due to the elimination of spaces between powder particles. Therefore, the surface of melted areas is lower than the rest of the powder bed. As the portion of the powder bed surface occupied by the build part increases, the dose factor also needs to increase accordingly. This factor is typically adjustable at any time during a build to provide adjustments to the build process as needed.
Inert gases such as nitrogen or argon are typically used in AM systems to control the build chamber environment and maintain a low oxygen concentration. The oxygen concentration inside the build chamber is of critical importance not only to build quality, but also to the success of the build process. Typically, the oxygen concentration is maintained below 1–2%. Upper and lower limits of concentration are often set to allow the gas injection and ventilation system to maintain the oxygen content. Above threshold values, AM systems will shut down the build process. In reactive material (such as aluminum, titanium, etc.) powder bed processes, oxygen content control is particularly important for safety reasons, and the inert gas injection typically remains on even after the build process ends.
Part parameters are assigned to each and every component/part to be built. Multiple sets of part parameters can be used in the same build on different parts. These parameters are taken into account in the slicing process that takes place in the previous step of the process chain. As a result, the parameters chosen in the slicer have to correspond to the actual parameters selected on the AM system. Once the part parameters are selected, the build process starts and is controlled and monitored by the AM system itself. Some feedback and in-process monitoring systems are possible. Most current systems are outfitted with basic in-process diagnostic tools such as melt-pool monitoring, where a diagnostic beam coaxial to the process beam monitors the intensity of thermal radiation emitted from the melt pool and performs basic analysis of melt-pool size and the spatial distribution of radiation intensity. A quality index can be extracted from the monitoring results to provide an indication of part quality. Another type of in-process feedback tool available on some current systems is related to the powder re-coating process.
This tool typically takes an optical image of each and every layer and uses the reflectivity information within the optical image to determine whether a full coating on a completed layer has been achieved. In some cases, this value is used to pause the process to prevent build failures due to insufficient powder coating and over-heating.
2.5 Build Removal
The build time of the powder bed process depends on a number of factors. Of them, the height of the entire build has the largest effect on the total time. It can take anywhere from minutes to days. Nevertheless, once the build completes, the laser metal powder bed technology allows for immediate unpacking of the build chamber and retrieval of the finished part, because the process does not maintain the build platform at elevated temperatures (as opposed to laser powder bed for polymers and electron beam-based powder bed processes). The unpacking process typically involves raising the platform in the build chamber and removing loose powder at the same time. The loose powder from one process can be re-used, but has to go through a series of sieving steps to remove contaminants and unwanted particulates. Figure 2.4 shows an example of the process. Once the loose powder is removed from the finished part, the build is ready for post-processing. The finished parts in metal powder bed AM at this point are welded onto the build plate via support structures. Removal of the finished part from the build plate typically involves the use of cutting tools such as band saws, or wire EDM for higher fidelity and flexibility.
Fig. 2.4 SLM part being extracted from build chamber.
2.6 Post-processing
Depending on AM technology used to create the part, the purpose, and requirementsof
the finished part, the post-fabrication processes can vary in a wide range. Itcan require anything
from no post process to several additional steps of processingto change the surface, dimensions,
and/or material properties of the builtpart. Shown in Fig. 2.5 is an example of the unique surface
features on powder bedAM parts where partially melted particles are bound to the surfaces of
built parts.
These features, along with the weld lines resulting from rastering the melt pool in different directions, produce a type of surface finish that is distinct from that of any existing manufacturing process. In metal powder bed AM systems, the minimum required processing is removal of the built part from the build plate and removal of the support structures from the built part. Removal of support structures can be as simple as manually breaking the supports from the surface of the part, but it can also be a process that utilizes CNC tools to not only remove the support, but also to achieve the desired surface finish and/or dimensional tolerance. In addition, metal powder bed AM systems can introduce large amounts of thermal stress into the built part. Under these conditions, the support structure serves as the mechanical "tie-downs" that hold the built part in place and keep it in its intended geometry. If the supports were removed from the part, warpage would occur.
A thermal annealing process can be used to relieve the thermal stresses in the part before it is removed from the build plate, to prevent part warpage upon removal from the build plate.
Fig. 2.5 SEM images of SLM parts showing unique surface features unlike any other current manufacturing processes
Hot Isostatic Pressing (HIP) is a process in which a component is subjected to elevated temperature and isostatic pressure in a pressure vessel. At a temperature of over 50% of the melting point of the material and pressures above 100 MPa (which can be as high as 300 MPa), the voids and porosity inside a powder bed metal AM part can be greatly reduced. Once processed by HIP, the final bulk density can reach more than 95% of the true density of the material. Under these extreme conditions of pressure and temperature, the material in a component being treated not only undergoes localized plastic deformation, but the processes of creep and solid-state diffusion bonding are also allowed to take place. This enables the required shaping and mass transport around internal defects to happen and to "heal" them, increasing the bulk density of the component.
Powder Bed Fusion
The Powder Bed Fusion process includes the following commonly used printing
techniques: Direct metal laser sintering (DMLS), Electron beam melting (EBM), Selective heat
sintering (SHS), Selective laser melting (SLM) and Selective laser sintering (SLS).
Powder bed fusion (PBF) methods use either a laser or an electron beam to melt and fuse material powder together. Electron beam melting (EBM) methods require a vacuum but can be used with metals and alloys in the creation of functional parts. All PBF processes involve the spreading of the powder material over previous layers. There are different mechanisms to enable this, including a roller or a blade. A hopper or a reservoir below or beside the bed provides a fresh material supply. Direct metal laser sintering (DMLS) is the same as SLS, but with the use of metals instead of plastics. The process sinters the powder, layer by layer. Selective heat sintering (SHS) differs from the other processes by using a heated thermal print head to fuse powder material together. As before, layers are added with a roller in between fusion of layers. A platform lowers the model accordingly.
Powder Bed Fusion – Step by Step
1. A layer of material, typically 0.1 mm thick, is spread over the build platform.
2. A laser fuses the first layer, or first cross section, of the model.
3. A new layer of powder is spread across the previous layer using a roller.
4. Further layers or cross sections are fused and added.
5. The process repeats until the entire model is created. Loose, unfused powder remains in
position but is removed during post-processing.
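Since the cycle above repeats once per layer, the number of spread-and-fuse cycles follows directly from the part height and the layer thickness. A minimal sketch, working in whole microns to avoid floating-point drift; the function name is an assumption:

```python
def pbf_build_layers(part_height_um, layer_thickness_um=100):
    """Number of spread-and-fuse cycles needed to reach the part height.
    Each cycle spreads one powder layer (step 3) and fuses one cross
    section (step 2). Uses ceiling division so a partial layer counts."""
    return -(-part_height_um // layer_thickness_um)

print(pbf_build_layers(25_000))      # 250 layers at the typical 0.1 mm (100 um)
print(pbf_build_layers(25_000, 50))  # 500 layers at 0.05 mm
```

This also makes the text's later point about layer thickness concrete: halving the layer thickness doubles the number of cycles, and therefore roughly doubles the build time.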
Process Parameters and Modelling:

Factors: These are the process inputs. An investigator manipulates the factors to cause a change in the output. In quality control, the factors are the process parameters. For example, for a traditional machining process, feed rate, depth of cut and cutting speed can be the factors. The factors that are controllable and are investigated in a DoE are called control factors. Some factors cannot be controlled by the experimenter but may affect the responses; these are called noise factors.
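The idea of control factors and their levels can be illustrated with a minimal full-factorial design, where every combination of factor levels becomes one experimental run. The factor names and level values below are hypothetical examples based on the machining factors mentioned in the text:

```python
from itertools import product

# Hypothetical control factors, each at two levels.
factors = {
    "feed_rate_mm_rev": [0.1, 0.2],
    "depth_of_cut_mm": [0.5, 1.0],
    "cutting_speed_m_min": [100, 150],
}

# Full-factorial design: every combination of factor levels is one run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 2 * 2 * 2 = 8 runs
```

Noise factors, by contrast, would not appear in this design matrix; they vary on their own during the experiment and are handled through replication or blocking.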
Process Parameters Determination for BJ Process
Normally, dimensional accuracy and surface quality are two of the most important
end-product properties. This research chooses the shrinkage rate as the indicator of dimensional
accuracy because the estimated shrinkage rate value is always needed by designers to make
dimensional compensation before printing. Also, surface roughness Ra is chosen as the indicator
of the surface quality in this research. There is very little reported research that considered the
surface roughness as an end-product property in BJ process. Due to the lack of information on
the relationship between process parameters and surface roughness, it is difficult to control the
BJ process properly.
In the entire BJ process, there are many variables and parameters that affect the end-
product properties. These variables and parameters can be categorized into four groups: design
feature (e.g. smallest pore size, strut thickness in lattice structure), material property (e.g. powder
basic geometry parameters, flowability and wettability, binder viscosity and volatility), machine
capability (e.g. printing resolution limit, minimum layer thickness) and process parameters (e.g.
layer thickness, printing saturation, cleaning frequency, curing temperature and time and
sintering curve).
According to previous experimental observation and literature review, it is found that
only a few key parameters dominate the end-product properties. In order to optimize these parameters, this research work considers the control of four key printing process parameters. The other variables are maintained at the same value during experiments. This research is conducted on the ExOne X1-Lab machine system. Definitions or interpretations of the four key control parameters are introduced below.
Layer Thickness
After printing one layer, the print bed lowers by some distance, and this distance is the layer thickness. It can be seen in Table 3 that all the reported AM research has considered the layer thickness as a parameter to optimize. These studies have proven that the layer thickness is an important process parameter for most types of AM. The variation range of the layer thickness is limited by the BJ machine capability. In the X1-Lab system, the minimum layer thickness can be set as 0.05 mm. The layer thickness constrains the resolution and the smallest possible design feature size. Normally, the thinner the layer, the better the end-product properties obtained. However, the build time will increase significantly as a tradeoff.
Printing Saturation
During the printing process, indissolvable powder, binder and air constitute the volume of the print bed. Saturation is the percentage of the air space that is occupied by the binder volume.
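Treating the print-bed volume as powder plus air, the saturation defined above can be computed as the binder volume divided by the air (void) volume. A minimal sketch; the packing fraction, volumes, and function name are assumptions for illustration:

```python
def printing_saturation(binder_volume, powder_packing_fraction, bed_volume):
    """Printing saturation: binder volume as a percentage of the air
    (void) space in the powder bed. The void space is whatever fraction
    of the bed volume the powder does not fill."""
    void_volume = bed_volume * (1.0 - powder_packing_fraction)
    return 100.0 * binder_volume / void_volume

# A bed region of 10 mm^3 with 60% powder packing has 4 mm^3 of air;
# 2 mm^3 of binder then gives 50% saturation.
print(printing_saturation(2.0, 0.60, 10.0))  # 50.0
```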
Heater Power Ratio
After printing one layer, the step motor system moves the print bed under an electrical infrared heater to dry the binder.
Drying Time
After the printing of each layer, the entire print bed needs to be dried. The drying time
defines how long the print bed stays under the heater. During the drying period, the print head
moves into a cleaner reservoir to dissolve the excessive binder materials to avoid blocking.
Based on previous experimental experience, it is found that the drying time is critical, because a shorter drying time is very likely to lead to a print head blockage.
Modelling in Additive Manufacturing:

The AM processes are hampered mainly by their low productivity, relatively poor surface quality and dimensional stability, as well as uncertainty regarding the mechanical properties of the products. Therefore, those manufacturing attributes should be optimized in order for AM to get established in production. For the optimization of any manufacturing process, a deep knowledge of the process itself is required. This knowledge could be gained either by experimentation or by analyzing the physical mechanisms of the process. A model is the abstract representation of a process that establishes a relation between input and output quantities. The real system is simulated by the models, which aim to predict its performance. Models met in the literature can be divided into three major categories, namely analytical, numerical and empirical ones, depending on the development approach. The analytical models are the output of the process's mathematical analysis, taking into consideration the physical laws and the relevant physical processes. The main advantage of such models is that the derived results can be easily transferred to other pertinent processes. The limits of analytical modelling are determined by the underlying assumptions. The empirical models, on the other hand, are the outcome of a number of experiments, whose results are evaluated; one model type is chosen, the coefficients are determined, and then the empirical model can be verified by further tests. The quality of the models' results is limited to the specific conditions of the specific process.
Their major advantage, compared with that of the analytical models, is that they require minimum effort. Numerical models are in between: in essence, they stem from the physics of the process, but a numerical step-by-step method is employed over time in order for useful results to be produced.
PBF Commercial Machine
Metal 3D printing is a burgeoning area at the moment. Many companies are investing in
metal 3D printing systems and the market is growing quickly. There are a number of different
metal 3D printing technologies out there. In this article we’ll give you an overview of the major
technologies and vendors in the space. In this installment we’re looking at Powder Bed Fusion.
Powder Bed Fusion
In Powder Bed Fusion (PBF), a bed of metal powder is sintered together by a laser. A new layer of powder is applied and the process repeats. A more apt name may be powder bed confusion, because there are a number of vendors selling essentially the same technology under different names. LaserCUSING, Selective Laser Melting, Additive Layer Manufacturing and Direct Metal Laser Sintering (DMLS) are all essentially the same thing. Electron Beam Melting (EBM) is a similar process that uses an electron beam instead of a laser.
Powder Bed Fusion is probably the most mature and well researched metal printing technology.
It is being used at scale to produce orthopedics and aerospace parts. It is also being used to make
millions of dental bridges and crowns.
Build volumes are limited but this technology is capable of efficiently producing thousands of
mechanical parts to spec.
Titanium is the most popular commercial material, along with cobalt chromes for dental. There are several titanium grades available, as well as tool steels, aluminium, pure titanium, and Hastelloy, Inconel and other superalloys for high-temperature applications such as rocket engines.
The vendors in the space are moving from lab based systems to manufacturing systems that have
automated powder handling, quality assurance and some post processing on board. Whereas
initially a lot of these machines went to universities, now many aerospace customers are driving
growth. Compared to other metal 3D printing technologies PBF produces more detailed
mechanical parts. PBF parts are being used in satellites, rockets, rocket engines, drones, military
aircraft and civilian aircraft. The fact that there are several vendors in the space and a
comparatively high installed base at universities means that more research is being done on
commercializing and improving PBF than on other technologies. The technology is essentially the only one currently capable of producing high-volume mechanical parts for aerospace, automotive, dental and manufacturing applications.
Material Jetting:

In this introduction to Material Jetting 3D printing, we cover the basic principles of the
technology. After reading this article you will understand the fundamental mechanics of the
Material Jetting process and how these relate to its benefits and limitations.
What is Material Jetting?
Material Jetting (MJ) is an additive manufacturing process that operates in a similar
fashion to 2D printers. In material jetting, a printhead (similar to the printheads used for standard
inkjet printing) dispenses droplets of a photosensitive material that solidifies under ultraviolet
(UV) light, building a part layer-by-layer. The materials used in MJ are thermoset photopolymers
(acrylics) that come in a liquid form.
MJ creates parts of high dimensional accuracy with a very smooth surface finish. Multi-
material printing and a wide range of materials (such as ABS-like, rubber-like and fully
transparent materials) are available in Material Jetting. These characteristics make MJ a very
attractive option for both visual prototypes and tooling manufacturing. Nevertheless, material
jetting has some key limitations that we present in this article.
A variation of the MJ process uses Drop-On-Demand (DOD) printheads to dispense
viscous liquids and create wax-like parts. DOD is used almost exclusively for manufacturing investment casting patterns, though, and for this reason we will not discuss it further here.
The Material Jetting 3D printing process

How does Material Jetting work?

This is how the MJ printing process works:
I. First, the liquid resin is heated to 30–60 °C to achieve optimal viscosity for printing.
II. Then the printhead travels over the build platform and hundreds of tiny droplets of
photopolymer are jetted/deposited to the desired locations.
III. A UV light source that is attached to the printhead cures the deposited material,
solidifying it and creating the first layer of the part.
IV. After the layer is complete, the build platform moves downwards one layer height and the
process repeats until the whole part is complete.
Unlike most other 3D printing technologies, MJ deposits material in a line-wise fashion.
Multiple inkjet printheads are attached to the same carrier side-by-side and deposit material on the whole print surface in a single pass. This allows different heads to dispense different materials, so multi-material printing, full-color printing and dispensing of dissolvable support structures are straightforward and widely used. Support structures are always required in material jetting and need post-processing to be removed.
In Material Jetting, the liquid material is solidified through a process called
photopolymerization. This is the same mechanism that is used in SLA. Similarly to SLA,
material jetted parts have homogeneous mechanical and thermal properties, but unlike SLA they
do not require additional post-curing to achieve their optimal properties, due to the very small
layer height used.
Schematic of a Material Jetting 3D printer

Characteristics of Material Jetting

Printer Parameters
In Material Jetting, almost all process parameters are pre-set by the machine
manufacturer. Even the layer height is linked to each specific material, due to the complex
physics of droplet formation. The typical layer height used in Material Jetting is 16–32 microns.
Material Jetting is considered one of the most accurate 3D printing technologies. MJ
systems have a dimensional accuracy of ± 0.1% with a typical lower limit of ± 0.1 mm
(sometimes as low as ± 0.02 mm). Warping can occur, but it is not as common as in other
technologies, such as FDM or SLS, because printing happens at near room temperature. For this
reason very big parts can be printed with great accuracy. The typical build size is approximately
380 x 250 x 200 mm, while large industrial systems can be as big as 1000 x 800 x 500 mm.
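Using the accuracy figures quoted above (± 0.1% with a typical ± 0.1 mm lower limit), the expected tolerance for a given nominal dimension can be sketched as follows. The function name and the example dimensions are assumptions for illustration:

```python
def mj_expected_tolerance(dimension_mm, accuracy_pct=0.1, floor_mm=0.1):
    """Expected dimensional tolerance for an MJ part: a percentage of
    the nominal dimension, with a typical lower limit (floor)."""
    return max(dimension_mm * accuracy_pct / 100.0, floor_mm)

# A 50 mm feature: 0.1% would be 0.05 mm, so the 0.1 mm floor applies.
print(mj_expected_tolerance(50))   # 0.1
# A 300 mm feature: 0.1% of 300 mm (0.3 mm) dominates the floor.
print(mj_expected_tolerance(300))
```

This is why the percentage figure only matters for larger parts; for small features the fixed lower limit is the controlling tolerance.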
Multi-material & Full-color printing
A key advantage of Material Jetting is the ability to produce accurate multi-material and multi-
color prints that represent end products.
Multi-material and multi-color printing in MJ can be employed at three different levels:
 At the build area level, different parts can be printed in different materials or colors
simultaneously, speeding up the manufacturing process.
 At the part level, different sections of a part can be designated to be printed in different
material or color (for example creating a stiff case with flexible buttons for prototyping
with haptic feedback).
 At the material level, two or more printing resins can be mixed in different ratios before
dispensing, creating a "digital material" with specific physical properties, such as
hardness, stiffness or hue.
To designate a different material or color to particular areas of the part, the model must be
exported as separate STL files. When blending colors or material properties to create a digital
material, the design must be exported as an OBJ or VRML file, because these formats allow the
designation of special properties (such as texture or full color) on a per face or per vertex basis.
Applications of AM:
UNIT III

ROBOTICS IN MANUFACTURING

An Introduction to Robotics and Automation
So, ever wondered what is this ‘Robotics and Automation’? Okay, let’s break it down to
‘Robotics’ and ‘Automation’.
What on earth is Robotics?
‘Robotics’, cool word, eh? Well, yeah, it’s quite cool… not because of its name, but because you make and work with cool stuff here, like robots. Okay, let me ask you one question. What is it that comes to your mind immediately after hearing the word ‘robot’?
Is it something…
 that does your task
 that obeys you (and sometimes disobeys) :)
 that can perform stuff that we humans cannot
Or, is it something like this…
Is this a robot?
Or this…
Is this a robot?
Or these…
Are these robots?
So, many of you have a misconception that robots are big machines like those shown in movies
like Transformers, etc. (now don’t say that you haven’t seen the movie ;)). Sorry to disappoint
you all, but I would like to clarify that no such robot has ever been made practically. Robots
which have a shape like humans are called humanoids. Research and Development is being
carried out in this field. Till now, not even a single perfect humanoid has been made (so forget
about Rajnikanth’s movie Robot).
Now, in the pictures shown above, you can see a few cars, tanks and moving vehicles.
Yes, they are robots. You can also see a fish. For your information, it’s a mechanical (artificial)
fish that has been developed by engineers. Yes, it is a robot. You can also see a few humanoids
(one of them is ASIMO).
But what about the pictures of a gorilla, mosquito, bacteria, humans, etc.? Well, even they are robots, biological robots!
Okay, so now, can you define a robot? Think hard and try to come up with your own unique
answers. By the way, there are many definitions of a robot, the simplest one being as follows:
“Any matter which has at least one degree of freedom is called a robot”.
Confused regarding degree of freedom? Well, you should be ;-). Degree Of Freedom (DOF)
represents the direction in which a robot can make movements. Say, one DOF means that the
robot has only one moveable part and can move only in one direction. Two DOF means that the
robot is free to move its parts in any two different directions. For example, the following robotic
arm has 3 DOF as marked. Please note that all the three degrees of freedom are rotatory, not
translatory.
3 degrees of freedom robotic arm
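The idea that each rotary DOF is one independently controlled joint can be made concrete with a planar forward-kinematics sketch: given the three joint angles of an arm like the one pictured, compute where its tip ends up. This is a minimal illustration; the link lengths, angles, and function name are assumptions:

```python
import math

def forward_kinematics(link_lengths, joint_angles_deg):
    """Planar forward kinematics for a serial arm with rotary joints.
    Each joint angle is measured relative to the previous link; the
    tip position is the vector sum of the rotated links."""
    x = y = 0.0
    theta = 0.0
    for length, angle in zip(link_lengths, joint_angles_deg):
        theta += math.radians(angle)  # accumulate orientation joint by joint
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# A 3-DOF arm with all joints straight reaches the sum of its link lengths.
print(forward_kinematics([1.0, 1.0, 0.5], [0, 0, 0]))  # (2.5, 0.0)
```

With three independent angles as inputs, the arm has three rotary degrees of freedom, matching the description above.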
So, this proves that each and every pic that is shown above is a robot! Convinced? If not, please
comment below so that I can convince you. ;-)
And to your surprise, there are also laws governing robotics! The Three Laws of Robotics:
 A robot may not injure a human being or, through inaction, allow a human being to come to
harm.
 A robot must obey any orders given to it by human beings, except where such orders would
conflict with the First Law.
 A robot must protect its own existence as long as such protection does not conflict with the First
or Second Law
So, by now, can you easily say what on earth Robotics is? What do you think? Well, it’s all up to you the way you want to define robotics. It can be
 study of robots
 making of robots
 playing with robots :)
 practical use of robots
 artificial intelligence (AI)
So, any idea what is this “AI”, or Artificial Intelligence? Well, I ain’t gonna tell ya, google it out
yourself… ;)
What on earth is Automation?
You must have heard the word Automatic. What does it mean? In simple words, an automatic
device means that it can work with minimum or no human intervention.
Similarly, Automation means the act of implementing the control of equipment with advanced technology, usually involving electronic hardware and robotic equipment. The main goal of automation is to reduce human work. It is an outcome of the varied applications of robotics.
Applications of Robotics and Automation
Whether it is outer space, homes, farms, industries, hospitals, defense, etc., robotics and automation play a major role. Be it for research and exploration, education, entertainment or disaster mitigation, automation is very essential. The following examples will clarify a bit more:
 intelligent home i.e. house automation systems are becoming popular day by day
 manipulative arms controlled by humans
 unmanned vehicles for defense and exploration
 automated harvesters and automated dairies
 car manufacturing process
 food processing industry and packaging industry
 robotic suit for nurses
 surgery-assisting robots
 interactive robots that exhibit behaviours and learning ability
 education is integrating technologies in a creative format and robotics involves all key learning
areas such as maths, arts (i.e. materials and design), English, sciences (i.e. chemistry, physics,
mechanics, electronics) and social skills
So, I guess you have got a good idea of what robotics and automation is! We will start with
robotics and then move towards automation.
If you have read this post, do drop in a comment below. Whether good or bad, I will be happy to
see them.
Published on June 4, 2011.
Last updated on June 4, 2011.
Last reviewed on December 18, 2014.
An Overview of Robotics in Manufacturing

The word “robot” comes from the Czech word “robota”, meaning “forced labor” or “drudgery.” Factories started using these machines in the early 1960s to handle some of the more dangerous or mundane tasks that humans didn’t want to do. However, they did more than fill unwanted factory jobs; they completed the work with unprecedented speed and precision. Today, robots perform all kinds of tasks and can be classified according to different criteria such as:
 Type of movement
 Application
 Architecture
 Brand
 Ability to be collaborative
As labor costs rise and competition for low-wage overseas locations increases, more and more
manufacturers are utilizing robot technologies. In fact, 90 percent of all modern robots can be
found in factories.
There are six major types of industrial robots used for various tasks:

1. Articulated
Articulated robots have rotary joints that allow for a full range of motion. They can perform very
precise movements which makes them useful in manufacturing lines where they need to bend in
different directions. Multiple arms can be utilized for greater control or to execute multiple tasks
at once.
The main advantage of articulated robots is their flexibility and dexterity. They can move and
manipulate a variety of objects while performing small tasks with greater speed and consistency
than human workers.
2. Cartesian
Also referred to as rectilinear or gantry robots, Cartesian robots have three linear joints that move in
different axes (X, Y, and Z). The rigid structure of these robots allows for advanced precision
and repeatability. They are often used in assembly lines for performing simple movements such
as picking up and moving bottles.

Due to their relatively simple design and mechanical structure, Cartesian robots are fairly
inexpensive to make. They can be the cheapest solution for simple pick-and-place operations or
other tasks that do not require extensive movement.
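The pick-and-place cycle described above can be sketched in a few lines of Python. The class, method names, and coordinates below are purely illustrative (not any vendor's controller API); a real controller would interpolate each linear axis separately.

```python
class CartesianRobot:
    """Minimal model of a 3-axis gantry robot; all names are illustrative."""

    def __init__(self):
        self.position = (0.0, 0.0, 0.0)  # (x, y, z) in mm
        self.holding = None

    def move_to(self, x, y, z):
        # Each of the three linear joints moves along one axis.
        self.position = (x, y, z)

    def pick(self, item):
        self.holding = item

    def place(self):
        item, self.holding = self.holding, None
        return item

robot = CartesianRobot()
robot.move_to(120.0, 40.0, 10.0)   # move above the bottle
robot.pick("bottle")
robot.move_to(300.0, 40.0, 10.0)   # carry it to the packing station
print(robot.place(), "placed at", robot.position)
```

The same three-method skeleton (move, pick, place) underlies most simple pick-and-place programs, whatever the robot geometry.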

3. Cylindrical

Just as the name suggests, cylindrical robots have a cylindrical work area. They feature a robotic
arm that is connected to a base via a single joint, with one more linear joint connecting the arm’s
links. Basically, these machines feature a single robotic arm that moves up, down, and around a
cylindrical pole.

Cylindrical robots are used for assembly operations, handling, and spot welding. Their function
is similar to that of Cartesian robots, but they may be preferable in some applications due to their
ability to move between required points faster.

4. Spherical

Spherical robots are similar to, but more complex than, Cartesian or cylindrical robots. They
feature a robotic arm connected to a base via a twisting joint, giving the mechanism a spherically
shaped work area. This allows them to perform tasks that require movement in a three
dimensional space.

Spherical robots were some of the first industrial robots to be used in manufacturing for
construction and other dexterous tasks that require advanced control. Nowadays, though, they are
becoming less and less popular as articulated robots are more flexible.
5. SCARA

SCARA is an acronym that stands for Selective Compliance Assembly Robot Arm. These robots
have arms that behave similarly to a human arm in that the joints allow for both vertical and
horizontal movement. However, the “wrist” has limited motion which gives it an advantage for
many types of assembly work such as pick and place, kitting, packaging, and other material
handling applications.

6. Delta

These robots are built from jointed parallelograms connected to a single base, giving them a
spider-like appearance. This type of design is optimal for delicate, precise movements that are
useful in the food, pharmaceutical, and electronic industries.

Delta robots are used extensively for assembly and other applications that require high-speed
repetition. They are able to complete highly repetitive tasks, such as small part assembly, quickly
and perfectly each time. This is beneficial not only from an efficiency standpoint, but in terms of
health and safety as well. Such tasks have been found to cause musculoskeletal disorders in
humans over long periods of time.

Revolutionizing the Industry

Thanks to industrial robots, the manufacturing industry is on the verge of a revolution. As the
technology becomes more intelligent, efficient, and cost-effective, robots are being called on to
handle more complex tasks. But this doesn’t mean jobs are any harder to come by. In part 2,
we’ll discuss how robots have actually created jobs in the manufacturing industry.

Robots are changing the face of manufacturing. They are designed to move materials, as well as
perform a variety of programmed tasks in manufacturing and production settings. They are often
used to perform duties that are dangerous, or unsuitable for human workers, such as repetitious
work that causes boredom and could lead to injuries because of the inattentiveness of the worker.

Industrial robots are able to significantly improve product quality. Applications are performed
with precision and superior repeatability on every job. This level of reliability can be difficult to
accomplish any other way. Robots are regularly being upgraded, but some of the most precise
robots used today have a repeatability of +/-0.02mm. Robots also increase workplace safety.
The main disadvantage of integrating robots into a business is the significant upfront cost, and
ongoing maintenance requirements can add to the overall cost. Yet, the long-term ROI can make
manufacturing robots a sound investment.

Material handling is the most prevalent application of industrial robots with 38% of the robots
being used for this purpose. Material handling robots can automate some of the most tedious,
mind-numbing, and unsafe tasks in a production line. The term material handling takes in a
variety of product movements on the manufacturing floor, such as part selection, transferring of
the part, packing, palletizing, loading and unloading and machine feeding.
With the introduction of collaborative robots into manufacturing with a low price—around
$20,000—the potential to revolutionize production lines is growing. A lighter, mobile plug and
play generation of cobots is arriving on the production floor to work safely alongside human
workers thanks to advances in sensor and vision technology, and computing power. Should an
employee get in their way, the robot will stop, thereby avoiding an accident.

29% of the robots used in manufacturing are welders. This segment mostly includes spot welding
and arc welding. More small manufacturers are introducing welding robots into their fabrication
line. The cost of welding robots is going down, making it easier to automate a welding process.

The robot may be directed by a predetermined program, be guided by machine vision, or follow
a combination of the two methods. The demonstrated benefits of robotic welding have made it
a technology that helps many manufacturers increase precision, repeatability, and output.

Welding robots offer efficiency, reach, speed, load capacity, and enhanced performance for
welding parts of all shapes and sizes; and they support a wide range of intelligent functions such
as ready-to-use robotic vision, and collision avoidance.
Assembly operations encompass 10% of the robots used in manufacturing, including fixing,
press-fitting, inserting, and disassembling. This category of robotic applications has diminished
in recent years, even with the introduction of technologies such as force-torque sensors and
tactile sensors that give the robot a finer sense of touch.

When it comes to putting parts together, assembly robots move faster and with greater precision
than a human, and an off-the-shelf tool can be installed quicker than with special-purpose
equipment. An assembly robot is easily reconfigured and it is a low-risk investment that satisfies
the demands of manufacturing, quality and finance all at the same time.

Assembly robots can be fitted with vision systems and force sensing. The vision system guides
the robot to pick up a component from a conveyor, reducing or eliminating the need for precise
location of the part; and visual servoing allows a robot to rotate or move a piece to make it fit with
another piece. Force sensing helps with part assembly operations like insertion, giving the robot
controller feedback on how well the parts are fitting together or how much force is being applied.
Together, these sensing technologies are making assembly robots even more cost efficient.

Dispensing robots are used for painting, gluing, applying adhesive, and spraying. Only 4% of
the operational robots are doing dispensing. Dispensing robots offer greater control over the
placement of fluids, including arcs, beads, circles and repeated timed dots. The benefits of a
dispensing robot include reduced manufacturing time, consistent accuracy over rough and
uneven surfaces, and improved product quality.

Dispensing robots are available for 1-part and 2-part materials. The XYZ gantry robot system
applies adhesives, sealants and lubricants with precision placement directly onto parts with
repeatable accuracy. They are used for high payload, high-speed applications.

These robots can be used to form in-place gaskets, apply adhesives, and spray coatings.
The primary components of an automated dispensing system are the PC, the robot, and the
dispensing valve components. The robot implements a computer program to dispense fluid from
the valve in a specific pattern onto a workpiece.

The fluid is dispensed through valve system, which may be contact or non-contact. Contact
dispensing requires that the dispensing tip be placed close to the part. On systems that include a
CCD camera, the robot can automatically adjust the dispensing program for each workpiece,
allowing for variations in the workpiece position or orientation. To accomplish this, the software
compares the current workpiece location to within 0.098 in. of a reference location that is stored
as an image file in the program. If the robot detects a difference in the X and Y positions and/or
the angle of rotation of the workpiece, it adjusts the dispensing path to correct for the difference.
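The path correction described above amounts to a 2D rigid transform of the taught dispensing points: rotate by the detected angle, then shift by the detected X/Y offset. A minimal Python sketch, with illustrative function and variable names:

```python
import math

def correct_path(path, dx, dy, theta_deg):
    """Shift and rotate a taught dispensing path to match the workpiece
    as actually found by the camera (dx, dy in mm, theta in degrees)."""
    theta = math.radians(theta_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corrected = []
    for x, y in path:
        # Rotate each point about the origin, then translate by the offset.
        corrected.append((x * cos_t - y * sin_t + dx,
                          x * sin_t + y * cos_t + dy))
    return corrected

# Path taught on a perfectly positioned part:
taught = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
# Camera reports the part shifted 2 mm in X and rotated 90 degrees:
print(correct_path(taught, 2.0, 0.0, 90.0))
```

A real system would also compose this correction with the robot's tool and camera calibration, but the core adjustment is exactly this transform.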

Many manufacturers finish their products through grinding, cutting, deburring, sanding,
polishing or routing. Material removal robots can refine product surfaces, using harsh, abrasive
methods to smooth out steel to precise spot removal for small parts like jewelry. Robotic material
removal can not only perfect a company's product, but also shorten cycle times and raise production
rates, which saves money. By automating material removal processes, manufacturers increase
the safety level in their shops by protecting workers from harmful dust and fumes caused by
material removal applications.
Robot-based inspection systems are on the increase as vision systems become more
powerful and flexible, allowing for flaw detection on parts and guaranteeing correct part assembly.
The vision system finds and inspects a part accurately. Most importantly, integrators have to
ensure very good positional accuracy and communicate it back to the robot quickly.

Robot inspection systems are now measuring components, but as tolerances get tighter and
tighter, these tolerances become harder to satisfy. The robot moves from verifying a part’s
presence to actually measuring it.
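Measuring a part rather than merely verifying its presence reduces to a tolerance comparison: is the measured dimension within the nominal value plus or minus the allowed tolerance? A minimal sketch, with made-up dimensions:

```python
def inspect(measured_mm, nominal_mm, tol_mm):
    """Pass/fail check: is the measured dimension within nominal ± tolerance?"""
    deviation = measured_mm - nominal_mm
    return abs(deviation) <= tol_mm, deviation

# A 25.00 mm bore with a ±0.05 mm tolerance:
ok, dev = inspect(25.03, 25.00, 0.05)
print("PASS" if ok else "FAIL", f"deviation = {dev:+.3f} mm")
# PASS deviation = +0.030 mm
```

As the text notes, the check itself is trivial; the hard part is getting measurements accurate enough that ever-tighter tolerances can be judged reliably.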

Manufacturing robots are more affordable today than ever before. Standard robot models are
now mass-produced, making them more available to meet the ever-increasing demand. These
robots are more straightforward, and more conducive to plug and play installation. They are
designed to communicate more easily with one another, making for easier production assembly
because the resulting systems are more reliable and flexible. Manufacturing robots can handle
more, as they are constructed to offer complexity and toughness in diverse manufacturing
settings. Robots are the future of manufacturing.
Robotics is a domain in artificial intelligence that deals with the study of
creating intelligent and efficient robots.

What are Robots?


Robots are artificial agents that act in a real-world environment.

Objective
Robots are aimed at manipulating objects by perceiving, picking, moving, modifying, or
destroying them, thereby freeing manpower from repetitive functions that a robot can
perform without getting bored, distracted, or exhausted.

What is Robotics?
Robotics is a branch of AI, which is composed of Electrical Engineering,
Mechanical Engineering, and Computer Science for designing, construction,
and application of robots.

Aspects of Robotics
 The robots have mechanical construction, form, or shape designed to
accomplish a particular task.

 They have electrical components which power and control the machinery.

 They contain some level of computer program that determines what, when
and how a robot does something.

Difference between a Robot System and Other AI Programs


Here is the difference between the two −

 AI programs usually operate in computer-simulated worlds; robots operate in the real
physical world.

 The input to an AI program is symbols and rules; the inputs to robots are analog signals,
such as speech waveforms or images.

 AI programs need general-purpose computers to operate on; robots need special hardware
with sensors and effectors.

Robot Locomotion
Locomotion is the mechanism that makes a robot capable of moving in its environment. There
are various types of locomotions −

 Legged
 Wheeled
 Combination of Legged and Wheeled Locomotion
 Tracked slip/skid
Legged Locomotion
 This type of locomotion consumes more power while demonstrating walk, jump, trot,
hop, climb up or down, etc.

 It requires a larger number of motors to accomplish a movement. It is suited for rough as
well as smooth terrain, where an irregular or overly smooth surface would make wheeled
locomotion consume more power. It is a little difficult to implement because of stability
issues.

 It comes in varieties of one, two, four, and six legs. If a robot has multiple legs, then
leg coordination is necessary for locomotion.

The total number of possible gaits (a periodic sequence of lift and release events for each of the
total legs) a robot can travel depends upon the number of its legs.

If a robot has k legs, then the number of possible events N = (2k-1)!.

In case of a two-legged robot (k=2), the number of possible events is N = (2k-1)! = (2*2-1)! =
3! = 6.

Hence there are six possible different events −

 Lifting the Left leg


 Releasing the Left leg
 Lifting the Right leg
 Releasing the Right leg
 Lifting both the legs together
 Releasing both the legs together
In case of k=6 legs, there are 39916800 possible events. Hence the complexity of robots is
directly proportional to the number of legs.
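The gait count given above, N = (2k − 1)!, is easy to evaluate; the short Python check below simply applies the document's formula to a few leg counts (it reproduces the formula as stated, not an independent derivation):

```python
from math import factorial

def gait_events(k):
    """Number of possible lift/release event sequences for k legs,
    using the formula N = (2k - 1)! given in the text."""
    return factorial(2 * k - 1)

for legs in (2, 4, 6):
    print(legs, "legs ->", gait_events(legs), "possible events")
# 2 legs -> 6, 4 legs -> 5040, 6 legs -> 39916800
```

The factorial growth is the point: adding legs makes the coordination problem explode, which is why legged robots are so much harder to control than wheeled ones.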

Wheeled Locomotion
It requires fewer motors to accomplish a movement. It is easier to implement, as there are
fewer stability issues when more wheels are used. It is power-efficient compared to legged
locomotion.

 Standard wheel − Rotates around the wheel axle and around the contact point.

 Castor wheel − Rotates around the wheel axle and the offset steering joint.

 Swedish 45° and Swedish 90° wheels − Omni-wheel, rotates around the contact point,
around the wheel axle, and around the rollers.

 Ball or spherical wheel − Omnidirectional wheel, technically difficult to implement.

Slip/Skid Locomotion
In this type, the vehicles use tracks as in a tank. The robot is steered by moving the tracks with
different speeds in the same or opposite direction. It offers stability because of large contact
area of track and ground.
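Steering by driving the two tracks at different speeds is differential (skid-steer) drive, and its kinematics are simple: the average track speed gives the forward velocity and the speed difference divided by the track width gives the turn rate. A minimal sketch (track speeds and width are assumed example values):

```python
def skid_steer(v_left, v_right, track_width):
    """Forward speed and turn rate of a tracked robot from its track speeds.
    Equal speeds drive straight; opposite speeds spin in place."""
    v = (v_left + v_right) / 2.0              # forward velocity (m/s)
    omega = (v_right - v_left) / track_width  # yaw rate (rad/s)
    return v, omega

print(skid_steer(0.5, 0.5, 0.4))   # straight ahead: (0.5, 0.0)
print(skid_steer(-0.5, 0.5, 0.4))  # spin in place: (0.0, 2.5)
```

The same two equations govern differential-drive wheeled robots; tracks just trade some efficiency (skidding during turns) for the stability the text describes.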
Components of a Robot
Robots are constructed with the following −

 Power Supply − The robots are powered by batteries, solar power, hydraulic, or
pneumatic power sources.

 Actuators − They convert energy into movement.

 Electric motors (AC/DC) − They are required for rotational movement.

 Pneumatic Air Muscles − They contract almost 40% when air is sucked in them.

 Muscle Wires − They contract by 5% when electric current is passed through them.

 Piezo Motors and Ultrasonic Motors − Best for industrial robots.

 Sensors − They provide real-time information about the task environment. Robots are
equipped with vision sensors to compute the depth of the environment. A tactile sensor
imitates the mechanical properties of the touch receptors of human fingertips.

Computer Vision
This is a technology of AI with which the robots can see. The computer vision plays vital role in
the domains of safety, security, health, access, and entertainment.

Computer vision automatically extracts, analyzes, and comprehends useful information from a
single image or an array of images. This process involves development of algorithms to
accomplish automatic visual comprehension.
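As a toy illustration of extracting information from an image, the pure-Python snippet below thresholds a tiny grayscale array to locate a bright region and report its bounding box, the first step of many inspection and detection pipelines. All pixel values are made up.

```python
# A 4x4 grayscale image (values 0-255) with a bright 2x2 blob in the middle:
image = [
    [10,  12,  11,  13],
    [14, 220, 230,  12],
    [11, 225, 218,  10],
    [13,  12,  11,  14],
]

THRESHOLD = 128  # pixels above this count as "bright"
bright = [(r, c) for r, row in enumerate(image)
                 for c, v in enumerate(row) if v > THRESHOLD]

# Bounding box of the detected blob:
rows = [r for r, _ in bright]
cols = [c for _, c in bright]
print("blob at rows", min(rows), "-", max(rows),
      "cols", min(cols), "-", max(cols))
# blob at rows 1 - 2 cols 1 - 2
```

Real computer-vision systems replace the threshold with learned or engineered feature detectors, but the overall flow (pixels in, structured description out) is the same.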

Hardware of Computer Vision System


This involves −

 Power supply
 Image acquisition device such as camera
 A processor
 A software
 A display device for monitoring the system
 Accessories such as camera stands, cables, and connectors
Tasks of Computer Vision
 OCR − Optical Character Recognition, software that converts scanned documents into
editable text; it typically accompanies a scanner.

 Face Detection − Many state-of-the-art cameras come with this feature, which enables
the camera to detect a face and take the picture at the perfect expression. It is also used
to grant a user access to software on a correct match.

 Object Recognition − Object-recognition systems are installed in supermarkets, in
cameras, and in high-end cars from makers such as BMW, GM, and Volvo.

 Estimating Position − Estimating the position of an object with respect to the camera,
as in locating the position of a tumor in a human body.

Application Domains of Computer Vision

 Agriculture
 Autonomous vehicles
 Biometrics
 Character recognition
 Forensics, security, and surveillance
 Industrial quality inspection
 Face recognition
 Gesture analysis
 Geoscience
 Medical imagery
 Pollution monitoring
 Process control
 Remote sensing
 Robotics
 Transport
Applications of Robotics
Robotics has been instrumental in various domains such as −

 Industries − Robots are used for handling material, cutting, welding, color coating,
drilling, polishing, etc.

 Military − Autonomous robots can reach inaccessible and hazardous zones during war.
A robot named Daksh, developed by Defense Research and Development Organization
(DRDO), is in function to destroy life-threatening objects safely.

 Medicine − Robots are capable of carrying out hundreds of clinical tests
simultaneously, rehabilitating permanently disabled people, and performing complex
surgeries such as removing brain tumors.

 Exploration − The robot rock climbers used for space exploration, underwater drones
used for ocean exploration are to name a few.

 Entertainment − Disney’s engineers have created hundreds of robots for movie making.
Manufacturing Applications for Robotics in 2018
1. Warehouse Logistics
Historically, most robotics applications were limited to assembly-line operations. As industrial
robots become more sophisticated and capable of assuming more responsibility, manufacturers
have begun exploring their use in the warehouse.

Automated robots navigate large storerooms and complex floor plans much more quickly, safely
and efficiently than their human counterparts, so warehouse-bound robots have the potential to
cut long-term costs significantly.

2. Aerospace
The aerospace industry is also exploring new and advanced applications in industrial robotics.
Although human researchers and development teams are necessary to conceptualize
breakthroughs, visualize new blueprints, and verify operability, the industry has delegated much
of the grunt work to industrial robots.

Boeing, which began operations in 1916 as the Boeing Airplane Company, is focused on
automation that will "improve employee safety by removing ergonomic risks," according to
spokesperson Nate Hulings. The company was among the first to deploy robots in the aerospace
sector.

3. Automotive Manufacturing
Top automotive manufacturers have used robotics for decades, and the trend is increasing as
robots become more affordable and efficient. North America received more than 20,000 new
units between 2011 and 2013, and a sharp uptick occurred at the beginning of 2014. Capital
investments have also experienced dramatic increases in recent years.
One reason behind the increasing acceptance of robots is their versatility. Factories can easily
modify or upgrade new designs to accommodate any tools or hardware they need for the job,
including air compressors. As pneumatic and air-powered equipment is necessary for many
stages of automotive manufacturing, it makes sense this industry is among the first to take robots
away from the assembly line and into roles of greater scope and accountability.

4. Woodworking and Construction


Lumberyards and construction sites can host dozens of individual workers at any given time.
Between contractors and subcontractors, skilled trade workers and general laborers, these are
hectic and fast-paced work environments. To reduce some of this organized chaos, and to help
cut costs, companies have moved much of the work they used to do onsite to a workshop or
manufacturing facility.

Robots easily build pallets, cut lumber to size and plane wood according to exact requirements.
Manufactured homes, many of which next-gen robots assemble at least in part, are growing in
popularity. In some cases, builders even bring robots to the construction site for simple
framing jobs.

5. Home and Office Furniture


Consumers often save money on new furniture by purchasing it in unassembled kits. Some
pieces are easier to assemble than others, and some complex designs require special tools or
hardware — which may or may not come with the purchase.

But many manufacturers, including Steelcase — a manufacturer of office furniture — are


capable of handling most of this work before shipping. They've recently implemented multiple
computer-controlled collaborative robots (or 'cobots') in their production facility in Grand
Rapids, Mich.

Instead of taking away the jobs of current workers, they work alongside them to handle the more
arduous or mundane tasks — and it results in a better product in the end.

Working Alongside Robots, Cobots and Next-Gen Technology


Whether you embrace the idea or stand adamantly against it, there's no denying the widespread
integration of robots is already happening. Experts in the field agree that, at least for now, robots
are stuck with the monotonous, mundane jobs skilled human workers would rather pass up.

Embracing the movement and working in tandem with technology is necessary to uncover the
true potential of next-gen robotics and to determine the exact role humans will play in automated
manufacturing.
UNIT IV

INTERNET OF THINGS

The Internet of things (IoT) is the network of devices such as vehicles and home appliances
that contain electronics, software, sensors, actuators, and connectivity which allows these things
to connect, interact and exchange data.
The IoT involves extending Internet connectivity beyond standard devices, such
as desktops, laptops, smartphones and tablets, to any range of traditionally dumb or non-internet-
enabled physical devices and everyday objects. Embedded with technology, these devices can
communicate and interact over the Internet, and they can be remotely monitored and controlled.
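The "exchange data" part of this definition is often just structured messages. Below is a hypothetical Python sketch of one sensor sample serialized as JSON, the format most IoT platforms actually move around; the field names are assumptions for illustration, not any standard.

```python
import json
import time

def sensor_reading(device_id, temperature_c):
    """Package one sensor sample as the kind of JSON message an IoT
    device might publish to a broker (field names are illustrative)."""
    return json.dumps({
        "device_id": device_id,
        "type": "temperature",
        "value_c": temperature_c,
        "timestamp": int(time.time()),
    })

msg = sensor_reading("thermostat-42", 21.5)
print(msg)

# The receiving service decodes it back into structured data:
data = json.loads(msg)
print(data["device_id"], data["value_c"])
```

In a deployed system the message would travel over a protocol such as MQTT or HTTP, but the device-side work is essentially this serialize-and-send step.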

History
The definition of the Internet of things has evolved due to convergence of multiple technologies,
real-time analytics, machine learning, commodity sensors, and embedded systems.[5] Traditional
fields of embedded systems, wireless sensor networks, control
systems, automation (including home and building automation), and others all contribute to
enabling the Internet of things.[6]
The concept of a network of smart devices was discussed as early as 1982, with a modified Coke
vending machine at Carnegie Mellon University becoming the first Internet-connected
appliance,[7] able to report its inventory and whether newly loaded drinks were cold or
not.[8] Mark Weiser's 1991 paper on ubiquitous computing, "The Computer of the 21st Century",
as well as academic venues such as UbiComp and PerCom produced the contemporary vision of
the IoT.[9][10] In 1994, Reza Raji described the concept in IEEE Spectrum as "[moving] small
packets of data to a large set of nodes, so as to integrate and automate everything from home
appliances to entire factories".[11] Between 1993 and 1997, several companies proposed solutions
like Microsoft's at Work or Novell's NEST. The field gained momentum when Bill
Joy envisioned Device to Device (D2D) communication as a part of his "Six Webs" framework,
presented at the World Economic Forum at Davos in 1999. [12]
The term "Internet of things" was likely coined by Kevin Ashton of Procter & Gamble,
later MIT's Auto-ID Center, in 1999,[13] though he prefers the phrase "Internet for things".[14] At
that point, he viewed Radio-frequency identification (RFID) as essential to the Internet of
things,[15] which would allow computers to manage all individual things. [16][17][18]
A research article mentioning the Internet of Things was submitted to the conference for Nordic
Researchers in Norway, in June 2002,[19] which was preceded by an article published in Finnish
in January 2002.[20] The implementation described there was developed by Kary Främling and
his team at Helsinki University of Technology and more closely matches the modern one, i.e. an
information system infrastructure for implementing smart, connected objects. [21]
Defining the Internet of things as "simply the point in time when more 'things or objects' were
connected to the Internet than people", Cisco Systems estimated that the IoT was "born" between
2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010. [22]
Applications

A Nest learning thermostat reporting on energy usage and local weather.

A Ring doorbell connected to the Internet

An August Home smart lock connected to the Internet


The extensive set of applications for IoT devices[23] is often divided into consumer, commercial,
industrial, and infrastructure spaces.[24][25]
Consumer applications
A growing portion of IoT devices are created for consumer use, including connected
vehicles, home automation, wearable technology (as part of Internet of Wearable Things
(IoWT)[26]), connected health, and appliances with remote monitoring capabilities. [27]
Smart home
IoT devices are a part of the larger concept of home automation, which can include lighting,
heating and air conditioning, media and security systems.[28][29] Long term benefits could include
energy savings by automatically ensuring lights and electronics are turned off.
A smart home or automated home could be based on a platform or hubs that control smart
devices and appliances.[30] For instance, using Apple's HomeKit, manufacturers can get their
home products and accessories be controlled by an application in iOS devices such as
the iPhone and the Apple Watch.[31][32] This could be a dedicated app or iOS native applications
such as Siri.[33] This can be demonstrated in the case of Lenovo's Smart Home Essentials, which
is a line of smart home devices that are controlled through Apple's Home app or Siri without the
need for a Wi-Fi bridge.[33] There are also dedicated smart home hubs that are offered as
standalone platforms to connect different smart home products and these include the Amazon
Echo, Google Home, Apple's HomePod, and Samsung's SmartThings Hub.[34]
Elder care
One key application of smart home is to provide assistance for those with disabilities and elderly
individuals. These home systems use assistive technology to accommodate an owner's specific
disabilities.[35] Voice control can assist users with sight and mobility limitations while alert
systems can be connected directly to cochlear implants worn by hearing impaired users.[36] They
can also be equipped with additional safety features. These features can include sensors that
monitor for medical emergencies such as falls or seizures. [37] Smart home technology applied in
this way can provide users with more freedom and a higher quality of life. [35]
The term "Enterprise IoT" refers to devices used in business and corporate settings. By 2019, it is
estimated that the EIoT will account for 9.1 billion devices. [24]
Commercial applications
Medical and healthcare
The Internet of Medical Things (also called the internet of health things) is an application of
the IoT for medical and health related purposes, data collection and analysis for research, and
monitoring.[38][39][40][41][42] This 'Smart Healthcare',[43] as it can also be called, led to the creation
of a digitized healthcare system, connecting available medical resources and healthcare
services.[44]
IoT devices can be used to enable remote health monitoring and emergency notification systems.
These health monitoring devices can range from blood pressure and heart rate monitors to
advanced devices capable of monitoring specialized implants, such as pacemakers, Fitbit
electronic wristbands, or advanced hearing aids. [45] Some hospitals have begun implementing
"smart beds" that can detect when they are occupied and when a patient is attempting to get up. A
smart bed can also adjust itself to ensure appropriate pressure and support are applied to the patient without
the manual interaction of nurses.[38] A 2015 Goldman Sachs report indicated that healthcare IoT
devices "can save the United States more than $300 billion in annual healthcare expenditures by
increasing revenue and decreasing cost."[46][47] Moreover, the use of mobile devices to support
medical follow-up led to the creation of 'm-health', used "to analyze, capture, transmit and store
health statistics from multiple resources, including sensors and other biomedical acquisition
systems".[48]
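Remote health monitoring of the kind described above ultimately feeds each reading through simple alerting rules. A hypothetical sketch of such a rule for heart-rate samples follows; the thresholds are illustrative placeholders, not clinical guidance.

```python
def check_heart_rate(bpm, low=50, high=120):
    """Classify one heart-rate sample from a remote monitor.
    Thresholds are illustrative, not clinical guidance."""
    if bpm < low:
        return "ALERT: bradycardia suspected"
    if bpm > high:
        return "ALERT: tachycardia suspected"
    return "ok"

for sample in (72, 45, 130):
    print(sample, "->", check_heart_rate(sample))
```

Production systems add trend analysis and de-bouncing so a single noisy sample does not page a nurse, but threshold rules like this remain the backbone of emergency notification.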
Specialized sensors can also be equipped within living spaces to monitor the health and general
well-being of senior citizens, while also ensuring that proper treatment is being administered and
assisting people regain lost mobility via therapy as well. [49] These sensors create a network of
intelligent sensors that are able to collect, process, transfer and analyse valuable information in
different environments, such as connecting in-home monitoring devices to hospital-based
systems.[43] Other consumer devices to encourage healthy living, such as connected scales
or wearable heart monitors, are also a possibility with the IoT. [50] End-to-end health monitoring
IoT platforms are also available for antenatal and chronic patients, helping one manage health
vitals and recurring medication requirements. [51]
Advances in plastic and fabric electronics fabrication methods have enabled ultra-low-cost,
use-and-throw IoMT sensors. These sensors, along with the required RFID electronics, can be
fabricated on paper or e-textiles for wirelessly powered disposable sensing devices.[52]
Applications have been established for point-of-care medical diagnostics, where portability and
low system complexity are essential.[53]
As of 2018 IoMT was not only being applied in the clinical laboratory industry,[40] but also in the
healthcare and health insurance industries. IoMT in the healthcare industry is now permitting
doctors, patients and others involved (i.e. guardians of patients, nurses, families, etc.) to be part
of a system, where patient records are saved in a database, allowing doctors and the rest of the
medical staff to have access to the patient's information. [44] Moreover, IoT-based systems are
patient-centered, which involves being flexible to the patient's medical conditions. [44] IoMT in
the insurance industry provides access to better and new types of dynamic information. This
includes sensor-based solutions such as biosensors, wearables, connected health devices and
mobile apps to track customer behaviour. This can lead to more accurate underwriting and new
pricing models.[54]
The application of the IOT in healthcare plays a fundamental role in managing chronic diseases
and in disease prevention and control. Remote monitoring is made possible through the
connection of powerful wireless solutions. The connectivity enables health practitioners to
capture patients’ data and apply complex algorithms in health data analysis.[55]
Transportation

Digital variable speed-limit sign.


The IoT can assist in the integration of communications, control, and information processing
across various transportation systems. Application of the IoT extends to all aspects of
transportation systems (i.e. the vehicle, [56] the infrastructure, and the driver or user). Dynamic
interaction between these components of a transport system enables inter- and intra-vehicular
communication,[57] smart traffic control, smart parking, electronic toll collection
systems, logistics and fleet management, vehicle control, and safety and road assistance.[45][58] In
logistics and fleet management, for example, an IoT platform can continuously monitor the
location and condition of cargo and assets via wireless sensors and send specific alerts when
management exceptions occur (delays, damage, theft, etc.). This is only possible with the
IoT and its seamless connectivity among devices. Sensors such as GPS, humidity, and temperature
sensors send data to the IoT platform, where the data is analyzed and then forwarded to users. This
way, users can track the real-time status of vehicles and make appropriate decisions. Combined
with machine learning, such systems can also help reduce traffic accidents by issuing
drowsiness alerts to drivers and enabling self-driving cars.
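The exception-alerting pattern described above can be sketched in a few lines. The threshold values, field names, and vehicle identifier below are hypothetical, not taken from any real fleet-management platform:

```python
# Minimal sketch of exception-based alerting for fleet telemetry.
# Thresholds and field names are illustrative only.
THRESHOLDS = {"temperature_c": (2.0, 8.0),   # e.g. cold-chain cargo limits
              "humidity_pct": (10.0, 60.0)}

def check_reading(reading):
    """Return a list of alert strings for values outside their allowed range."""
    alerts = []
    for field, (low, high) in THRESHOLDS.items():
        value = reading.get(field)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{reading['vehicle_id']}: {field}={value} outside [{low}, {high}]")
    return alerts

reading = {"vehicle_id": "truck-17", "temperature_c": 11.5, "humidity_pct": 40.0}
print(check_reading(reading))
```

In a real deployment the alert would of course be pushed to the platform's notification service rather than printed, but the rule itself is this simple: compare each sensor value against its allowed band and report exceptions.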
Building and home automation
IoT devices can be used to monitor and control the mechanical, electrical and electronic systems
used in various types of buildings (e.g., public and private, industrial, institutions, or
residential)[45] in home automation and building automation systems. In this context, three main
areas are being covered in literature:[59]

 The integration of the Internet with building energy management systems in order to create
energy-efficient and IoT-driven "smart buildings".[59]
 The possible means of real-time monitoring for reducing energy consumption[60] and
monitoring occupant behaviors.[59]
 The integration of smart devices in the built environment and how they might be used in
future applications.[59]
Industrial applications
Main article: Industrial Internet of Things

Manufacturing
The IoT can realize the seamless integration of various manufacturing devices equipped with
sensing, identification, processing, communication, actuation, and networking capabilities. Based
on such a highly integrated smart cyber-physical space, it opens the door to creating whole new
business and market opportunities for manufacturing. [61] Network control and management
of manufacturing equipment, asset and situation management, or manufacturing process
control bring the IoT within the realm of industrial applications and smart manufacturing as
well.[62] The IoT intelligent systems enable rapid manufacturing of new products, dynamic
response to product demands, and real-time optimization of manufacturing production
and supply chain networks, by networking machinery, sensors and control systems together. [45]
Digital control systems to automate process controls, operator tools and service information
systems to optimize plant safety and security are within the purview of the IoT. [63] But it also
extends itself to asset management via predictive maintenance, statistical evaluation, and
measurements to maximize reliability.[64] Smart industrial management systems can also be
integrated with the Smart Grid, thereby enabling real-time energy optimization. Measurements,
automated controls, plant optimization, health and safety management, and other functions are
provided by a large number of networked sensors. [45]
The term industrial Internet of things (IIoT) is often encountered in the manufacturing industries,
referring to the industrial subset of the IoT. IIoT in manufacturing could generate so much
business value that it is expected to lead to the Fourth Industrial Revolution, the so-
called Industry 4.0. It is estimated that in the future, successful companies will be able to
increase their revenue through the Internet of things by creating new business models, improving
productivity, exploiting analytics for innovation, and transforming the workforce. [65] The potential
for growth from implementing IIoT may generate $12 trillion of global GDP by 2030. [65]

Design architecture of cyber-physical systems-enabled manufacturing system[66]


Industrial big data analytics will play a vital role in manufacturing asset predictive maintenance,
although that is not the only capability of industrial big data. [67][68] Cyber-physical systems (CPS)
are the core technology of industrial big data, serving as an interface between humans and the
cyber world. Cyber-physical systems can be designed following the 5C (connection,
conversion, cyber, cognition, configuration) architecture, [66] which transforms the collected
data into actionable information and eventually feeds back to the physical assets to optimize
processes.
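As an illustration only, the flow of data through the 5C layers can be caricatured as a chain of functions. The stage names follow the architecture, but the processing inside each stage is invented for the example:

```python
# Illustrative sketch of data flowing through the 5C layers; the vibration
# figures and thresholds are made up for the example.
def connection():            # 1. Connection: acquire a raw sensor sample
    return {"vibration_mm_s": 7.2}

def conversion(raw):         # 2. Conversion: raw signal -> health feature
    return {"severity": raw["vibration_mm_s"] / 10.0}

def cyber(feature, fleet):   # 3. Cyber: compare against peer machines
    return feature["severity"] - sum(fleet) / len(fleet)

def cognition(deviation):    # 4. Cognition: diagnose and prioritise
    return "schedule maintenance" if deviation > 0.2 else "healthy"

def configuration(decision): # 5. Configuration: feed the decision back to the asset
    return {"action": decision}

fleet_severities = [0.3, 0.35, 0.4]   # severity scores of peer machines
decision = cognition(cyber(conversion(connection()), fleet_severities))
print(configuration(decision))
```

The point of the sketch is the direction of flow: raw data enters at the connection level and an actionable configuration change comes out the other end.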
An IoT-enabled intelligent system of this kind was proposed in 2001 and later demonstrated in
2014 by the National Science Foundation Industry/University Collaborative Research Center
for Intelligent Maintenance Systems (IMS) at the University of Cincinnati on a bandsaw machine
at IMTS 2014 in Chicago.[69][70][71] Bandsaw machines are not necessarily expensive, but
bandsaw belt expenses are enormous, since the belts degrade much faster. Without sensing
and intelligent analytics, the point at which a band saw belt will actually break can only be
estimated from experience. The developed prognostics system is able to recognize and monitor the
degradation of band saw belts even as conditions change, advising users of the best
time to replace the belt. This significantly improves user experience and operator safety and
ultimately saves on costs.[71]
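A prognostic of this sort can be approximated, in a deliberately simplified form, by smoothing a noisy wear indicator and flagging the first time the smoothed value crosses a replacement threshold. The readings, window size, and threshold below are made up:

```python
# Hypothetical prognostic sketch: smooth a noisy wear indicator with a moving
# average and flag the first cycle at which it crosses a replacement threshold.
def replace_at(wear_readings, window=3, threshold=0.8):
    """Return the index of the first smoothed reading >= threshold, or None."""
    for i in range(window - 1, len(wear_readings)):
        smoothed = sum(wear_readings[i - window + 1:i + 1]) / window
        if smoothed >= threshold:
            return i
    return None

# Wear indicator (0 = new belt, 1 = failed), one value per duty cycle.
wear = [0.1, 0.2, 0.3, 0.5, 0.7, 0.85, 0.9]
print(replace_at(wear))
```

Real prognostics systems use far richer models than a moving average, but the contract is the same: turn a degradation signal into a concrete "replace now" advisory before failure.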
Agriculture
There are numerous IoT applications in farming[72] such as collecting data on temperature,
rainfall, humidity, wind speed, pest infestation, and soil content. This data can be used to
automate farming techniques, make informed decisions to improve quality and quantity, minimize
risk and waste, and reduce the effort required to manage crops. For example, farmers can now
monitor soil temperature and moisture from afar, and even apply IoT-acquired data to precision
fertilization programs.[73]
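A minimal sketch of such an automated decision, with illustrative thresholds rather than values from any real precision-agriculture product:

```python
# Hedged sketch of a precision-irrigation rule: thresholds and the scaling
# factor are invented for illustration.
def irrigation_minutes(soil_moisture_pct, rain_forecast_mm):
    """Decide watering time from a soil-moisture reading and the rain forecast."""
    if rain_forecast_mm >= 5.0:      # significant rain expected: skip irrigation
        return 0
    if soil_moisture_pct >= 35.0:    # soil already wet enough
        return 0
    # drier soil gets proportionally more water, capped at 30 minutes
    return min(30, round((35.0 - soil_moisture_pct) * 2))

print(irrigation_minutes(20.0, 0.0))   # dry soil, no rain forecast
```

Even this trivial rule shows the economic case: the sensor data (soil moisture, forecast) directly replaces a fixed watering schedule, reducing both water use and crop risk.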
In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools
using the Microsoft Azure application suite for IoT technologies related to water management.
Developed in part by researchers from Kindai University, the water pump mechanisms
use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of
fish, and deduce the effectiveness of water flow from the data the fish provide. The
specific computer programs used in the process fall under the Azure Machine Learning and the
Azure IoT Hub platforms.[74]
Infrastructure applications
Monitoring and controlling operations of sustainable urban and rural infrastructures like bridges,
railway tracks and on- and offshore wind-farms is a key application of the IoT. [63] The IoT
infrastructure can be used for monitoring any events or changes in structural conditions that can
compromise safety and increase risk. The IoT can benefit the construction industry through cost
savings, time reduction, a better-quality workday, paperless workflows, and increased productivity.
It can help in making faster decisions and saving money through real-time data analytics. It can also
be used for scheduling repair and maintenance activities in an efficient manner, by coordinating
tasks between different service providers and users of these facilities. [45] IoT devices can also be
used to control critical infrastructure like bridges to provide access to ships. Usage of IoT
devices for monitoring and operating infrastructure is likely to improve incident management
and emergency response coordination, and quality of service, up-times and reduce costs of
operation in all infrastructure related areas. [75] Even areas such as waste management can
benefit[76] from automation and optimization that could be brought in by the IoT. [77]
Metropolitan scale deployments
There are several planned or ongoing large-scale deployments of the IoT, to enable better
management of cities and systems. For example, Songdo, South Korea, the first of its kind fully
equipped and wired smart city, is gradually being built, with approximately 70 percent of the
business district completed as of June 2018. Much of the city is planned to be wired and
automated, with little or no human intervention.[78]
Another application is an ongoing project in Santander, Spain. For this deployment,
two approaches have been adopted. This city of 180,000 inhabitants has already seen 18,000
downloads of its city smartphone app. The app is connected to 10,000 sensors that enable
services like parking search, environmental monitoring, digital city agenda, and more. City
context information is used in this deployment so as to benefit merchants through a spark deals
mechanism based on city behavior that aims at maximizing the impact of each notification. [79]
Other examples of large-scale deployments underway include the Sino-Singapore Guangzhou
Knowledge City;[80] work on improving air and water quality, reducing noise pollution, and
increasing transportation efficiency in San Jose, California;[81] and smart traffic management in
western Singapore.[82] French company, Sigfox, commenced building an Ultra
Narrowband wireless data network in the San Francisco Bay Area in 2014, the first business to
achieve such a deployment in the U.S.[83][84] It subsequently announced it would set up a total of
4000 base stations to cover a total of 30 cities in the U.S. by the end of 2016, making it the
largest IoT network coverage provider in the country thus far.[85][86] Cisco also participates in
smart cities projects. Cisco has started deploying technologies for Smart Wi-Fi, Smart Safety &
Security, Smart Lighting, Smart Parking, Smart Transports, Smart Bus Stops, Smart Kiosks,
Remote Expert for Government Services (REGS) and Smart Education in the five km area in the
city of Vijaywada.[87]
Another example of a large deployment is the one completed by New York Waterways in New
York City to connect all the city's vessels and be able to monitor them live 24/7. The network
was designed and engineered by Fluidmesh Networks, a Chicago-based company developing
wireless networks for critical applications. The NYWW network is currently providing coverage
on the Hudson River, East River, and Upper New York Bay. With the wireless network in place,
NY Waterway is able to take control of its fleet and passengers in a way that was not previously
possible. New applications can include security, energy and fleet management, digital signage,
public Wi-Fi, paperless ticketing and others.[88]
Energy management
Significant numbers of energy-consuming devices (e.g. switches, power outlets, bulbs,
televisions, etc.) already integrate Internet connectivity, which can allow them to communicate
with utilities to balance power generation and energy usage[89] and optimize energy consumption
as a whole.[45] These devices allow for remote control by users, or central management via
a cloud-based interface, and enable functions like scheduling (e.g., remotely powering on or off
heating systems, controlling ovens, changing lighting conditions etc.).[45] The smart grid is a
utility-side IoT application; systems gather and act on energy and power-related information to
improve the efficiency of the production and distribution of electricity. [89] Using advanced
metering infrastructure (AMI) Internet-connected devices, electric utilities not only collect data
from end-users, but also manage distribution automation devices like transformers. [45]
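A toy sketch of the load-scheduling idea: a home controller assigns deferrable appliances to the cheap hours announced by the utility. The price figures, price limit, and appliance names are invented for the example:

```python
# Illustrative sketch of price-signal-driven scheduling: defer flexible loads
# to the cheapest hours at or under a price limit. All values are made up.
def plan(loads, hourly_prices, price_limit=0.20):
    """Assign each deferrable load to a distinct cheap hour, cheapest first."""
    schedule = {}
    cheap_hours = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    for load, hour in zip(loads, cheap_hours):
        if hourly_prices[hour] <= price_limit:
            schedule[load] = hour       # run this load in this hour
    return schedule

prices = [0.30, 0.18, 0.12, 0.25]       # $/kWh for the next four hours
print(plan(["water_heater", "ev_charger"], prices))
```

This is the consumer-side half of the smart grid story; the utility-side half is the AMI infrastructure that publishes the prices and reads back the resulting consumption.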
Environmental monitoring
Environmental monitoring applications of the IoT typically use sensors to assist in environmental
protection[90] by monitoring air or water quality,[91] atmospheric or soil conditions,[92] and can
even include areas like monitoring the movements of wildlife and their habitats.[93] Development
of resource-constrained devices connected to the Internet also means that other applications
like earthquake or tsunami early-warning systems can also be used by emergency services to
provide more effective aid. IoT devices in this application typically span a large geographic area
and can also be mobile.[45] It has been argued that the standardization IoT brings to wireless
sensing will revolutionize this area. [94]
Living Lab
Another example of integrating the IoT is the Living Lab, which combines research and
innovation processes, established within a public-private-people partnership.[95] There are
currently 320 Living Labs that use the IoT to collaborate and share knowledge between
stakeholders to co-create innovative and technological products. For companies to implement
and develop IoT services for smart cities, they need incentives. Governments play
key roles in smart city projects, as changes in policy help cities implement the IoT,
which provides effectiveness, efficiency, and accuracy in the use of resources. For
instance, a government may provide tax incentives and cheap rent, improve public transport, and
offer an environment where start-up companies, creative industries, and multinationals can co-
create, share common infrastructure and labor markets, and take advantage of locally embedded
technologies, production processes, and transaction costs.[95] The relationship between
technology developers and the governments who manage a city's assets is key to providing users
open access to resources in an efficient way.
Trends and characteristics

Technology roadmap: Internet of things.


The IoT's most significant trend in recent years is the explosive growth of devices connected to
and controlled by the Internet.[96] The wide range of applications for IoT technology means that
the specifics can be very different from one device to the next, but there are basic characteristics
shared by most.
The IoT creates opportunities for more direct integration of the physical world into computer-
based systems, resulting in efficiency improvements, economic benefits, and reduced human
exertion.[97][98][99][100]
The number of IoT devices increased 31% year-over-year to 8.4 billion in the year 2017[101] and
it is estimated that there will be 30 billion devices by 2020. [96] The global market value of IoT is
projected to reach $7.1 trillion by 2020.[102]
Intelligence
Ambient intelligence and autonomous control are not part of the original concept of the Internet
of things. Ambient intelligence and autonomous control do not necessarily require Internet
structures, either. However, there is a shift in research (by companies such as Intel) to integrate
the concepts of the IoT and autonomous control, with initial outcomes towards this direction
considering objects as the driving force for autonomous IoT. [103] A promising approach in this
context is deep reinforcement learning, since most IoT systems provide a dynamic and
interactive environment.[104] Training an agent (i.e., an IoT device) to behave smartly in such an
environment cannot be addressed by conventional machine learning algorithms such
as supervised learning. With a reinforcement learning approach, a learning agent can sense the
environment's state (e.g., sensing home temperature), perform actions (e.g., turning HVAC on or
off) and learn by maximizing the accumulated rewards it receives in the long term.
IoT intelligence can be offered at three levels: IoT devices, Edge/Fog nodes, and Cloud
computing.[105] The need for intelligent control and decision-making at each level depends on the
time sensitivity of the IoT application. For example, an autonomous vehicle's camera needs to
perform real-time obstacle detection to avoid an accident. This fast decision making would not be
possible by transferring data from the vehicle to cloud instances and returning the predictions
to the vehicle. Instead, all of the operation should be performed locally in the vehicle.
Integrating advanced machine learning algorithms including deep learning into IoT devices is an
active research area to make smart objects closer to reality. Moreover, it is possible to get the
most value out of IoT deployments through analyzing IoT data, extracting hidden information,
and predicting control decisions. A wide variety of machine learning techniques have been used
in IoT domain ranging from traditional methods such as regression, support vector machine,
and random forest to advanced ones such as convolutional neural networks, LSTM,
and variational autoencoder. [106][105]
In the future, the Internet of Things may be a non-deterministic and open network in which auto-
organized or intelligent entities (web services, SOA components) and virtual objects (avatars)
will be interoperable and able to act independently (pursuing their own objectives or shared
ones) depending on the context, circumstances or environments. Autonomous behavior through
the collection and reasoning of context information as well as the object's ability to detect
changes in the environment (faults affecting sensors) and introduce suitable mitigation measures
constitutes a major research trend, [107] clearly needed to provide credibility to the IoT
technology. Modern IoT products and solutions in the marketplace use a variety of different
technologies to support such context-aware automation, but more sophisticated forms of
intelligence are required to permit sensor units and intelligent cyber-physical systems to be
deployed in real environments.[108]
Architecture
IIoT system architecture, in its simplest view, consists of three tiers: Tier 1: Devices, Tier 2: the
Edge Gateway, and Tier 3: the Cloud. [109] Devices include networked things, such as the sensors
and actuators found in IIoT equipment, particularly those that use protocols such as Modbus,
Zigbee, or proprietary protocols, to connect to an edge gateway. [109] The edge gateway tier
consists of sensor data aggregation systems that provide functionality such as pre-
processing of the data and securing connectivity to the cloud, using systems such as WebSockets,
the event hub, and, in some cases, edge analytics or fog computing. [109] The final tier includes
the cloud application built for the IIoT using a microservices architecture, which is usually
polyglot and inherently secure, using HTTPS/OAuth. It includes various database
systems that store sensor data, such as time series databases or asset stores using backend data
storage systems (e.g. Cassandra, Postgres). [109] The cloud tier in most cloud-based IoT systems
features an event queuing and messaging system that handles the communication that transpires in all
tiers.[110] Some experts classify the three tiers in the IIoT system as edge, platform, and
enterprise, connected by the proximity network, access network, and service network,
respectively.[111]
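The Tier-2 pre-processing role can be sketched as follows: the gateway aggregates a batch of raw device samples locally and forwards only a compact summary to the cloud tier. The message fields and device name are hypothetical:

```python
import json
import statistics

# Sketch of edge-gateway pre-processing: reduce a batch of raw readings to one
# compact, cloud-bound summary message. Field names are invented.
def summarise(samples):
    """Aggregate a batch of {"device", "value"} readings into one JSON message."""
    values = [s["value"] for s in samples]
    return json.dumps({
        "device": samples[0]["device"],
        "count": len(values),
        "mean": round(statistics.fmean(values), 2),
        "max": max(values),
    })

batch = [{"device": "pump-1", "value": v} for v in (10.0, 12.0, 11.0, 13.0)]
print(summarise(batch))
```

Sending one summary per batch instead of every raw sample is the basic bandwidth and cost argument for putting computation at the edge rather than streaming everything to the cloud.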
Building on the Internet of things, the web of things is an architecture for the application layer of
the Internet of things looking at the convergence of data from IoT devices into Web applications
to create innovative use-cases. In order to program and control the flow of information in the
Internet of things, a predicted architectural direction is being called BPM Everywhere which is a
blending of traditional process management with process mining and special capabilities to
automate the control of large numbers of coordinated devices. [citation needed]
Network architecture
The Internet of things requires huge scalability in the network space to handle the surge of
devices.[112] IETF 6LoWPAN would be used to connect devices to IP networks. With billions of
devices[113] being added to the Internet space, IPv6 will play a major role in handling the network
layer scalability. IETF's Constrained Application Protocol, ZeroMQ, and MQTT would provide
lightweight data transport.
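As a concrete example of why MQTT suits constrained devices, its topic filters are simple string patterns: `+` matches exactly one topic level and `#` matches the whole remainder. The following is a simplified sketch of that matching rule (it ignores edge cases such as `$`-prefixed topics covered in the full specification), with invented topic names:

```python
# Simplified sketch of MQTT topic-filter matching: '+' matches one level,
# '#' matches this level and everything below it.
def matches(topic_filter, topic):
    f_parts, t_parts = topic_filter.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":                      # matches the rest of the topic
            return True
        if i >= len(t_parts):                # topic ran out of levels
            return False
        if part != "+" and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)      # no leftover topic levels

print(matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(matches("factory/#", "factory/line2/pressure"))                 # True
print(matches("factory/+/temperature", "factory/line1/pressure"))     # False
```

A broker evaluating subscriptions does essentially this per message, which is why the protocol can run on very small devices and servers alike.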
Fog computing is a viable alternative for preventing such large bursts of data from flowing through the
Internet.[114] The edge devices' computation power can be used to analyse and process data, thus
providing easy real-time scalability. [citation needed]
Complexity
In semi-open or closed loops (i.e. value chains, whenever a global finality can be settled) the IoT
will often be considered and studied as a complex system[115] due to the huge number of different
links, interactions between autonomous actors, and its capacity to integrate new actors. At the
overall stage (full open loop) it will likely be seen as a chaotic environment
(since systems always have finality). As a practical approach, not all elements in the Internet of
things run in a global, public space. Subsystems are often implemented to mitigate the risks of
privacy, control and reliability. For example, domestic robotics (domotics) running inside a
smart home might only share data within and be available via a local network.[116] Managing and
controlling a highly dynamic ad hoc network of IoT things and devices is a tough task with
traditional network architectures; Software-Defined Networking (SDN) provides an agile, dynamic
solution that can cope with the special requirements of diverse innovative IoT applications. [117]
Size considerations
The Internet of things would encode 50 to 100 trillion objects, and be able to follow the
movement of those objects. Human beings in surveyed urban environments are each surrounded
by 1000 to 5000 trackable objects.[118] In 2015 there were already 83 million smart devices in
people's homes. This number was expected to grow to 193 million devices by 2020 and to continue
growing in the near future.[29]
The number of online-capable devices grew 31% from 2016 to 8.4 billion in 2017. [101]
Space considerations
In the Internet of things, the precise geographic location of a thing—and also the precise
geographic dimensions of a thing—will be critical.[119] To date, facts about a thing, such as its
location in time and space, have been less critical to track because the person processing the
information can decide whether or not that information is important to the action being taken,
and if so, add the missing information (or decide not to take the action). (Note that some things
in the Internet of things will be sensors, and sensor location is usually important. [120])
The GeoWeb and Digital Earth are promising applications that become possible when things can
become organized and connected by location. However, the challenges that remain include the
constraints of variable spatial scales, the need to handle massive amounts of data, and an
indexing for fast search and neighbor operations. In the Internet of things, if things are able to
take actions on their own initiative, this human-centric mediation role is eliminated. Thus, the
time-space context that we as humans take for granted must be given a central role in this
information ecosystem. Just as standards play a key role in the Internet and the Web, geospatial
standards will play a key role in the Internet of things. [121][122]
A solution to "basket of remotes"
Many IoT devices have a potential to take a piece of this market. Jean-Louis Gassée (a member of
Apple's initial alumni team and co-founder of BeOS) has addressed this topic in an article on Monday
Note,[123] where he predicts that the most likely problem will be what he calls the "basket of
remotes" problem, where we'll have hundreds of applications to interface with hundreds of
devices that don't share protocols for speaking with one another.[123] For improved user
interaction, some technology leaders are joining forces to create standards for communication
between devices to solve this problem. Others are turning to the concept of predictive interaction
of devices, "where collected data is used to predict and trigger actions on the specific devices"
while making them work together.[124]

Enabling technologies for IoT


There are many technologies that enable the IoT. Crucial to the field is the network used to
communicate between devices of an IoT installation, a role that several wireless or wired
technologies may fulfill: [125][126][127]
Addressability
The original idea of the Auto-ID Center is based on RFID-tags and distinct identification through
the Electronic Product Code. This has evolved into objects having an IP address or URI.[128] An
alternative view, from the world of the Semantic Web[129] focuses instead on making all things
(not just those electronic, smart, or RFID-enabled) addressable by the existing naming protocols,
such as URI. The objects themselves do not converse, but they may now be referred to by other
agents, such as powerful centralized servers acting for their human owners. [130] Integration with
the Internet implies that devices will use an IP address as a distinct identifier. Due to the limited
address space of IPv4 (which allows for 4.3 billion different addresses), objects in the IoT will
have to use the next generation of the Internet protocol (IPv6) to scale to the extremely large
address space required.[131][132][133] Internet-of-things devices additionally will benefit from the
stateless address auto-configuration present in IPv6,[134] as it reduces the configuration overhead
on the hosts,[132] and the IETF 6LoWPAN header compression. To a large extent, the future of
the Internet of things will not be possible without the support of IPv6; and consequently, the
global adoption of IPv6 in the coming years will be critical for the successful development of the
IoT in the future.[133]
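The scale argument can be made concrete with Python's standard ipaddress module; the 2001:db8::/64 prefix below is the documentation-only example range, not a real deployment:

```python
import ipaddress

# Comparing the IPv4 and IPv6 address spaces, and inspecting a single IPv6
# subnet. The 2001:db8::/64 prefix is the reserved documentation example.
print(2**32)    # IPv4 address space: about 4.3 billion addresses
print(2**128)   # IPv6 address space

net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)                           # 2**64 hosts in one subnet
print(ipaddress.ip_address("2001:db8::1") in net)  # membership test
```

A single standard /64 subnet already holds 2**64 addresses, more than four billion times the whole IPv4 Internet, which is why IPv6 (together with stateless address auto-configuration) is treated as a prerequisite for IoT-scale addressing.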
Short-range wireless

 Bluetooth mesh networking – Specification providing a mesh networking variant to Bluetooth
Low Energy (BLE) with an increased number of nodes and a standardized application
layer (models).
 Light-Fidelity (Li-Fi) – Wireless communication technology similar to the Wi-Fi standard,
but using visible light communication for increased bandwidth.
 Near-field communication (NFC) – Communication protocols enabling two electronic
devices to communicate within a 4 cm range.
 QR codes and barcodes – Machine-readable optical tags that store information about the item
to which they are attached.
 Radio-frequency identification (RFID) – Technology using electromagnetic fields to read
data stored in tags embedded in other items.
 Transport Layer Security – Network security protocol.
 Wi-Fi – technology for local area networking based on the IEEE 802.11 standard, where
devices may communicate through a shared access point or directly between individual
devices.
 ZigBee – Communication protocols for personal area networking based on the IEEE
802.15.4 standard, providing low power consumption, low data rate, low cost, and high
throughput.
Medium-range wireless

 LTE-Advanced – High-speed communication specification for mobile networks. Provides
enhancements to the LTE standard with extended coverage, higher throughput, and lower
latency.
Long-range wireless

 Low-power wide-area networking (LPWAN) – Wireless networks designed to allow long-range
communication at a low data rate, reducing power and cost for transmission. Available
LPWAN technologies and protocols: LoRaWAN, Sigfox, NB-IoT, Weightless.
 Very small aperture terminal (VSAT) – Satellite communication technology using small dish
antennas for narrowband and broadband data.
Wired

 Ethernet – General purpose networking standard using twisted pair and fiber optic links in
conjunction with hubs or switches.
 Power-line communication (PLC) – Communication technology using electrical wiring to
carry power and data. Specifications such as HomePlug or G.hn utilize PLC for networking
IoT devices.
Standards and standards organizations
This is a list of technical standards for the IoT, most of which are open standards, and
the standards organizations that aspire to set them successfully. [135][136]

 Auto-ID Labs (Auto Identification Center) – Networked RFID (radio-frequency identification)
and emerging sensing technologies.
 EPCglobal (Electronic Product Code Technology) – Standards for adoption of EPC (Electronic
Product Code) technology.
 FDA (U.S. Food and Drug Administration) – UDI (Unique Device Identification) system for
distinct identifiers for medical devices.
 GS1 – Standards for UIDs ("unique" identifiers) and RFID of fast-moving consumer goods
(consumer packaged goods), health care supplies, and other things. The parent organization
comprises member organizations such as GS1 US.
 IEEE (Institute of Electrical and Electronics Engineers) – Underlying communication
technology standards such as IEEE 802.15.4.
 IETF (Internet Engineering Task Force) – Standards that comprise TCP/IP (the Internet
protocol suite).
 MTConnect Institute – MTConnect is a manufacturing industry standard for data exchange with
machine tools and related industrial equipment. It is important to the IIoT subset of the IoT.
 O-DF (Open Data Format) – A standard published by the Internet of Things Work Group of
The Open Group in 2014, which specifies a generic information model structure meant to be
applicable for describing any "Thing", as well as for publishing, updating and querying
information when used together with O-MI (Open Messaging Interface).
 O-MI (Open Messaging Interface) – A standard published by the Internet of Things Work
Group of The Open Group in 2014, which specifies a limited set of key operations needed in
IoT systems, notably different kinds of subscription mechanisms based on the Observer pattern.
 OCF (Open Connectivity Foundation) – Standards for simple devices using CoAP (Constrained
Application Protocol). OCF supersedes the OIC (Open Interconnect Consortium).
 OMA (Open Mobile Alliance) – OMA DM and OMA LWM2M for IoT device management, as
well as GotAPI, which provides a secure framework for IoT applications.
 XSF (XMPP Standards Foundation) – Protocol extensions of XMPP (Extensible Messaging and
Presence Protocol), the open standard of instant messaging.
Politics and civic engagement


Some scholars and activists argue that the IoT can be used to create new models of civic
engagement if device networks can be open to user control and inter-operable platforms. Philip
N. Howard, a professor and author, writes that political life in both democracies and
authoritarian regimes will be shaped by the way the IoT will be used for civic engagement. For
that to happen, he argues that any connected device should be able to divulge a list of the
"ultimate beneficiaries" of its sensor data and that individual citizens should be able to add new
organizations to the beneficiary list. In addition, he argues that civil society groups need to start
developing their IoT strategy for making use of data and engaging with the public. [137]

Government regulation on IoT


One of the key drivers of the IoT is data. The success of the idea of connecting devices to make
them more efficient is dependent upon access to, and the storage and processing of, data. For this
purpose, companies working on the IoT collect data from multiple sources and store it in their
cloud networks for further processing. This leaves the door wide open for privacy and security
dangers and for single-point vulnerability of multiple systems. [138] The other issues pertain to
consumer choice and ownership of data[139] and how it is used. Though still in their infancy,
regulations and governance regarding these issues of privacy, security, and data ownership
continue to develop.[140][141][142] IoT regulation depends on the country. Some examples of
legislation that is relevant to privacy and data collection are: the US Privacy Act of 1974, OECD
Guidelines on the Protection of Privacy and Transborder Flows of Personal Data of 1980, and the
EU Directive 95/46/EC of 1995.[143]
Current regulatory environment:
A report published by the Federal Trade Commission (FTC) in January 2015 made the following
three recommendations:[144]

 Data security – At the time of designing IoT devices, companies should ensure that data
collection, storage and processing are secure at all times. Companies should adopt a "defence in
depth" approach and encrypt data at each stage. [145]
 Data consent – users should have a choice as to what data they share with IoT companies and
the users must be informed if their data gets exposed.
 Data minimization – IoT companies should collect only the data they need and retain the
collected information only for a limited time.
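The data-minimization recommendation above can be made concrete with a small sketch: keep only the fields the application actually needs, and discard records once a retention window elapses. The field names, the retention period, and the helper functions here are illustrative assumptions, not part of any FTC guidance.

```python
import time

# Hypothetical illustration of the data-minimization guidance: keep only
# the fields an application actually needs, and discard records once a
# retention window has elapsed. Field names and the window are assumed.

NEEDED_FIELDS = {"device_id", "temperature"}   # assumed application needs
RETENTION_SECONDS = 3600                       # assumed retention policy

def minimize(reading: dict) -> dict:
    """Strip a raw sensor reading down to the needed fields plus a timestamp."""
    record = {k: v for k, v in reading.items() if k in NEEDED_FIELDS}
    record["stored_at"] = time.time()
    return record

def purge_expired(store: list, now: float) -> list:
    """Drop records older than the retention window."""
    return [r for r in store if now - r["stored_at"] < RETENTION_SECONDS]

raw = {"device_id": "t-17", "temperature": 21.5,
       "owner_name": "Alice", "gps": (51.5, -0.1)}   # extra fields not needed
store = [minimize(raw)]
# 'owner_name' and 'gps' were never stored:
assert set(store[0]) == {"device_id", "temperature", "stored_at"}
```

Note that both halves of the recommendation appear here: sensitive fields are dropped before storage, and even the needed fields are retained only for a limited time.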
However, the FTC stopped at just making recommendations for now. According to an FTC
analysis, the existing framework, consisting of the FTC Act, the Fair Credit Reporting Act, and
the Children's Online Privacy Protection Act, along with developing consumer education and
business guidance, participation in multi-stakeholder efforts and advocacy to other agencies at
the federal, state and local level, is sufficient to protect consumer rights. [146]
A resolution passed by the Senate in March 2015 is being considered by Congress.[147] This resolution recognized the need to formulate a National Policy on IoT and to address the matters of privacy, security, and spectrum. Furthermore, to provide an impetus to the IoT
ecosystem, in March 2016, a bipartisan group of four Senators proposed a bill, The Developing
Innovation and Growing the Internet of Things (DIGIT) Act, to direct the Federal
Communications Commission to assess the need for more spectrum to connect IoT devices.
Several standards for the IoT industry are being established in the automotive sector, because most concerns arising from the use of connected cars also apply to healthcare devices. In
fact, the National Highway Traffic Safety Administration (NHTSA) is preparing cybersecurity
guidelines and a database of best practices to make automotive computer systems more
secure.[148]
A recent report from the World Bank examines the challenges and opportunities in government
adoption of IoT.[149] These include:
 Still early days for the IoT in government
 Underdeveloped policy and regulatory frameworks
 Unclear business models, despite strong value proposition
 Clear institutional and capacity gap in government AND the private sector
 Inconsistent data valuation and management
 Infrastructure a major barrier
 Government as an enabler
 Most successful pilots share common characteristics (public-private partnership, local,
leadership)

Criticism and controversies


Platform fragmentation
The IoT suffers from platform fragmentation and a lack of technical standards,[150][151][152][153][154][155][156] a situation where the variety of IoT devices, in terms of both hardware variations and differences in the software running on them, makes it hard to develop applications that work consistently across inconsistent technology ecosystems.[1] Customers may be hesitant to bet their IoT future on proprietary software or hardware devices that use proprietary protocols that may fade or become difficult to customize and interconnect.[2]
The IoT's amorphous computing nature is also a problem for security, since patches to bugs
found in the core operating system often do not reach users of older and lower-price
devices.[157][158][159] One set of researchers says that the failure of vendors to support older devices with patches and updates leaves more than 87% of active Android devices vulnerable.[160][161]
Privacy, autonomy, and control
Philip N. Howard, a professor and author, writes that the Internet of things offers immense
potential for empowering citizens, making government transparent, and broadening information
access. Howard cautions, however, that privacy threats are enormous, as is the potential for
social control and political manipulation. [162]
Concerns about privacy have led many to consider the possibility that big data infrastructures
such as the Internet of things and data mining are inherently incompatible with
privacy.[163] Writer Adam Greenfield claims that these technologies are not only an invasion of
public space but are also being used to perpetuate normative behavior, citing an instance of
billboards with hidden cameras that tracked the demographics of passersby who stopped to read
the advertisement.[164]
The Internet of Things Council compared the increased prevalence of digital surveillance due to
the Internet of things to the conceptual panopticon described by Jeremy Bentham in the 18th
Century.[165] The assertion was defended by the works of French philosophers Michel
Foucault and Gilles Deleuze. In Discipline and Punish: The Birth of the Prison, Foucault asserts
that the panopticon was a central element of the discipline society developed during
the Industrial Era.[166] Foucault also argued that the discipline systems established in factories
and school reflected Bentham's vision of panopticism.[166] In his 1992 paper "Postscripts on the
Societies of Control," Deleuze wrote that the discipline society had transitioned into a control
society, with the computer replacing the panopticon as an instrument of discipline and control
while still maintaining the qualities similar to that of panopticism.[167]
The privacy of households could be compromised by solely analyzing smart home network
traffic patterns without dissecting the contents of encrypted application data, yet a synthetic
packet injection scheme can be used to safely overcome such invasion of privacy. [168]
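The traffic-analysis risk described above can be illustrated with a toy model: even when payloads are encrypted, an observer can guess which device is active purely from traffic volume, and injecting synthetic (cover) packets flattens the pattern. The device behavior, byte counts, and threshold below are illustrative assumptions, not drawn from any real capture or from the cited scheme.

```python
# Hedged sketch: a passive observer labels intervals as 'active' or
# 'idle' from byte counts alone; padding every interval to a constant
# rate with synthetic packets hides the pattern. All numbers are assumed.

IDLE_BYTES = 200        # assumed idle chatter per interval
ACTIVE_BYTES = 50_000   # assumed streaming camera per interval

def observe(intervals):
    """An eavesdropper labels each interval 'active' above a threshold."""
    return ["active" if b > 10_000 else "idle" for b in intervals]

def with_cover_traffic(intervals, target=ACTIVE_BYTES):
    """Pad every interval up to a constant rate with synthetic packets."""
    return [max(b, target) for b in intervals]

real = [IDLE_BYTES, IDLE_BYTES, ACTIVE_BYTES, IDLE_BYTES]
assert observe(real) == ["idle", "idle", "active", "idle"]   # pattern leaks
padded = with_cover_traffic(real)
assert observe(padded) == ["active"] * 4                     # pattern hidden
```

The trade-off this makes visible is bandwidth: hiding the activity pattern costs constant-rate traffic even when the household is idle.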
Peter-Paul Verbeek, a professor of philosophy of technology at the University of Twente,
Netherlands, writes that technology already influences our moral decision making, which in turn
affects human agency, privacy and autonomy. He cautions against viewing technology merely as
a human tool and advocates instead to consider it as an active agent.[169]
Justin Brookman, of the Center for Democracy and Technology, expressed concern regarding the
impact of the IoT on consumer privacy, saying that "There are some people in the commercial
space who say, 'Oh, big data — well, let's collect everything, keep it around forever, we'll pay for
somebody to think about security later.' The question is whether we want to have some sort of
policy framework in place to limit that."[170]
Tim O'Reilly believes that the way companies sell IoT devices to consumers is misplaced,
disputing the notion that the IoT is about gaining efficiency from putting all kinds of devices
online and postulating that the "IoT is really about human augmentation. The applications are
profoundly different when you have sensors and data driving the decision-making."[171]
Editorials at WIRED have also expressed concern, one stating "What you're about to lose is your
privacy. Actually, it's worse than that. You aren't just going to lose your privacy, you're going to
have to watch the very concept of privacy be rewritten under your nose." [172]
The American Civil Liberties Union (ACLU) expressed concern regarding the ability of IoT to
erode people's control over their own lives. The ACLU wrote that "There's simply no way to
forecast how these immense powers – disproportionately accumulating in the hands of
corporations seeking financial advantage and governments craving ever more control – will be
used. Chances are big data and the Internet of things will make it harder for us to control our own
lives, as we grow increasingly transparent to powerful corporations and government institutions
that are becoming more opaque to us."[173]
In response to rising concerns about privacy and smart technology, in 2007 the British
Government stated it would follow formal Privacy by Design principles when implementing
their smart metering program. The program would lead to replacement of traditional power
meters with smart power meters, which could track and manage energy usage more
accurately.[174] However, the British Computer Society doubts these principles were ever actually implemented.[175] In 2009 the Dutch Parliament rejected a similar smart metering program, basing its decision on privacy concerns. The Dutch program was later revised and passed in 2011.[175]
Data storage
A challenge for producers of IoT applications is to clean, process and interpret the vast amount of data gathered by the sensors. One proposed solution for analyzing this information is the use of Wireless Sensor Networks.[176] These networks share data among sensor nodes, which is then sent to a distributed system for analysis of the sensory data.[177]
Another challenge is the storage of this bulk data. Depending on the application, there could be
high data acquisition requirements, which in turn lead to high storage requirements. Currently
the Internet is already responsible for 5% of the total energy generated,[176] and a "daunting
challenge to power" IoT devices to collect and even store data still remains. [178]
Security
Concerns have been raised that the IoT is being developed rapidly without appropriate
consideration of the profound security challenges involved [179] and the regulatory changes that
might be necessary.[180][181] Most of the technical security concerns are similar to those of
conventional servers, workstations and smartphones, but security challenges unique to the IoT
continue to develop, including industrial security controls, hybrid systems, IoT-specific business
processes, and end nodes.[182]
Security is the biggest concern in adopting Internet of things technology. [183] In particular, as the
Internet of things spreads widely, cyber attacks are likely to become an increasingly physical
(rather than simply virtual) threat. [184] The current IoT space comes with numerous security
vulnerabilities. These vulnerabilities include weak authentication (IoT devices are being used
with default credentials), unencrypted messages sent between devices, SQL injections and lack
of verification or encryption of software updates.[185] These flaws allow attackers to easily intercept data and collect personally identifiable information (PII), steal user credentials at login, or inject malware into newly updated firmware.[185]
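The weak-authentication problem above is easy to see in code: many IoT devices ship with factory default credentials, so an attacker (or an auditor) only has to try a short dictionary. The credential table and function names below are illustrative; real botnets such as Mirai used similar but larger tables.

```python
# Minimal sketch of the "weak authentication" vulnerability: devices
# still on factory default credentials can be found with a tiny
# dictionary check. The credential list is illustrative.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
]

def is_using_default(username: str, password: str) -> bool:
    return (username, password) in DEFAULT_CREDENTIALS

def audit(devices: dict) -> list:
    """Return the IDs of devices still on factory credentials."""
    return [dev_id for dev_id, (u, p) in devices.items()
            if is_using_default(u, p)]

fleet = {
    "cam-01": ("admin", "admin"),      # never reconfigured
    "cam-02": ("ops", "S3cure!pass"),  # rotated at deployment
}
assert audit(fleet) == ["cam-01"]
```

The same check that lets a defender audit a fleet lets an attacker compromise it, which is why shipping with unique per-device credentials matters.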
In a January 2014 article in Forbes, cyber-security columnist Joseph Steinberg listed many
Internet-connected appliances that can already "spy on people in their own homes" including
televisions, kitchen appliances,[186] cameras, and thermostats.[187] Computer-controlled devices in
automobiles such as brakes, engine, locks, hood and trunk releases, horn, heat, and dashboard
have been shown to be vulnerable to attackers who have access to the on-board network. In some
cases, vehicle computer systems are Internet-connected, allowing them to be exploited
remotely.[188] For example, a hacker can gain unauthorized access to IoT devices due to their set-
up; that is, because these devices are connected, Internet-enabled, and lack the necessary
protective measures.[189] By 2008 security researchers had shown the ability to remotely control
pacemakers without authorization. Later, hackers demonstrated remote control of insulin
pumps[190] and implantable cardioverter defibrillators. [191] Many of these IoT devices have severe
operational limitations on their physical size and by extension the computational power available
to them. These constraints often make them unable to directly use basic security measures such
as implementing firewalls or using strong cryptosystems to encrypt their communications with
other devices.[192]
The U.S. National Intelligence Council in an unclassified report maintains that it would be hard
to deny "access to networks of sensors and remotely-controlled objects by enemies of the United
States, criminals, and mischief makers... An open market for aggregated sensor data could serve
the interests of commerce and security no less than it helps criminals and spies identify
vulnerable targets. Thus, massively parallel sensor fusion may undermine social cohesion, if it
proves to be fundamentally incompatible with Fourth-Amendment guarantees against
unreasonable search."[193] In general, the intelligence community views the Internet of things as a
rich source of data.[194]
In 2016, a distributed denial of service attack powered by Internet of things devices running
the Mirai malware took down a DNS provider and major web sites.[195] The Mirai botnet had infected roughly 65,000 IoT devices within the first 20 hours.[196] Eventually the infections grew to between 200,000 and 300,000.[196] Brazil, Colombia and Vietnam accounted for 41.5% of the infections.[196] The Mirai botnet had singled out specific IoT devices: DVRs, IP cameras, routers and printers.[196] The vendors with the most infected devices were identified as Dahua, Huawei, ZTE, Cisco, ZyXEL and MikroTik.[196] In May 2017, Junade
Ali, a Computer Scientist at Cloudflare noted that native DDoS vulnerabilities exist in IoT
devices due to a poor implementation of the Publish–subscribe pattern.[197][198] These sorts of
attacks have caused security experts to view IoT as a real threat to Internet services. [199]
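The publish-subscribe amplification issue mentioned above can be sketched in a few lines: a broker with no limits turns one cheap publish into a delivery to every subscriber, so an attacker who can publish generates large downstream traffic. This toy broker is an illustrative assumption; it is not the mechanism of any specific IoT broker or the vulnerability Ali described in detail.

```python
from collections import defaultdict

# Toy publish-subscribe broker with no rate or fan-out limits: one
# publish is amplified into a message per subscriber. Illustrative only.

class NaiveBroker:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, msg):
        delivered = 0
        for cb in self.subs[topic]:
            cb(msg)          # push to every subscriber, unconditionally
            delivered += 1
        return delivered     # amplification factor = subscriber count

broker = NaiveBroker()
received = []
for _ in range(1000):                      # 1000 subscribed devices
    broker.subscribe("alerts", received.append)
fanout = broker.publish("alerts", "x")     # one small publish ...
assert fanout == 1000                      # ... delivered 1000 times
```

Mitigations in real brokers include authenticating publishers, capping per-client publish rates, and bounding subscriber queues so one message cannot be amplified without limit.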
On 31 January 2019, the Washington Post wrote an article regarding the security and ethical
challenges that can occur with IoT doorbells and cameras: "Last month, Ring got caught
allowing its team in Ukraine to view and annotate certain user videos; the company says it only
looks at publicly shared videos and those from Ring owners who provide consent. Just last week,
a California family’s Nest camera let a hacker take over and broadcast fake audio warnings about
a missile attack, not to mention peer in on them, when they used a weak password"[200]
There have been a range of responses to concerns over security. The Internet of Things Security
Foundation (IoTSF) was launched on 23 September 2015 with a mission to secure the Internet of
things by promoting knowledge and best practice. Its founding board is made from technology
providers and telecommunications companies. In addition, large IT companies are continuously
developing innovative solutions to ensure the security of IoT devices. In 2017, Mozilla launched Project Things, which allows routing IoT devices through a secure Web of Things gateway.[201] According to estimates from KBV Research,[202] the overall IoT security market[203] is expected to grow at a 27.9% rate during 2016–2022 as a result of growing infrastructural concerns and the diversified use of the Internet of things.[204][205]
Some argue that governmental regulation is necessary to secure IoT devices and the wider Internet, as market incentives to secure IoT devices are insufficient.[206][180][181]
Safety
IoT systems are typically controlled by event-driven smart apps that take as input either sensed
data, user inputs, or other external triggers (from the Internet) and command one or more
actuators towards providing different forms of automation. [207] Examples of sensors include
smoke detectors, motion sensors, and contact sensors. Examples of actuators include smart locks,
smart power outlets, and door controls. Popular control platforms on which third-party
developers can build smart apps that interact wirelessly with these sensors and actuators include
Samsung's SmartThings,[208] Apple's HomeKit,[209] and Amazon's Alexa,[210] among others.
A problem specific to IoT systems is that buggy apps, unforeseen bad app interactions, or
device/communication failures, can cause unsafe and dangerous physical states, e.g., "unlock the
entrance door when no one is at home" or "turn off the heater when the temperature is below 0
degrees Celsius and people are sleeping at night".[207] Detecting flaws that lead to such states
requires a holistic view of installed apps, component devices, their configurations, and more
importantly, how they interact. Recently, researchers from the University of California Riverside
have proposed IotSan, a novel practical system that uses model checking as a building block to
reveal "interaction-level" flaws by identifying events that can lead the system to unsafe
states.[207] They have evaluated IotSan on the Samsung SmartThings platform. From 76 manually
configured systems, IotSan detects 147 vulnerabilities (i.e., violations of safe physical
states/properties).
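The interaction-level flaws described above can be demonstrated with a brute-force toy checker: enumerate event orderings for two independently written smart apps and flag any reachable state that violates a safety property. This is a sketch of the general idea only; the app rules and property are invented, and this is not the IotSan model checker itself.

```python
from itertools import product

# Toy interaction check: two apps that are individually reasonable can
# jointly reach an unsafe state. App rules and the property are assumed.

def app_vacation(state, event):
    if event == "vacation_on":
        state["lock"] = "locked"
    return state

def app_delivery(state, event):
    if event == "motion_at_door":
        state["lock"] = "unlocked"   # buggy: ignores occupancy
    return state

def unsafe(state):
    # Safety property: door must never be unlocked while nobody is home.
    return state["lock"] == "unlocked" and not state["home"]

def reachable_unsafe(events):
    """Brute-force all event orderings; return one that ends unsafe."""
    for order in product(events, repeat=2):
        state = {"lock": "locked", "home": False}
        for e in order:
            state = app_vacation(state, e)
            state = app_delivery(state, e)
        if unsafe(state):
            return order
    return None

flaw = reachable_unsafe(["vacation_on", "motion_at_door"])
assert flaw is not None   # a motion event can unlock an empty house
```

Real tools like IotSan replace this exhaustive enumeration with model checking so the analysis scales beyond a handful of apps and events.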
Design
Given widespread recognition of the evolving nature of the design and management of the
Internet of things, sustainable and secure deployment of IoT solutions must be designed for "anarchic
scalability."[211] Application of the concept of anarchic scalability can be extended to physical
systems (i.e. controlled real-world objects), by virtue of those systems being designed to account
for uncertain management futures. This hard anarchic scalability thus provides a pathway
forward to fully realize the potential of Internet-of-things solutions by selectively constraining
physical systems to allow for all management regimes without risking physical failure. [211]
Brown University computer scientist Michael Littman has argued that successful execution of
the Internet of things requires consideration of the interface's usability as well as the technology
itself. These interfaces need to be not only more user-friendly but also better integrated: "If users
need to learn different interfaces for their vacuums, their locks, their sprinklers, their lights, and
their coffeemakers, it's tough to say that their lives have been made any easier." [212]
Environmental sustainability impact
A concern regarding Internet-of-things technologies pertains to the environmental impacts of the
manufacture, use, and eventual disposal of all these semiconductor-rich devices.[213] Modern
electronics are replete with a wide variety of heavy metals and rare-earth metals, as well as
highly toxic synthetic chemicals. This makes them extremely difficult to properly recycle.
Electronic components are often incinerated or placed in regular landfills. Furthermore, the
human and environmental cost of mining the rare-earth metals that are integral to modern
electronic components continues to grow. This leads to societal questions concerning the environmental impacts of IoT devices over their lifetime.[214]
Intentional obsolescence of devices
The Electronic Frontier Foundation has raised concerns that companies can use the technologies
necessary to support connected devices to intentionally disable or "brick" their customers'
devices via a remote software update or by disabling a service necessary to the operation of the
device. In one example, home automation devices sold with the promise of a "Lifetime
Subscription" were rendered useless after Nest Labs acquired Revolv and made the decision to
shut down the central servers the Revolv devices had used to operate. [215] As Nest is a company
owned by Alphabet (Google's parent company), the EFF argues this sets a "terrible precedent for
a company with ambitions to sell self-driving cars, medical devices, and other high-end gadgets
that may be essential to a person's livelihood or physical safety." [216]
Owners should be free to point their devices to a different server or collaborate on improved
software. But such action violates the United States DMCA section 1201, which only has an
exemption for "local use". This forces tinkerers who want to keep using their own equipment
into a legal grey area. EFF thinks buyers should refuse electronics and software that prioritize the
manufacturer's wishes above their own. [216]
Examples of post-sale manipulation include Google Nest's shutdown of Revolv, disabled privacy settings on Android, Sony disabling Linux on PlayStation 3, and an enforced EULA on the Wii U.[216]
Confusing terminology
Kevin Lonergan at Information Age, a business-technology magazine, has referred to the terms
surrounding the IoT as a "terminology zoo".[217] The lack of clear terminology is not "useful from
a practical point of view" and a "source of confusion for the end user".[217] A company operating
in the IoT space could be working in anything related to sensor technology, networking,
embedded systems, or analytics.[217] According to Lonergan, the term IoT was coined before
smart phones, tablets, and devices as we know them today existed, and there is a long list of
terms with varying degrees of overlap and technological convergence: Internet of things, Internet
of everything (IoE), Internet of Goods (Supply Chain), industrial Internet, pervasive computing,
pervasive sensing, ubiquitous computing, cyber-physical systems (CPS), wireless sensor
networks (WSN), smart objects, digital twin, cyberobjects or avatars,[115] cooperating
objects, machine to machine (M2M), ambient intelligence (AmI), Operational technology (OT),
and information technology (IT).[217] Regarding IIoT, an industrial sub-field of IoT, the Industrial
Internet Consortium's Vocabulary Task Group has created a "common and reusable vocabulary
of terms"[218] to ensure "consistent terminology"[218][219] across publications issued by the
Industrial Internet Consortium. IoT One has created an IoT Terms Database including a New
Term Alert[220] to be notified when a new term is published. As of March 2017, this database
aggregates 711 IoT-related terms, while keeping material "transparent and
comprehensive."[221][222]

IoT adoption barriers

GE Digital CEO William Ruh speaking about GE's attempts to gain a foothold in the market for IoT services at the first IEEE Computer Society TechIgnite conference.
Lack of interoperability and unclear value propositions
Despite a shared belief in the potential of the IoT, industry leaders and consumers are facing barriers to adopting IoT technology more widely. Mike Farley argued in Forbes that while IoT
solutions appeal to early adopters, they either lack interoperability or a clear use case for end-
users.[223] A study by Ericsson regarding the adoption of IoT among Danish companies suggests
that many struggle "to pinpoint exactly where the value of IoT lies for them". [224]
Privacy and security concerns
According to a recent study by Noura Aleisa and Karen Renaud at the University of Glasgow,
"the Internet of things' potential for major privacy invasion is a concern"[225] with much of the research "disproportionally focused on the security concerns of IoT."[225] Among the "proposed
solutions in terms of the techniques they deployed and the extent to which they satisfied core
privacy principles",[225] only very few turned out to be fully satisfactory. Louis Basenese,
investment director at Wall Street Daily, has criticized the industry's lack of attention to security
issues:
"Despite high-profile and alarming hacks, device manufacturers remain undeterred, focusing on
profitability over security. Consumers need to have ultimate control over collected data,
including the option to delete it if they choose...Without privacy assurances, wide-scale
consumer adoption simply won't happen."[226]
In a post-Snowden world of global surveillance disclosures, consumers take a more active interest in protecting their privacy and demand that IoT devices be screened for potential security vulnerabilities and privacy violations before purchasing them. According to the 2016 Accenture Digital Consumer Survey, in which 28,000 consumers in 28 countries were
polled on their use of consumer technology, security "has moved from being a nagging problem
to a top barrier as consumers are now choosing to abandon IoT devices and services over
security concerns."[227] The survey revealed that "out of the consumers aware of hacker attacks
and owning or planning to own IoT devices in the next five years, 18 percent decided to
terminate the use of the services and related services until they get safety guarantees." [227] This
suggests that consumers increasingly perceive privacy risks and security concerns to outweigh
the value propositions of IoT devices and opt to postpone planned purchases or service
subscriptions.[227]
Traditional governance structures
Town of Internet of Things in Hangzhou, China
A study issued by Ericsson regarding the adoption of Internet of things among Danish companies
identified a "clash between IoT and companies' traditional governance structures, as IoT still
presents both uncertainties and a lack of historical precedence."[224] Among the respondents
interviewed, 60 percent stated that they "do not believe they have the organizational capabilities,
and three of four do not believe they have the processes needed, to capture the IoT
opportunity."[224] This has led to a need to understand organizational culture in order to
facilitate organizational design processes and to test new innovation management practices. A
lack of digital leadership in the age of digital transformation has also stifled innovation and IoT
adoption to a degree that many companies, in the face of uncertainty, "were waiting for the
market dynamics to play out",[224] or further action in regards to IoT "was pending competitor
moves, customer pull, or regulatory requirements."[224] Some of these companies risk being
'kodaked' – "Kodak was a market leader until digital disruption eclipsed film photography with
digital photos"[228] – failing to "see the disruptive forces affecting their industry" [229] and "to truly
embrace the new business models the disruptive change opens up."[229] Scott Anthony has
written in Harvard Business Review that Kodak "created a digital camera, invested in the
technology, and even understood that photos would be shared online" [229] but ultimately failed to
realize that "online photo sharing was the new business, not just a way to expand the printing
business."[229]
Business planning and models
According to a 2018 study, 70–75% of IoT deployments were stuck in the pilot or prototype stage, unable to reach scale due in part to a lack of business planning.[230]
Studies on IoT literature and projects show a disproportionate prominence of technology in IoT projects, which are often driven by technological interventions rather than business model innovation.[231][232]

The Internet of Things (IoT) is defined as a paradigm in which objects equipped with
sensors, actuators, and processors communicate with each other to serve a meaningful
purpose. In this paper, we survey state-of-the-art methods, protocols, and applications in
this new emerging area. This survey paper proposes a novel taxonomy for IoT
technologies, highlights some of the most important technologies, and profiles some
applications that have the potential to make a striking difference in human life, especially
for the differently abled and the elderly. As compared to similar survey papers in the
area, this paper is far more comprehensive in its coverage and exhaustively covers most
major technologies spanning from sensors to applications.

1. Introduction
Today the Internet has become ubiquitous, has touched almost every corner of the globe,
and is affecting human life in unimaginable ways. However, the journey is far from over.
We are now entering an era of even more pervasive connectivity where a very wide
variety of appliances will be connected to the web. We are entering an era of the “Internet
of Things” (abbreviated as IoT). This term has been defined by different authors in many
different ways. Let us look at two of the most popular definitions. Vermesan et al. [1]
define the Internet of Things as simply an interaction between the physical and digital
worlds. The digital world interacts with the physical world using a plethora of sensors
and actuators. Another definition by Peña-López et al. [2] defines the Internet of Things
as a paradigm in which computing and networking capabilities are embedded in any kind
of conceivable object. We use these capabilities to query the state of the object and to
change its state if possible. In common parlance, the Internet of Things refers to a new
kind of world where almost all the devices and appliances that we use are connected to a
network. We can use them collaboratively to achieve complex tasks that require a high
degree of intelligence.
For this intelligence and interconnection, IoT devices are equipped with embedded
sensors, actuators, processors, and transceivers. IoT is not a single technology; rather it is
an agglomeration of various technologies that work together in tandem.
Sensors and actuators are devices that help in interacting with the physical environment. The data collected by the sensors has to be stored and processed
intelligently in order to derive useful inferences from it. Note that we broadly define the
term sensor; a mobile phone or even a microwave oven can count as a sensor as long as it
provides inputs about its current state (internal state + environment). An actuator is a
device that is used to effect a change in the environment such as the temperature
controller of an air conditioner.
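The sensor/actuator split described above can be sketched with the paper's own air-conditioner example: the sensor reports the state of the environment, and the actuator effects a change in it. The class and method names are illustrative, not from any IoT framework.

```python
# Minimal sensor/actuator sketch around a shared physical environment.
# Names and the setpoint are illustrative assumptions.

class TemperatureSensor:
    def __init__(self, environment):
        self.env = environment

    def read(self) -> float:
        # Report the current state of the environment.
        return self.env["temperature"]

class ACActuator:
    def __init__(self, environment):
        self.env = environment

    def cool(self, degrees: float):
        # Effect a change in the physical environment.
        self.env["temperature"] -= degrees

env = {"temperature": 28.0}
sensor, ac = TemperatureSensor(env), ACActuator(env)
SETPOINT = 24.0
while sensor.read() > SETPOINT:      # a simple closed control loop
    ac.cool(1.0)
assert sensor.read() == 24.0
```

Even this tiny loop shows the pattern the paper describes: inferences drawn from sensed data (the reading is above the setpoint) drive an actuation back into the physical world.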
The storage and processing of data can be done on the edge of the network itself or in a
remote server. If any preprocessing of data is possible, then it is typically done at either
the sensor or some other proximate device. The processed data is then typically sent to a
remote server. The storage and processing capabilities of an IoT object are also restricted
by the resources available, which are often very constrained due to limitations of size,
energy, power, and computational capability. As a result, the main research challenge is to ensure that we get the right kind of data at the desired level of accuracy. Along with the challenges of data collection and handling, there are challenges in communication as
well. The communication between IoT devices is mainly wireless because they are
generally installed at geographically dispersed locations. The wireless channels often
have high rates of distortion and are unreliable. In this scenario reliably communicating
data without too many retransmissions is an important problem and thus communication
technologies are integral to the study of IoT devices.
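The edge-preprocessing idea above can be sketched as follows: instead of transmitting every raw sample over an unreliable wireless link, a constrained node batches samples locally and sends one summary, cutting the number of transmissions. The batch size and the uplink stand-in are illustrative assumptions.

```python
from statistics import mean

# Edge preprocessing sketch: batch raw samples at the sensor node and
# transmit one summary per batch. Batch size and uplink are assumed.

BATCH = 10
uplink_log = []          # stands in for messages sent to a remote server

def edge_node(samples):
    buffer = []
    for s in samples:
        buffer.append(s)
        if len(buffer) == BATCH:
            # One summary replaces BATCH raw transmissions.
            uplink_log.append({"mean": mean(buffer),
                               "min": min(buffer), "max": max(buffer)})
            buffer.clear()

edge_node([20 + i % 3 for i in range(30)])   # 30 raw samples
assert len(uplink_log) == 3                  # only 3 transmissions
```

On a lossy channel this matters twice over: fewer messages mean fewer retransmissions, and the node spends less of its constrained energy budget on the radio.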
Now, after processing the received data, some action needs to be taken on the basis of the
derived inferences. The nature of actions can be diverse. We can directly modify the
physical world through actuators. Or we may do something virtually. For example, we
can send some information to other smart things.
The process of effecting a change in the physical world is often dependent on its state at
that point of time. This is called context awareness. Each action is taken keeping in
consideration the context because an application can behave differently in different
contexts. For example, a person may not like messages from his office to interrupt him
when he is on vacation.
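The vacation example above can be written as a tiny context-aware rule: the same notification request behaves differently depending on the user's context. The context and message fields are illustrative assumptions.

```python
# Context-aware rule sketch: identical input, different behavior
# depending on context. Field names are illustrative.

def deliver_notification(message: dict, context: dict) -> str:
    if context["status"] == "vacation" and message["source"] == "office":
        return "suppressed"
    return "delivered"

msg = {"source": "office", "text": "Quarterly report due"}
assert deliver_notification(msg, {"status": "vacation"}) == "suppressed"
assert deliver_notification(msg, {"status": "working"}) == "delivered"
```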
Sensors, actuators, compute servers, and the communication network form the core
infrastructure of an IoT framework. However, there are many software aspects that need
to be considered. First, we need a middleware that can be used to connect and manage all
of these heterogeneous components. We need a lot of standardization to connect many
different devices. We shall discuss methods to exchange information and prevailing
standards in Section 7.
The Internet of Things finds various applications in health care, fitness, education,
entertainment, social life, energy conservation, environment monitoring, home
automation, and transport systems. We shall focus on these application areas in Section 9.
We shall find that, in all these application areas, IoT technologies have significantly been
able to reduce human effort and improve the quality of life.

2. Architecture of IoT
There is no single, universally agreed-upon architecture for IoT. Different architectures have been proposed by different researchers.
2.1. Three- and Five-Layer Architectures

The most basic architecture is a three-layer architecture [3–5] as shown in Figure 1. It was introduced in the early stages of research in this area. It has three layers, namely, the perception, network, and application layers.
(i) The perception layer is the physical layer, which has sensors for sensing and gathering information about the environment. It senses some physical parameters or identifies other smart objects in the environment.
(ii) The network layer is responsible for connecting to other smart things, network devices, and servers. Its features are also used for transmitting and processing sensor data.
(iii) The application layer is responsible for delivering application-specific services to the user. It defines various applications in which the Internet of Things can be deployed, for example, smart homes, smart cities, and smart health.
Figure 1: Architecture of IoT (A: three layers) (B: five layers).

The three-layer architecture defines the main idea of the Internet of Things, but it is not
sufficient for research on IoT because research often focuses on finer aspects of the
Internet of Things. That is why many more layered architectures have been proposed in the
literature. One is the five-layer architecture, which additionally includes the processing
and business layers [3–6]. The five layers are the perception, transport, processing,
application, and business layers (see Figure 1). The role of the perception and application
layers is the same as in the three-layer architecture. We outline the function of the
remaining three layers.
(i) The transport layer transfers the sensor data from the perception layer to the processing layer and vice versa through networks such as wireless, 3G, LAN, Bluetooth, RFID, and NFC.
(ii) The processing layer is also known as the middleware layer. It stores, analyzes, and processes the huge amounts of data that come from the transport layer. It can manage and provide a diverse set of services to the lower layers. It employs many technologies such as databases, cloud computing, and big data processing modules.
(iii) The business layer manages the whole IoT system, including applications, business and profit models, and users' privacy. The business layer is out of the scope of this paper. Hence, we do not discuss it further.
Another architecture proposed by Ning and Wang [7] is inspired by the layers of
processing in the human brain. It is inspired by the intelligence and ability of human
beings to think, feel, remember, make decisions, and react to the physical environment. It
is constituted of three parts. First is the human brain, which is analogous to the
processing and data management unit or the data center. Second is the spinal cord, which
is analogous to the distributed network of data processing nodes and smart gateways.
Third is the network of nerves, which corresponds to the networking components and
sensors.
2.2. Cloud and Fog Based Architectures

Let us now discuss two kinds of systems architectures: cloud and fog computing (see the
reference architectures in [8]). Note that this classification is different from the
classification in Section 2.1, which was done on the basis of protocols.
So far, we have been slightly vague about the nature of the data generated by IoT
devices, and the nature of data processing. In some system architectures the data
processing is done in a large centralized fashion by cloud computers. Such a cloud centric
architecture keeps the cloud at the center, applications above it, and the network of smart
things below it [9]. Cloud computing is given primacy because it provides great
flexibility and scalability. It offers services such as the core infrastructure, platform,
software, and storage. Developers can provide their storage, software, data mining,
machine learning, and visualization tools through the cloud.
Lately, there is a move towards another system architecture, namely, fog computing [10–
12], where the sensors and network gateways do a part of the data processing and
analytics. A fog architecture [13] presents a layered approach as shown in Figure 2,
which inserts monitoring, preprocessing, storage, and security layers between the
physical and transport layers. The monitoring layer monitors power, resources, responses,
and services. The preprocessing layer performs filtering, processing, and analytics of
sensor data. The temporary storage layer provides storage functionalities such as data
replication, distribution, and storage. Finally, the security layer performs
encryption/decryption and ensures data integrity and privacy. Monitoring and
preprocessing are done on the edge of the network before sending data to the cloud.
Figure 2: Fog architecture of a smart IoT gateway.
Often the terms “fog computing” and “edge computing” are used interchangeably. The
latter term predates the former and is construed to be more generic. Fog computing, a
term originally coined by Cisco, refers to smart gateways and smart sensors, whereas
edge computing pushes intelligence slightly deeper into the devices themselves. This paradigm envisions
adding smart data preprocessing capabilities to physical devices such as motors, pumps,
or lights. The aim is to do as much of preprocessing of data as possible in these devices,
which are termed to be at the edge of the network. In terms of the system architecture, the
architectural diagram is not appreciably different from Figure 2. As a result, we do not
describe edge computing separately.
Finally, the distinction between protocol architectures and system architectures is not
very crisp. Often the protocols and the system are codesigned. We shall use the generic 5-
layer IoT protocol stack (architectural diagram presented in Figure 2) for both the fog and
cloud architectures.
2.3. Social IoT

Let us now discuss a new paradigm: social IoT (SIoT). Here, we consider social
relationships between objects the same way as humans form social relationships (see
[14]). Here are the three main facets of an SIoT system:
(i) The SIoT is navigable. We can start with one device and navigate through all the devices that are connected to it. It is easy to discover new devices and services using such a social network of IoT devices.
(ii) A notion of trustworthiness (strength of the relationship) exists between devices (similar to friends on Facebook).
(iii) We can use models similar to those used for studying human social networks to study the social networks of IoT devices.
2.3.1. Basic Components

In a typical social IoT setting, we treat the devices and services as bots that can set up
relationships among themselves and modify them over time. This allows the devices to
seamlessly cooperate with each other and achieve complex tasks.
To make such a model work, we need many interoperating components. Let us look at some of the major components in such a system.
(1) ID: we need a unique method of object identification. An ID can be assigned to an object based on traditional parameters such as the MAC ID, IPv6 ID, a universal product code, or some other custom method.
(2) Metainformation: along with an ID, we need some metainformation about the device that describes its form and operation. This is required to establish appropriate relationships with the device and also to place it appropriately in the universe of IoT devices.
(3) Security controls: this is similar to the "friend list" settings on Facebook. An owner of a device might place restrictions on the kinds of devices that can connect to it. These are typically referred to as owner controls.
(4) Service discovery: such a system is like a service cloud, where we need dedicated directories that store details of devices providing certain kinds of services. It becomes very important to keep these directories up to date so that devices can learn about other devices.
(5) Relationship management: this module manages relationships with other devices. It also stores the types of devices that a given device should try to connect with, based on the type of services provided. For example, it makes sense for a light controller to form a relationship with a light sensor.
(6) Service composition: this module takes the social IoT model to a new level. The ultimate goal of having such a system is to provide better integrated services to users. For example, if a person has a power sensor with her air conditioner and this device establishes a relationship with an analytics engine, then the ensemble can yield a lot of data about the usage patterns of the air conditioner. If the social model is more expansive and there are many more devices, then it is possible to compare the data with the usage patterns of other users and derive even more meaningful insights. For example, users can be told that they are the largest energy consumers in their community or among their Facebook friends.
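Two of the components described above, service discovery and relationship management, can be sketched in a few lines of Python. This is a toy illustration only; the device names and the API are hypothetical, not part of any SIoT standard.

```python
# Toy sketch of a service-discovery directory (component 4) and
# relationship management (component 5). All names are hypothetical.

directory = {}      # service type -> set of device IDs providing it
relationships = []  # (consumer, provider) pairs

def register(device_id, service_type):
    # a device advertises a service it provides
    directory.setdefault(service_type, set()).add(device_id)

def relate(device_id, wanted_service):
    # connect a device to every current provider of a service it needs
    for provider in sorted(directory.get(wanted_service, set())):
        relationships.append((device_id, provider))

register("light-sensor-1", "light-sensing")
relate("light-controller-1", "light-sensing")  # the light-controller example
```

Keeping the directory up to date is what allows `relate` to discover new providers as they join the network.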
2.3.2. Representative Architecture

Most architectures proposed for the SIoT have a server side architecture as well. The
server connects to all the interconnected components, aggregates (composes) the services,
and acts as a single point of service for users.
The server side architecture typically has three layers. The first is the base layer that
contains a database that stores details of all the devices, their attributes, metainformation,
and their relationships. The second layer (Component layer) contains code to interact
with the devices, query their status, and use a subset of them to effect a service. The
topmost layer is the application layer, which provides services to the users.
On the device (object) side, we broadly have two layers. The first is the object layer,
which allows a device to connect to other devices, talk to them (via standardized
protocols), and exchange information. The object layer passes information to
the social layer. The social layer manages the execution of users’ applications, executes
queries, and interacts with the application layer on the server.

3. Taxonomy
Let us now propose a taxonomy for research in IoT technologies (see Figure 3). Our
taxonomy is based on the architectural elements of IoT as presented in Section 2.
Figure 3: Taxonomy of research in IoT technologies.

The first architectural component of IoT is the perception layer. It collects data using
sensors, which are the most important drivers of the Internet of Things [15]. There are
various types of sensors used in diverse IoT applications. The most generic sensor
available today is the smartphone. The smartphone itself has many types of sensors
embedded in it [16] such as the location sensor (GPS), movement sensors (accelerometer,
gyroscope), camera, light sensor, microphone, proximity sensor, and magnetometer.
These are being heavily used in different IoT applications. Many other types of sensors
are beginning to be used such as sensors for measuring temperature, pressure, humidity,
medical parameters of the body, chemical and biochemical substances, and neural
signals. A class of sensors that stands out is infrared (IR) sensors, which predate
smartphones. They are now widely used in many IoT applications: IR cameras, motion
detectors, measuring the distance to nearby objects, detecting the presence of smoke and
gases, and moisture sensing. We shall discuss the different types of sensors used in IoT applications in
Section 5.
Subsequently, we shall discuss related work in data preprocessing. Such applications
(also known as fog computing applications) mainly filter and summarize data before
sending it on the network. Such units typically have a small amount of temporary storage,
a small processing unit, and some security features.
The next architectural component that we shall discuss is communication. We shall
discuss related work (in Section 7) on different communication technologies used for the
Internet of Things. Different entities communicate over the network [17–19] using a
diverse set of protocols and standards. The most common communication technologies
for short range low power communication protocols are RFID (Radio Frequency
Identification) and NFC (Near Field Communication). For the medium range, they are
Bluetooth, Zigbee, and WiFi. Communication in the IoT world requires special
networking protocols and mechanisms. Therefore, new mechanisms and protocols have
been proposed and implemented for each layer of the networking stack, according to the
requirements imposed by IoT devices.
We shall subsequently look at two kinds of software components: middleware and
applications. The middleware creates an abstraction for the programmer such that the
details of the hardware can be hidden. This enhances interoperability of smart things and
makes it easy to offer different kinds of services [20]. There are many commercial and
open source offerings for providing middleware services to IoT devices. Some examples
are OpenIoT [21], MiddleWhere [22], Hydra [23], FiWare [24], and Oracle Fusion
Middleware. Finally, we discuss the applications of IoT in Section 9. We primarily focus
on home automation, ambient assisted living, health and fitness, smart vehicular systems,
smart cities, smart environments, smart grids, social life, and entertainment.

4. Related Survey Papers


Our taxonomy describes the technologies in the IoT domain and is classified on the basis
of architectural layers. We have tried to cover all subareas and recent technologies in our
taxonomy. There have been many survey papers on the Internet of Things in the past.
Table 1 shows how our survey is different from other highly cited surveys in the
literature.
Table 1: Comparison with other surveys on the basis of topics covered.

Let us first consider our novel contributions. Our paper looks at each and every layer in
the IoT stack, and as a result the presentation is also far more balanced. A novel addition
in our survey is that we have discussed different IoT architectures. This has not been
discussed in prior surveys on the Internet of Things. The architecture section also
considers newer paradigms such as fog computing, which have also hitherto not been
considered. Moreover, our survey nicely categorizes technologies based on the
architectural layer that they belong to. We have also thoroughly categorized the network
layer and tried to consolidate almost all the technologies that are used in IoT systems.
Such kind of a thorough categorization and presentation of technologies is novel to the
best of our knowledge.
Along with these novel contributions our survey is far more comprehensive, detailed, and
exhaustive as compared to other surveys in the area. Most of the other surveys look at
only one or two types of sensors, whereas we describe 9 types of sensors with many
examples. Other surveys are also fairly restricted when they discuss communication
technologies and applications. We have discussed many types of middleware
technologies as well. Prior works have not given middleware technologies this level of
attention. We cover 10 communication technologies in detail and consider a large variety
of applications encompassing smart homes, health care, logistics, transport, agriculture,
environment, smart cities, and green energy. No other survey in this area profiles so
many technologies, applications, and use cases.

5. Sensors and Actuators


All IoT applications need to have one or more sensors to collect data from the
environment. Sensors are essential components of smart objects. One of the most
important aspects of the Internet of Things is context awareness, which is not possible
without sensor technology. IoT sensors are mostly small in size, low in cost, and power
efficient. They are constrained by factors such as battery capacity and ease of
deployment. Schmidt and Van Laerhoven [25] provide an overview of various types of
sensors used for building smart applications.
5.1. Mobile Phone Based Sensors

First of all, let us look at the mobile phone, which is ubiquitous and has many types of
sensors embedded in it. In particular, the smartphone is a very handy and user-friendly
device that has a host of built in communication and data processing features. With the
increasing popularity of smartphones among people, researchers are showing interest in
building smart IoT solutions using smartphones because of the embedded sensors
[16, 26]. Some additional sensors can also be used depending upon the requirements.
Applications can be built on the smartphone that use sensor data to produce meaningful
results. Some of the sensors inside a modern smartphone are as follows.
(1) The accelerometer senses the motion and acceleration of a mobile phone. It typically measures changes in the velocity of the smartphone in three dimensions. There are many types of accelerometers [27]. In a mechanical accelerometer, we have a seismic mass in a housing, which is tied to the housing with a spring. The mass takes time to move and is left behind as the housing moves, so the force in the spring can be correlated with the acceleration. In a capacitive accelerometer, capacitive plates are used with the same setup. With a change in velocity, the mass pushes the capacitive plates together, thus changing the capacitance. The rate of change of capacitance is then converted into acceleration. In a piezoelectric accelerometer, piezoelectric crystals are used, which generate an electric voltage when squeezed. The changes in voltage can be translated into acceleration. The data patterns captured by the accelerometer can be used to detect physical activities of the user such as running, walking, and bicycling.
(2) The gyroscope detects the orientation of the phone very precisely. Orientation is measured using capacitive changes when a seismic mass moves in a particular direction.
(3) The camera and microphone are very powerful sensors since they capture visual and audio information, which can then be analyzed and processed to detect various types of contextual information. For example, we can infer a user's current environment and the interactions that she is having. To make sense of the audio data, technologies such as voice recognition and acoustic feature extraction can be exploited.
(4) The magnetometer detects magnetic fields. It can be used as a digital compass and in applications that detect the presence of metals.
(5) The GPS (Global Positioning System) detects the location of the phone, which is one of the most important pieces of contextual information for smart applications. The location is detected using the principle of trilateration [28]. The distance is measured from three or more satellites (or mobile phone towers in the case of A-GPS) and the coordinates are computed.
(6) The light sensor detects the intensity of ambient light. It can be used for setting the brightness of the screen and in other applications where some action is to be taken depending on the intensity of ambient light. For example, we can control the lights in a room.
(7) The proximity sensor uses an infrared (IR) LED, which emits IR rays. These rays bounce back when they strike an object. Based on the round-trip time, we can calculate the distance. In this way, the distance to different objects from the phone can be measured. For example, we can use it to determine when the phone is close to the face while talking. It can also be used in applications where an event must be triggered when an object approaches the phone.
(8) Some smartphones such as Samsung's Galaxy S4 also have a thermometer, barometer, and humidity sensor to measure the temperature, atmospheric pressure, and humidity, respectively.
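The trilateration principle used by GPS can be illustrated in two dimensions: given three anchor points and the measured distance to each, subtracting the circle equations pairwise yields a linear system for the unknown position. The sketch below is illustrative only; real receivers work in three dimensions and must also correct for clock error and noise.

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three anchors and the distances to each.
    Subtracting the circle equations pairwise gives a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # zero if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Anchors at (0,0), (4,0), (0,4); the true position is (1,1)
pos = trilaterate((0, 0), math.sqrt(2), (4, 0), math.sqrt(10), (0, 4), math.sqrt(10))
```

The anchors must not be collinear, otherwise the determinant is zero and the position is not uniquely determined.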
We have studied many smart applications that use sensor data collected from
smartphones. For example, activity detection [29] is achieved by applying machine
learning algorithms to the data collected by smartphone sensors. It detects activities such
as running, going up and down stairs, walking, driving, and cycling. The application is
trained with patterns of data using data sets recorded by sensors when these activities are
being performed.
Many health and fitness applications are being built to keep track of a person’s health
continuously using smartphones. They keep track of users’ physical activities, diet,
exercises, and lifestyle to determine the fitness level and give suggestions to the user
accordingly. Wang et al. [30] describe a mobile application that is based completely on a
smartphone. They use it to assess the overall mental health and performance of a college
student. To track the location and activities in which the student is involved, activity
recognition (accelerometer) and GPS data are used. To keep a check on how much the
student sleeps, the accelerometer and light sensors are used. For social life and
conversations, audio data from a microphone is used. The application also conducts quick
questionnaires with the students to know about their mood. All this data can be used to
assess the stress levels, social life, behavior, and exercise patterns of a student.
Another application by McClernon and Choudhury [31] detects when the user is going to
smoke using context information such as the presence of other smokers, location, and
associated activities. The sensors provide information related to the user’s movement,
location, visual images, and surrounding sounds. To summarize, smartphone sensors are
being used to study different kinds of human behavior (see [32]) and to improve the
quality of human life.
5.2. Medical Sensors

The Internet of Things can be really beneficial for health care applications. We can use
sensors, which can measure and monitor various medical parameters in the human body
[33]. These applications can aim at monitoring a patient’s health when they are not in
hospital or when they are alone. Subsequently, they can provide real time feedback to the
doctor, relatives, or the patient. McGrath and Scanaill [34] have described in detail the
different sensors that can be worn on the body for monitoring a person’s health.
There are many wearable sensing devices available in the market. They are equipped with
medical sensors that are capable of measuring different parameters such as the heart rate,
pulse, blood pressure, body temperature, respiration rate, and blood glucose levels [35].
These wearables include smart watches, wristbands, monitoring patches, and smart
textiles.
Moreover, smart watches and fitness trackers are becoming fairly popular in the market
as companies such as Apple, Samsung, and Sony are coming up with very innovative
features. For example, a smart watch includes features such as connectivity with a
smartphone, sensors such as an accelerometer, and a heart rate monitor (see Figure 4).
Figure 4: Smart watches and fitness trackers
(source: https://www.pebble.com/ and http://www.fitbit.com/).

Another novel IoT device with a lot of promise is the monitoring patch, which is pasted
on the skin. Monitoring patches are like tattoos: they are stretchable, disposable, and
very cheap. These patches are meant to be worn by the patient for a
few days to monitor a vital health parameter continuously [15]. All the electronic
components are embedded in these rubbery structures. They can even transmit the sensed
data wirelessly. Just like a tattoo, these patches can be applied on the skin as shown in
Figure 5. One of the most common applications of such patches is to monitor blood
pressure.
Figure 5: Embedded skin patches (source: MC10 Electronics).

A very important consideration here is the context [34]. The data collected by the
medical sensors must be combined with contextual information such as physical activity.
For example, the heart rate depends on the context: it increases when we exercise, in
which case an elevated reading does not indicate an abnormality. Therefore, we need to
combine data from different sensors to make the correct inference.
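A minimal sketch of such context-dependent inference is shown below. The thresholds are invented for illustration; a real system would use personalized, clinically validated baselines.

```python
def heart_rate_alert(bpm, activity):
    # Illustrative upper bounds per activity context; the numbers are
    # made up for this sketch, not clinical guidance.
    limits = {"resting": 100, "walking": 130, "exercising": 180}
    return bpm > limits.get(activity, 100)

# 150 bpm is alarming at rest but expected during exercise
alerts = (heart_rate_alert(150, "resting"), heart_rate_alert(150, "exercising"))
```

The same reading thus leads to different inferences depending on the activity context supplied by other sensors, such as the accelerometer.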
5.3. Neural Sensors
Today, it is possible to understand neural signals in the brain, infer the state of the brain,
and train it for better attention and focus. This is known as neurofeedback [36] (see
Figure 6). The technology used for reading brain signals is called EEG
(Electroencephalography) or a brain computer interface. The neurons inside the brain
communicate electronically and create an electric field, which can be measured from
outside in terms of frequencies. Brain waves can be categorized into alpha, beta, gamma,
theta, and delta waves depending upon the frequency.
Figure 6: Brain sensing headband with embedded neurosensors
(source: http://www.choosemuse.com/).

Based on the type of wave, it can be inferred whether the brain is calm or wandering in
thoughts. This type of neurofeedback can be obtained in real time and can be used to train
the brain to focus, pay better attention towards things, manage stress, and have better
mental well-being.
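A toy classifier for the frequency bands mentioned above might look as follows. The boundary values are commonly cited ranges, but the exact cutoffs vary across the literature.

```python
def eeg_band(freq_hz):
    # Commonly cited EEG band ranges in Hz; boundaries differ slightly by source
    bands = [("delta", 0.5, 4), ("theta", 4, 8), ("alpha", 8, 13),
             ("beta", 13, 30), ("gamma", 30, 100)]
    for name, lo, hi in bands:
        if lo <= freq_hz < hi:
            return name
    return "unclassified"

# A calm, relaxed state is typically associated with alpha waves (~10 Hz)
band = eeg_band(10)
```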
5.4. Environmental and Chemical Sensors

Environmental sensors are used to sense parameters in the physical environment such as
temperature, humidity, pressure, water pollution, and air pollution. Parameters such as the
temperature and pressure can be measured with a thermometer and barometer. Air quality
can be measured with sensors, which sense the presence of gases and other particulate
matter in the air (refer to Sekhar et al. [37] for more details).
Chemical sensors are used to detect chemical and biochemical substances. These sensors
consist of a recognition element and a transducer. The electronic nose (e-nose) and
electronic tongue (e-tongue) are technologies that can be used to sense chemicals on the
basis of odor and taste, respectively [38]. The e-nose and e-tongue consist of an array of
chemical sensors coupled with advanced pattern recognition software. The sensors inside
the e-nose and e-tongue produce complex data, which is then analyzed through pattern
recognition to identify the stimulus.
These sensors can be used for monitoring pollution levels in smart cities [39], keeping a
check on food quality in smart kitchens, and testing food and agricultural products in
supply chain applications.
5.5. Radio Frequency Identification (RFID)

RFID is an identification technology in which an RFID tag (a small chip with an antenna)
carries data, which is read by an RFID reader. The tag transmits the data stored in it via
radio waves. It is similar to bar code technology. But unlike a traditional bar code, it does
not require line of sight communication between the tag and the reader and can identify
itself from a distance even without a human operator. The range of RFID varies with the
frequency. It can go up to hundreds of meters.
RFID tags are of two types: active and passive. Active tags have a power source and
passive tags do not have any power source. Passive tags draw power from the
electromagnetic waves emitted by the reader and are thus cheap and have a long lifetime
[40, 41].
There are two types of RFID technologies: near and far [40]. A near RFID reader uses a
coil through which we pass alternating current and generate a magnetic field. The tag has
a smaller coil, which generates a potential due to the ambient changes in the magnetic
field. This voltage is then coupled with a capacitor to accumulate a charge, which then
powers up the tag chip. The tag can then produce a small magnetic field that encodes the
signal to be transmitted, and this can be picked up by the reader.
In far RFID, there is a dipole antenna in the reader, which propagates EM waves. The tag
also has a dipole antenna on which an alternating potential difference appears and it is
powered up. It can then use this power to transmit messages.
RFID technology is being used in various applications such as supply chain management,
access control, identity authentication, and object tracking. The RFID tag is attached to
the object to be tracked and the reader detects and records its presence when the object
passes by it. In this manner, object movement can be tracked and RFID can serve as a
search engine for smart things.
For access control, an RFID tag is attached to the authorized object. For example, small
chips are glued to the front of vehicles. When the car reaches a barricade on which there
is a reader, it reads the tag data and decides whether it is an authorized car. If yes, it
opens automatically. RFID cards are issued to people, who can then be identified by an
RFID reader and given access accordingly.
The low level data collected from the RFID tags can be transformed into higher level
insights in IoT applications [42]. There are many user level tools available, in which all
the data collected by particular RFID readers and data associated with the RFID tags can
be managed. The high level data can be used to draw inferences and take further action.
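As an illustration of turning low-level reads into higher-level insight, the sketch below collapses raw (tag, reader location, timestamp) read events into a per-object movement history. The event format and names are assumptions made for this sketch.

```python
from collections import defaultdict

def track_movements(read_events):
    """Collapse raw RFID reads (tag_id, reader_location, timestamp) into a
    per-object movement history, dropping repeated reads at the same
    location (a common preprocessing step)."""
    history = defaultdict(list)
    for tag_id, location, ts in sorted(read_events, key=lambda e: e[2]):
        if not history[tag_id] or history[tag_id][-1][0] != location:
            history[tag_id].append((location, ts))
    return dict(history)

reads = [("pallet-7", "dock", 1), ("pallet-7", "dock", 2),
         ("pallet-7", "warehouse", 5)]
trail = track_movements(reads)
```

The resulting trail is the kind of higher-level data that a supply-chain application would query, rather than the raw stream of reader events.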
5.6. Actuators
Let us look at some examples of actuators that are used in the Internet of Things. An
actuator is a device, which can effect a change in the environment by converting
electrical energy into some form of useful energy. Some examples are heating or cooling
elements, speakers, lights, displays, and motors.
The actuators, which induce motion, can be classified into three categories, namely,
electrical, hydraulic, and pneumatic actuators depending on their operation. Hydraulic
actuators facilitate mechanical motion using fluid or hydraulic power. Pneumatic
actuators use the pressure of compressed air and electrical ones use electrical energy.
As an example, we can consider a smart home system, which consists of many sensors
and actuators. The actuators are used to lock/unlock the doors, switch on/off the lights or
other electrical appliances, alert users of any threats through alarms or notifications, and
control the temperature of a home (via a thermostat).
A sophisticated example of an actuator used in IoT is a digital finger, which is used to
turn on/off the switches (or anything which requires small motion) and is controlled
wirelessly.
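The thermostat mentioned above is a simple sensing-plus-actuation loop. A minimal control sketch follows; the hysteresis value and command names are illustrative, not from any product.

```python
def thermostat(current_temp, setpoint, hysteresis=0.5):
    # Decide the actuator command from the sensed temperature. The
    # hysteresis band avoids rapid on/off switching near the setpoint.
    if current_temp < setpoint - hysteresis:
        return "heat_on"
    if current_temp > setpoint + hysteresis:
        return "heat_off"
    return "hold"

commands = [thermostat(t, 21.0) for t in (18.0, 21.0, 23.0)]
```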

6. Preprocessing
As smart things collect huge amounts of sensor data, compute and storage resources are
required to analyze, store, and process this data. The most common compute and storage
resources are cloud based because the cloud offers massive data handling, scalability, and
flexibility. But this will not be sufficient to meet the requirements of many IoT
applications for the following reasons [43].
(1) Mobility: most smart devices are mobile. Their changing location makes it difficult to communicate with the cloud data center because of changing network conditions across different locations.
(2) Reliable and real-time actuation: communicating with the cloud and getting back responses takes time. Latency-sensitive applications, which need real-time responses, may not be feasible with this model. Also, the communication may be lossy due to wireless links, which can lead to unreliable data.
(3) Scalability: more devices mean more requests to the cloud, thereby increasing the latency.
(4) Power constraints: communication consumes a lot of power, and IoT devices are battery powered. They thus cannot afford to communicate all the time.
To solve the problem of mobility, researchers have proposed mobile cloud computing
(MCC) [44]. But there are still problems associated with latency and power. MCC also
suffers from mobility-related problems such as frequently changing network conditions,
which cause signal fading and service degradation.
As a solution to these problems, we can bring some compute and storage resources to the
edge of the network instead of relying on the cloud for everything. This concept is known
as fog computing [11, 45] (also see Section 2.2). The fog can be viewed as a cloud,
which is close to the ground. Data can be stored, processed, filtered, and analyzed on the
edge of the network before sending it to the cloud through expensive communication
media. The fog and cloud paradigms go together. Both of them are required for the
optimal performance of IoT applications. A smart gateway [13] can be employed
between underlying networks and the cloud to realize fog computing as shown in
Figure 7.
Figure 7: Smart gateway for preprocessing.

The features of fog computing [11] are as follows:
(1) Low latency: less time is required to access computing and storage resources on fog nodes (smart gateways).
(2) Location awareness: as the fog is located at the edge of the network, it is aware of the location of the applications and their context. This is beneficial because context awareness is an important feature of IoT applications.
(3) Distributed nodes: fog nodes are distributed, unlike centralized cloud nodes. Multiple fog nodes need to be deployed in distributed geographical areas in order to provide services to mobile devices in those areas. For example, in vehicular networks, deploying fog nodes along highways can provide low-latency data/video streaming to vehicles.
(4) Mobility: the fog supports mobility, as smart devices can directly communicate with smart gateways present in their proximity.
(5) Real-time response: fog nodes can give an immediate response, unlike the cloud, which has a much greater latency.
(6) Interaction with the cloud: fog nodes can further interact with the cloud and communicate only the data that is required to be sent to the cloud.
The tasks performed by a smart gateway [46] are collecting sensor data,
preprocessing and filtering collected data, providing compute, storage, and
networking services to IoT devices, communicating with the cloud and sending
only necessary data, monitoring the power consumption of IoT devices,
monitoring the activities and services of IoT devices, and ensuring the
security and privacy of data. Some applications of fog computing are as
follows [10, 11]:
(1) Smart vehicular networks: smart traffic lights are deployed as smart
gateways to locally detect pedestrians and vehicles through sensors, calculate
their distance and speed, and finally infer traffic conditions. This is used
to warn oncoming vehicles. These sensors also interact with neighboring smart
traffic lights to perform traffic management tasks. For example, if sensors
detect an approaching ambulance, they can change the traffic lights to let the
ambulance pass first and inform other lights to do the same. The data
collected by these smart traffic lights is analyzed locally in real time to
serve the real time needs of traffic management. Further, data from multiple
gateways is combined and sent to the cloud for global analysis of traffic in
the city.
(2) Smart grid: the smart electrical grid facilitates load balancing of energy
on the basis of usage and availability, for example by switching automatically
to alternative sources of energy such as solar and wind power. This balancing
can be done at the edge of the network using smart meters or microgrids
connected by smart gateways. These gateways can analyze and process data,
project future energy demand, calculate the availability and price of power,
and supply power from both conventional and alternative sources to consumers.

7. Communication
As the Internet of Things grows very rapidly, a large number of heterogeneous
smart devices are connecting to the Internet. IoT devices are battery powered,
with minimal compute and storage resources. Because of their constrained
nature, various communication challenges are involved, which are as follows
[19]:
(1) Addressing and identification: since millions of smart things will be
connected to the Internet, each will have to be identified through a unique
address, on the basis of which they communicate with each other. For this, we
need a large addressing space and a unique address for each smart object.
(2) Low power communication: communicating data between devices is a power
consuming task, especially over wireless links. Therefore, we need a solution
that facilitates communication with low power consumption.
(3) Routing protocols with low memory requirements and efficient
communication patterns.
(4) High speed and nonlossy communication.
(5) Mobility of smart things.
IoT devices typically connect to the Internet through the IP (Internet Protocol) stack. This
stack is very complex and demands a large amount of power and memory from the
connecting devices. The IoT devices can also connect locally through non-IP networks,
which consume less power, and connect to the Internet via a smart gateway. Non-IP
communication channels such as Bluetooth, RFID, and NFC are fairly popular but are
limited in their range (up to a few meters). Therefore, their applications are limited to
small personal area networks. Personal area networks (PAN) are being widely used in
IoT applications such as wearables connected to smartphones. For increasing the range of
such local networks, there was a need to modify the IP stack so as to facilitate low power
communication using the IP stack. One of the solutions is 6LoWPAN, which incorporates
IPv6 with low power personal area networks. The range of a PAN with 6LoWPAN is
similar to local area networks, and the power consumption is much lower.
The leading communication technologies used in the IoT world are IEEE 802.15.4, low
power WiFi, 6LoWPAN, RFID, NFC, Sigfox, LoraWAN, and other proprietary protocols
for wireless networks.
7.1. Near Field Communication (NFC)

Near Field Communication [47–49] is a very short range wireless communication
technology, through which mobile devices can interact with each other over a
distance of a few centimeters only. All types of data can be transferred
between two NFC enabled
devices in seconds by bringing them close to each other. This technology is based on
RFID. It uses variations in the magnetic field to communicate data between two NFC
enabled devices. NFC operates over a frequency band of 13.56 MHz, which is the same
as high frequency RFID. There are two modes of operation: active and passive. In the
active mode, both the devices generate magnetic fields, while in the passive mode, only
one device generates the field and the other uses load modulation to transfer the data. The
passive mode is useful in battery powered devices to optimize energy use. One
benefit of the close proximity required between devices is that it suits
secure transactions such as payments. Finally, note that unlike RFID, NFC
supports two-way communication. Almost all smartphones in the market today
are NFC enabled.
7.2. Wireless Sensor Networks (WSN) Based on IP for Smart Objects

Many times, data from a single sensor is not useful in monitoring large areas and
complex activities. Different sensor nodes need to interact with each other wirelessly.
The disadvantage of non-IP technologies such as RFID, NFC, and Bluetooth is that their
range is very small. So, they cannot be used in many applications, where a large area
needs to be monitored through many sensor nodes deployed in diverse locations. A
wireless sensor network (WSN) consists of tens to thousands of sensor nodes connected
using wireless technologies. They collect data about the environment and communicate it
to gateway devices that relay the information to the cloud over the Internet. The
communication between nodes in a WSN may be direct or multihop. The sensor nodes
are of a constrained nature, but gateway nodes have sufficient power and processing
resources. The popular network topologies used in WSNs are star, mesh, and
hybrid networks. Most communication in a WSN is based on the IEEE 802.15.4
standard (discussed in Section 7.3). There are clearly a lot of protocols that
can be used in IoT scenarios. Let us discuss the design of a typical IoT
network protocol stack with the most popular alternatives.
7.3. IoT Network Protocol Stack
The Internet Engineering Task Force (IETF) has developed alternative protocols for
communication between IoT devices using IP because IP is a flexible and reliable
standard [50, 51]. The Internet Protocol for Smart Objects (IPSO) Alliance has published
various white papers describing alternative protocols and standards for the layers of the
IP stack and an additional adaptation layer, which is used for communication [51–54]
between smart objects.
(1) Physical and MAC Layer (IEEE 802.15.4). The IEEE 802.15.4 protocol is designed
for enabling communication between compact and inexpensive low power embedded
devices that need a long battery life. It defines standards and protocols for the physical
and link (MAC) layer of the IP stack. It supports low power communication along with
low cost and short range communication. In the case of such resource constrained
environments, we need a small frame size, low bandwidth, and low transmit power.
Transmission requires very little power (maximum one milliwatt), which is only one
percent of that used in WiFi or cellular networks. This limits the range of communication.
Because of the limited range, the devices have to operate cooperatively in
order to enable multihop routing over longer distances. Furthermore, the
packet size is limited to 127 bytes, and the rate of communication is limited
to 250 kbps. The coding scheme in IEEE
802.15.4 has built in redundancy, which makes the communication robust, allows us to
detect losses, and enables the retransmission of lost packets. The protocol also supports
short 16-bit link addresses to decrease the size of the header, communication overheads,
and memory requirements [55].
Readers can refer to the survey by Vasseur et al. [54] for more information on different
physical and link layer technologies for communication between smart objects.
(2) Adaptation Layer. IPv6 is considered the best protocol for communication in the IoT
domain because of its scalability and stability. Such bulky IP protocols were initially not
thought to be suitable for communication in scenarios with low power wireless links such
as IEEE 802.15.4.
6LoWPAN, an acronym for IPv6 over low power wireless personal area networks, is a
very popular standard for wireless communication. It enables communication using IPv6
over the IEEE 802.15.4 [52] protocol. This standard defines an adaptation layer between
the 802.15.4 link layer and the transport layer. 6LoWPAN devices can communicate with
all other IP based devices on the Internet. IPv6 was chosen because of its
large addressing space. 6LoWPAN networks connect to the Internet via a
gateway (WiFi or Ethernet), which also has protocol support for conversion between
IPv4 and IPv6 as today’s deployed Internet is mostly IPv4. IPv6 headers are not small
enough to fit within the small 127 byte MTU of the 802.15.4 standard. Hence, squeezing
and fragmenting the packets to carry only the essential information is an optimization that
the adaptation layer performs.
Specifically, the adaptation layer performs the following three optimizations
in order to reduce communication overhead [55]:
(i) Header compression: 6LoWPAN defines header compression of IPv6 packets to
decrease the overhead of IPv6. Some fields are deleted because they can be
derived from link level information or can be shared across packets.
(ii) Fragmentation: the minimum MTU (maximum transmission unit) size of IPv6
is 1280 bytes, whereas the maximum frame size in IEEE 802.15.4 is 127 bytes.
Therefore, the adaptation layer fragments the IPv6 packet.
(iii) Link layer forwarding: 6LoWPAN also supports mesh under routing, which
is done at the link layer using link level short addresses instead of in the
network layer. This feature can be used to communicate within a 6LoWPAN
network.
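The fragmentation optimization can be sketched as follows. The fragment and
reassemble helpers and the flat 5-byte per-frame header reservation are
simplifying assumptions of this sketch; the actual 6LoWPAN fragment headers
(RFC 4944) have a different, variable layout.

```python
def fragment(packet: bytes, mtu: int = 127, header: int = 5) -> list:
    """Split an IPv6 packet into link-layer sized fragments, reserving
    `header` bytes per frame for (simplified) fragmentation headers."""
    payload = mtu - header
    return [packet[i:i + payload] for i in range(0, len(packet), payload)]

def reassemble(frames: list) -> bytes:
    """Inverse operation performed by the receiving adaptation layer."""
    return b"".join(frames)

pkt = bytes(range(256)) * 5          # a 1280-byte packet (the IPv6 minimum MTU)
frames = fragment(pkt)
print(len(frames))                   # 11 frames of at most 122 payload bytes
assert reassemble(frames) == pkt     # lossless reassembly
```

Even the smallest legal IPv6 packet thus costs eleven 802.15.4 frames, which
is why the header compression described above matters so much.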
(3) Network Layer. The network layer is responsible for routing the packets received
from the transport layer. The IETF Routing over Low Power and Lossy Networks
(ROLL) working group has developed a routing protocol (RPL) for Low Power and
Lossy Networks (LLNs) [53].
For such networks, RPL is an open routing protocol, based on distance vectors. It
describes how a destination oriented directed acyclic graph (DODAG) is built with the
nodes after they exchange distance vectors. A set of constraints and an
objective function are used to build the graph with the best path [53]. The
objective function and constraints
may differ with respect to their requirements. For example, constraints can be to avoid
battery powered nodes or to prefer encrypted links. The objective function can aim to
minimize the latency or the expected number of packets that need to be sent.
The construction of this graph starts from the root node. The root sends
messages to neighboring nodes, which process the message and decide whether
or not to join depending upon the constraints and the objective function.
Subsequently, they forward the message to their neighbors. In this manner, the
message travels to the leaf nodes and a graph is formed. Now all the nodes in
the graph can send packets upwards, hop by hop, to the root. We can realize
point to point routing as follows: we send packets up to a common ancestor,
from which they travel downwards (towards the leaves) to reach the
destination.
To manage the memory requirements of nodes, nodes are classified into storing and
nonstoring nodes depending upon their ability to store routing information. When nodes
are in a nonstoring mode and a downward path is being constructed, the route
information is attached to the incoming message and forwarded further till the root. The
root receives the whole path in the message and sends a data packet along with the path
message to the destination hop by hop. But there is a trade-off here because nonstoring
nodes need more power and bandwidth to send additional route information as they do
not have the memory to store routing tables.
(4) Transport Layer. TCP is not a good option for communication in low power
environments as it has a large overhead owing to the fact that it is a connection oriented
protocol. Therefore, UDP is preferred because it is a connectionless protocol and has low
overhead.
(5) Application Layer. The application layer is responsible for data formatting and
presentation. The application layer in the Internet is typically based on HTTP. However,
HTTP is not suitable in resource constrained environments because it is fairly verbose in
nature and thus incurs a large parsing overhead. Many alternate protocols have been
developed for IoT environments such as CoAP (Constrained Application Protocol) and
MQTT (Message Queue Telemetry Transport).
(a) Constrained Application Protocol: CoAP can be thought of as an alternative
to HTTP. It is used in many IoT applications [56, 57]. Unlike HTTP, it
incorporates optimizations for constrained application environments [50]. It
uses the EXI (Efficient XML Interchange) data format, a binary format that is
far more efficient in terms of space than plain text HTML/XML. Other supported
features are built in header compression, resource discovery,
autoconfiguration, asynchronous message exchange, congestion control, and
support for multicast messages. There are four types of messages in CoAP:
nonconfirmable, confirmable, reset (nack), and acknowledgement. For reliable
transmission over UDP, confirmable messages are used [58]. The response can be
piggybacked in the acknowledgement itself. Furthermore, CoAP uses DTLS
(Datagram Transport Layer Security) for security purposes.
(b) Message Queue Telemetry Transport: MQTT is a publish/subscribe protocol
that runs over TCP. It was developed by IBM [59] primarily as a client/server
protocol. The clients are publishers/subscribers, and the server acts as a
broker to which clients connect through TCP. Clients can publish or subscribe
to a topic. This communication takes place through the broker, whose job is to
coordinate subscriptions and to authenticate the clients for security. MQTT is
a lightweight protocol, which makes it suitable for IoT applications. However,
because it runs over TCP, it cannot be used with all types of IoT
applications. Moreover, it uses text for topic names, which increases its
overhead.
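The role of CoAP's confirmable messages can be sketched as a retransmission
loop over an unreliable channel. The send_confirmable function and the
transmit callback are hypothetical names for this sketch; real CoAP
additionally applies exponential backoff between retries.

```python
def send_confirmable(message, transmit, max_retries=4):
    """Keep retransmitting a confirmable message over an unreliable
    (UDP-like) channel until an acknowledgement arrives or the retry
    budget is exhausted. `transmit` returns True when the ACK came back."""
    for attempt in range(1 + max_retries):
        if transmit(message):
            return attempt  # retransmissions needed before the ACK
    raise TimeoutError("no acknowledgement received")

# Deterministic lossy channel: the first two transmissions are lost.
outcomes = iter([False, False, True])
retries = send_confirmable(b"GET /temp", lambda msg: next(outcomes))
print(retries)  # 2
```

This is how reliability is layered on top of a connectionless transport:
the sender, not the transport, tracks outstanding confirmable messages.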
MQTT-S/MQTT-SN is an extension of MQTT [60] designed for low power and low
cost devices. It is based on MQTT but adds some optimizations for WSNs as
follows [61]. Topic names are replaced by topic IDs, which reduces the
transmission overhead. Topics do not need registration as they are
preregistered. Messages are also split so that only the necessary information
is sent. Further, for power conservation, there is an offline procedure for
clients in a sleep state: messages can be buffered and read by the clients
when they wake up. Clients connect to the broker through a gateway device
that resides within the sensor network.
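MQTT's publish/subscribe model (shared by MQTT-SN) can be sketched with a toy
in-memory broker. There is no networking, TCP, or QoS here, and the Broker
class is an invented stand-in; the point is only that publishers and
subscribers are decoupled by topic.

```python
from collections import defaultdict

class Broker:
    """Toy broker: routes published messages to topic subscribers."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # The broker, not the publisher, decides who receives the message.
        for cb in self.subscribers[topic]:
            cb(topic, payload)

broker = Broker()
received = []
broker.subscribe("home/temp", lambda t, p: received.append((t, p)))
broker.publish("home/temp", 21.5)    # delivered to the one subscriber
broker.publish("home/humidity", 40)  # no subscribers, silently dropped
print(received)  # [('home/temp', 21.5)]
```

Because publisher and subscriber never address each other directly, either
side can sleep, move, or be replaced without the other noticing, which is what
makes this pattern attractive for constrained IoT devices.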
7.4. Bluetooth Low Energy (BLE)

Bluetooth Low Energy, also known as “Bluetooth Smart,” was developed by the
Bluetooth Special Interest Group. It has a relatively shorter range and consumes lower
energy as compared to competing protocols. The BLE protocol stack is similar to the
stack used in classic Bluetooth technology. It has two parts: controller and host. The
physical and link layer are implemented in the controller. The controller is typically a
SOC (System on Chip) with a radio. The functionalities of upper layers are included in
the host [62]. BLE is not compatible with classic Bluetooth. Let us look at the differences
between classic Bluetooth and BLE [63, 64].
The main difference is that BLE does not support data streaming. Instead, it supports
quick transfer of small packets of data (packet size is small) with a data rate of 1 Mbps.
There are two types of devices in BLE: master and slave. The master acts as a central
device that can connect to various slaves. Consider an IoT scenario where a
phone or PC serves as the master and devices such as a thermostat, fitness
tracker, smart watch, or any monitoring device act as slaves. In such cases,
slaves must be very power
efficient. Therefore, to save energy, slaves are by default in sleep mode and wake up
periodically to receive packets from the master.
In classic Bluetooth, the connection is on all the time even if no data transfer is going on.
Additionally, it supports 79 data channels (1 MHz channel bandwidth) and a data rate of
1 million symbols/s, whereas, BLE supports 40 channels with 2 MHz channel bandwidth
(double that of classic Bluetooth) and a data rate of 1 million symbols/s. BLE
supports low duty cycle requirements as its packet size is small and the time
taken to transmit the smallest packet is as small as 80 μs. The BLE protocol
stack also supports IP based communication.
An experiment conducted by Siekkinen et al. [65] recorded the number of bytes
transferred per Joule to show that BLE consumes far less energy as compared to
competing protocols such as Zigbee. The energy efficiency of BLE is 2.5 times better
than Zigbee.
7.5. Low Power WiFi
The WiFi alliance has recently developed “WiFi HaLow,” which is based on the IEEE
802.11ah standard. It consumes lower power than a traditional WiFi device and also has a
longer range. This is why this protocol is suitable for Internet of Things applications. The
range of WiFi HaLow is nearly twice that of traditional WiFi.
Like other WiFi devices, devices supporting WiFi HaLow also support IP connectivity,
which is important for IoT applications. Let us look at the specifications of the IEEE
802.11ah standard [66, 67]. This standard was developed to deal with wireless sensor
network scenarios, where devices are energy constrained and require relatively long
range communication. IEEE 802.11ah operates in the sub-gigahertz band (900 MHz).
Because of the relatively lower frequency, the range is longer since higher frequency
waves suffer from higher attenuation. We can extend the range (currently 1 km) by
lowering the frequency further; however, the data rate will also be lower and thus the
tradeoff is not justified. IEEE 802.11ah is also designed to support large star shaped
networks, where a lot of stations are connected to a single access point.
7.6. Zigbee

It is based on the IEEE 802.15.4 communication protocol standard and is used for
personal area networks or PANs [68]. The IEEE 802.15.4 standard has low power MAC
and physical layers and has already been explained in Section 7.3. Zigbee was developed
by the Zigbee alliance, which works on reliable, low energy, and cheap
communication solutions. The range of Zigbee communication is short (10–100
meters). The
details of the network and application layers are also specified by the Zigbee standard.
Unlike BLE, the network layer here provides for multihop routing.
There are three types of devices in a Zigbee network: FFD (Fully Functional Device),
RFD (Reduced Functional Device), and one Zigbee coordinator. A FFD node can
additionally act as a router. Zigbee supports star, tree, and mesh topologies. The routing
scheme depends on the topology. Other features of Zigbee are discovery and maintenance
of routes, support for nodes joining/leaving the network, short 16-bit addresses, and
multihop routing.
The framework for communication and distributed application development is provided
by the application layer. The application layer consists of Application Objects (APO),
Application Sublayer (APS), and a Zigbee Device Object (ZDO). APOs are spread
over the network nodes. These are pieces of software that control some
underlying device hardware (e.g., a switch or a transducer). The device and
network management
services are provided by the ZDO, which are then used by the APOs. Data transfer
services are provided by the Application Sublayer to the APOs and ZDO. It is responsible
for secure communication between the Application Objects. These features can be used
to create a large distributed application.
7.7. Integration of RFID and WSN

RFID and wireless sensor networks (WSN) are both important technologies in the IoT
domain. RFID can only be used for object identification, but WSNs serve a far greater
purpose. The two are very different but merging them has many advantages. The
following components can be added to RFID to enhance its usability:
(a) Sensing capabilities
(b) Multihop communication
(c) Intelligence
RFID is inexpensive and uses very little power, which is why its integration
with WSNs is very useful. The integration is possible in the following ways
[69, 70]:
(a) Integration of RFID tags with sensors: RFID tags with sensing capabilities
are called sensor tags. These sensor tags sense data from the environment, and
the RFID reader can then read this sensed data from the tag. In such cases,
simple RFID protocols are used, with only single hop communication. RFID
sensing technologies can be further classified on the basis of the power
requirements of sensor tags (active and passive), as explained earlier in the
section on RFID (see Section 5.5).
(b) Integration of RFID tags with WSN nodes: the communication capabilities of
sensor tags are limited to a single hop. To extend these capabilities, the
sensor tag is equipped with a wireless transceiver, a small amount of Flash
memory, and computational capabilities, so that it can initiate communication
with other nodes and wireless devices. Nodes can in this fashion form a
wireless mesh network, in which sensor tags can communicate with each other
over a large range (via intermediate hops). With additional processing
capability at a node, we can reduce the net amount of data communicated and
thus increase the power efficiency of the WSN.
(c) Integration of RFID readers with WSN nodes: this type of integration is
done to increase the range of RFID readers. The readers are equipped with
wireless transceivers and microcontrollers so that they can communicate with
each other; therefore, tag data can reach a reader that is not in the range of
that tag. This takes advantage of the multihop communication of wireless
sensor network devices. The data from all the RFID readers in the network
ultimately reaches a central gateway or base station that processes the data
or sends it to a remote server.
These kinds of integrated solutions have many applications in a diverse set of domains
such as security, healthcare, and manufacturing.
7.8. Low Power Wide-Area-Networks (LPWAN)
Let us now discuss protocols for long range communication between power
constrained devices. The LPWAN class of protocols comprises low bit rate
communication technologies for such IoT scenarios. Some of the most common
technologies in this area are as follows.
(1) Narrowband IoT: a technology made for a large number of devices that are
energy constrained; it is thus necessary to reduce the bit rate. This protocol
can be deployed in both the GSM and LTE cellular spectra. The downlink speeds
vary between 40 kbps (LTE M2) and 10 Mbps (LTE category 1).
(2) Sigfox: another protocol that uses narrow band communication (10 MHz). It
uses free sections of the radio spectrum (the ISM band) to transmit its data.
Instead of 4G networks, Sigfox focuses on using very long waves. The range can
thus increase to 1000 km, and the energy required for transmission is
significantly lower (0.1%) than in contemporary cell phones. Again, the cost
is bandwidth: Sigfox can only transmit 12 bytes per message, and a device is
limited to 140 messages per day. This is reasonable for many kinds of
applications: submarine applications, sending control (emergency) codes,
geolocation, monitoring remote locations, and medical applications.
(3) Weightless: it uses a differential binary phase shift keying based method
to transmit narrow band signals. To avoid interference, the protocol hops
across frequency bands (instead of using CSMA). It supports cryptographic
encryption and mobility. Along with frequency hopping, two additional
mechanisms are used to reduce collisions: the downlink service uses time
division multiple access (TDMA), and the uplink service uses multiple
subchannels that are first allocated to transmitting nodes by contacting a
central server. Some applications include smart meters, vehicle tracking,
health monitoring, and industrial machine monitoring.
(4) Neul: this protocol operates in the sub-1 GHz band. It uses small chunks
of the TV whitespace spectrum to create low cost and low power networks with
very high scalability. It has a 10 km range and uses the Weightless protocol
for communication.
(5) LoRaWAN: this protocol is similar to Sigfox. It targets wide area network
applications and is designed to be a low power protocol. Its data rates can
vary from 0.3 kbps to 50 kbps, and it can be used in urban or suburban
environments (2–5 km range in a crowded urban area). It was designed to serve
as a standard for long range IoT protocols. It thus has features to support
multitenancy, enable multiple applications, and include several different
network domains.
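The Sigfox figures above imply a very small data budget, which is worth making
concrete with a quick calculation:

```python
# Back-of-the-envelope Sigfox uplink budget from the figures above.
bytes_per_message = 12
messages_per_day = 140
daily_bytes = bytes_per_message * messages_per_day
print(daily_bytes)                        # 1680 payload bytes per device per day
print(round(daily_bytes * 8 / 86400, 3))  # ~0.156 bits/s sustained on average
```

At well under one bit per second on average, such links suit periodic meter
readings or alarm codes rather than any kind of streaming.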
7.9. Lightweight Application Layer Protocols

Along with physical and MAC layer protocols, we also need application layer
protocols for IoT networks. These lightweight protocols need to carry
application messages while minimizing power consumption as far as possible.
OMA Lightweight M2M (LWM2M) is one such protocol. It defines the
communication protocol between a server and a device. The devices often have
limited capabilities and are thus referred to as constrained devices. The main
aims of the OMA protocol are as follows:
(1) Remote device management.
(2) Transferring service data/information between different nodes in the
LWM2M network.
All the protocols in this class treat all the network resources as objects. Such resources
can be created, deleted, and remotely configured. These devices have their unique
limitations and can use different kinds of protocols for internally representing
information. The LWM2M protocol abstracts all of this away and provides a convenient
interface to send messages between a generic LWM2M server and a distributed set of
LWM2M clients.
This protocol is often used along with CoAP (Constrained Application Protocol). It is an
application layer protocol that allows constrained nodes such as sensor motes or small
embedded devices to communicate across the Internet. CoAP seamlessly integrates with
HTTP, yet it provides additional facilities such as support for multicast
operations. It is ideally suited for small devices because of its low
overhead, low parsing complexity, and reliance on UDP rather than TCP.

8. Middleware
Ubiquitous computing is at the core of the Internet of Things: it means
incorporating computing and connectivity into all the things around us.
Interoperability between such heterogeneous devices needs well-defined
standards, but standardization is difficult because of the varied requirements
of different applications and devices. For such heterogeneous applications,
the solution is to have a middleware platform that abstracts the details of
the smart things from the applications, acting as a software bridge between
the things and the applications. It needs to provide the required services to
application developers [20] so that they can focus on the requirements of
their applications rather than on interacting with the baseline hardware. To
summarize, the middleware abstracts the hardware and provides an Application
Programming Interface (API) for communication, data management, computation,
security, and privacy.
The challenges addressed by any IoT middleware are as follows [20, 71, 72]:
(1) Interoperability and programming abstractions: middleware services enable
different types of things to interact with each other easily, facilitating
collaboration and information exchange between heterogeneous devices.
Interoperability is of three types: network, syntactic, and semantic. Network
interoperability deals with heterogeneous interface protocols for
communication between devices; it insulates applications from the intricacies
of different protocols. Syntactic interoperability ensures that applications
are oblivious to different formats, structures, and encodings of data.
Semantic interoperability deals with abstracting the meaning of data within a
particular domain; it is loosely inspired by the semantic web.
(2) Device discovery and management: this feature enables devices to be aware
of all other devices in the neighborhood and the services provided by them. In
the Internet of Things, the infrastructure is mostly dynamic, and devices have
to announce their presence and the services they provide. The solution needs
to be scalable because the number of devices in an IoT network can grow. Most
solutions in this domain are loosely inspired by semantic web technologies.
The middleware provides APIs to list the IoT devices, their services, and
their capabilities; typically, APIs are also provided to discover devices
based on their capabilities. Finally, any IoT middleware needs to perform load
balancing, manage devices based on their battery levels, and report problems
in devices to the users.
(3) Scalability: a large number of devices are expected to communicate in an
IoT setup, and IoT applications need to scale due to ever increasing
requirements. The middleware should manage this by making the required
changes when the infrastructure scales.
(4) Big data and analytics: IoT sensors typically collect a huge amount of
data, which needs to be analyzed in great detail. As a result, many big data
algorithms are used to analyze IoT data. Moreover, due to the flimsy nature of
the network, some of the data collected might be incomplete. It is necessary
to take this into account and extrapolate the data using sophisticated machine
learning algorithms.
(5) Security and privacy: IoT applications mostly relate to someone's personal
life or to an industry. Security and privacy issues need to be addressed in
all such environments. The middleware should have built in mechanisms to
address these issues, along with user authentication and the implementation of
access control.
(6) Cloud services: the cloud is an important part of an IoT deployment, as
most sensor data is analyzed and stored in a centralized cloud. IoT middleware
needs to run seamlessly on different types of clouds and to enable users to
leverage the cloud to get better insights from the data collected by the
sensors.
(7) Context detection: the data collected from the sensors needs to be used to
extract context by applying various types of algorithms. The context can
subsequently be used to provide sophisticated services to users.
There are many middleware solutions available for the Internet of Things, which address
one or more of the aforementioned issues. All of them support interoperability and
abstraction, which are the foremost requirements of middleware. Some examples are
Oracle’s Fusion Middleware, OpenIoT [21], MiddleWhere [22], and Hydra [23].
Middleware can be classified as follows on the basis of its design [72]:

(1) Event based: here, all the components interact with each other through events. Each event has a type and some parameters. Events are generated by producers and received by consumers. This can be viewed as a publish/subscribe architecture, where entities subscribe to certain event types and get notified when those events occur.

(2) Service oriented: service oriented middleware is based on Service Oriented Architectures (SOA), in which independent modules provide services through accessible interfaces. A service oriented middleware views resources as service providers. It abstracts the underlying resources through a set of services that are used by applications. There is a service repository, where services are published by providers. Consumers can discover services in the repository and then bind with the provider to access the service. Service oriented middleware must have runtime support for providers to advertise services and for consumers to discover and use them. HYDRA [23] is a service oriented middleware. It incorporates many software components, which handle the various tasks required for the development of intelligent applications. Hydra also provides semantic interoperability using semantic web technologies, and it supports dynamic reconfiguration and self-management.

(3) Database oriented: in this approach, the network of IoT devices is treated as a virtual relational database system. Applications can then query the database using a query language, through easy to use interfaces for extracting data. This approach has issues with scaling because of its centralized model.

(4) Semantic: semantic middleware focuses on the interoperation of different types of devices, which communicate using different data formats. It incorporates devices with different data formats and ontologies and ties all of them together in a common framework, which is used for exchanging data between diverse types of devices. To arrive at a common semantic format, we need adapters for each device that map its standards to one abstract standard [73]. In such a semantic middleware [74], a semantic layer is introduced, in which each resource is mapped to a software layer for that resource. The software layers then communicate with each other using a mutually intelligible language (based on the semantic web). This technique allows multiple physical resources to communicate even though they do not implement or understand the same protocols.

(5) Application specific: this type of middleware is developed for a specific application domain, and its whole architecture is fine-tuned to the requirements of that application. The application and middleware are tightly coupled. These are not general purpose solutions.
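The event based design described above can be sketched as a minimal publish/subscribe broker. This is a toy illustration, not the API of any particular middleware; the class, method, and event names are invented:

```python
from collections import defaultdict

class EventBroker:
    """Toy publish/subscribe broker illustrating event based middleware."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        # Consumers register interest in a given event type.
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, **params):
        # Producers emit an event; every subscriber of that type is notified.
        for callback in self._subscribers[event_type]:
            callback(event_type, params)

broker = EventBroker()
readings = []
# A consumer subscribes to "temperature" events and collects their values.
broker.subscribe("temperature", lambda etype, p: readings.append(p["celsius"]))
# A producer (e.g., a sensor driver) publishes an event.
broker.publish("temperature", celsius=21.5, sensor_id="t1")
```

Real event based middleware adds type registries, persistence, and distribution, but the producer/consumer decoupling shown here is the core idea.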
8.1. Popular IoT Middleware

8.1.1. FiWare

FiWare is a very popular IoT middleware framework that is promoted by the EU. It has
been designed keeping smart cities, logistics, and shop floor analytics in mind. FiWare
contains a large body of code, reusable modules, and APIs that have been contributed by
thousands of FiWare developers. Any application developer can take a subset of these
components and build his/her IoT application.
A typical IoT application has many producers of data (sensors), a set of servers to process
the data, and a set of actuators. FiWare refers to the information collected by sensors
as context information. It defines generic REST APIs to capture the context from
different scenarios. All the context information is sent to a dedicated service called a
context broker. FiWare provides APIs to store the context and also query it. Moreover,
any application can register itself as a context consumer, and it can request the context
broker for information. It also supports the publish-subscribe paradigm. Subsequently,
the context can be supplied to systems using context adapters whose main role is to
transform the data (the context) based on the requirements of the destination nodes.
Moreover, FiWare defines a set of SNMP APIs via which we can control the behavior of
IoT devices and also configure them.
The target applications are provided APIs to analyze, query, and mine the information
that is collected from the context broker. Additionally, with advanced visualization APIs,
it is possible to create and deploy feature rich applications very quickly.
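As a rough sketch of how context information is structured, the following builds an entity in the NGSI-v2 style used by FiWare's Orion context broker (such an entity would typically be sent via `POST /v2/entities`). The entity and attribute names here are invented, and the exact payload format may differ across FiWare versions:

```python
import json

def make_entity(entity_id, entity_type, **attrs):
    """Build an NGSI-v2 style context entity as a dictionary.
    Each attribute is given as name=(value, attribute_type)."""
    entity = {"id": entity_id, "type": entity_type}
    for name, (value, attr_type) in attrs.items():
        entity[name] = {"value": value, "type": attr_type}
    return entity

# A hypothetical room sensor reporting temperature and humidity as context.
room = make_entity("Room1", "Room",
                   temperature=(23.4, "Float"),
                   humidity=(61, "Integer"))
payload = json.dumps(room)  # body of the POST request to the context broker
```

A context consumer would later query the broker (e.g., `GET /v2/entities/Room1`) or subscribe to be notified when these attributes change.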
8.1.2. OpenIoT

OpenIoT is another popular open source initiative. It has 7 different components. At the
lowest level, we have a physical plane. It collects data from IoT devices and also does
some preprocessing of data. It has different APIs to interface with different types of
physical nodes and get information from them.
The next plane is the virtualized plane, which has 3 components. We first have the
scheduler, which manages the streams of data generated by devices. It primarily assigns
them to resources and takes care of their QoS requirements. The data storage component
manages the storage and archival of data streams. Finally, the service delivery component
processes the streams. It has several roles. It combines data streams, preprocesses them,
and tracks some statistics associated with these streams such as the number of unique
requests or the size of each request.
The uppermost layer, that is, the application layer, also has 3 components: request
definition, request presentation, and configuration. The request definition component
helps us create requests to be sent to the IoT sensors and storage layers. It can be used to
fetch and query data. The request presentation component creates mashups of data by
issuing different queries to the storage layer, and finally the configuration component
helps us configure the IoT devices.

9. Applications of IoT
Intelligent IoT applications have been developed in a diverse set of areas. Not all of
them are readily available yet; however, preliminary research
indicates the potential of the IoT to improve the quality of life in our society. Some uses of
IoT applications are in home automation, fitness tracking, health monitoring,
environment protection, smart cities, and industrial settings.
9.1. Home Automation

Smart homes are becoming more popular today for two reasons. First, the sensor
and actuation technologies along with wireless sensor networks have significantly
matured. Second, people today trust technology to address their concerns about their
quality of life and security of their homes (see Figure 8).
Figure 8: Block diagram of a smart home system.

In smart homes, various sensors are deployed, which provide intelligent and automated
services to the user. They help in automating daily tasks and help in maintaining a routine
for individuals who tend to be forgetful. They help in energy conservation by turning off
lights and electronic gadgets automatically. We typically use motion sensors for this
purpose. Motion sensors can additionally be used for security.
For example, the project, MavHome [75], provides an intelligent agent, which uses
various prediction algorithms for doing automated tasks in response to user triggered
events and adapts itself to the routines of the inhabitants. Prediction algorithms are used
to predict the sequence of events [76] in a home. A sequence matching algorithm
maintains sequences of events in a queue and also stores their frequency. Then a
prediction is made using the match length and frequency. Other algorithms used by
similar applications use compression based prediction and Markov models.
Energy conservation in smart homes [77] is typically achieved through sensors and
context awareness. The sensors collect data from the environment (light, temperature,
humidity, gas, and fire events). This data from heterogeneous sensors is fed to a context
aggregator, which forwards the collected data to the context aware service engine. This
engine selects services based on the context. For example, an application can
automatically turn on the AC when the humidity rises. Or, when there is a gas leak, it can
turn all the lights off.
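A context aware service engine of this kind can be reduced to a small rule evaluator, sketched below. The thresholds and action names are invented for illustration:

```python
def evaluate_rules(context, rules):
    """Run simple context-aware rules: each rule is a (condition, action) pair,
    where the condition is a predicate over the sensor context.
    Returns the list of actions triggered by the current context."""
    return [action for condition, action in rules if condition(context)]

# Hypothetical rules matching the examples in the text.
rules = [
    (lambda c: c["humidity"] > 70, "turn_on_ac"),
    (lambda c: c["gas_leak"], "turn_off_lights"),
    (lambda c: c["fire"], "raise_alarm"),
]

# Context aggregated from the home's sensors at one instant.
actions = evaluate_rules({"humidity": 82, "gas_leak": False, "fire": False}, rules)
# actions == ["turn_on_ac"]
```

A production engine would add rule priorities and conflict resolution, but the select-service-by-context step is exactly this kind of match.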
Smart home applications are really beneficial for the elderly and differently abled. Their
health is monitored and relatives are informed immediately in case of emergencies.
Floors are equipped with pressure sensors, which track the movement of an individual
across the smart home and also help in detecting if a person has fallen down. In smart
homes, CCTV cameras can be used to record events of interest. These can then be used
for feature extraction to find out what is going on.
In particular, fall detection applications in smart environments [78–80] are useful for
detecting if elderly people have fallen down. Yu et al. [80] use computer vision based
techniques for analyzing postures of the human body. Sixsmith et al. [79] used low cost
infrared sensor array technology, which can provide information such as the location,
size, and velocity of a target object. It detects dynamics of a fall by analyzing the motion
patterns and also detects inactivity and compares it with activity in the past. Neural
networks are employed and sample data is provided to the system for various types of
falls. Many smartphone based applications are also available, which detect a fall on the
basis of readings from the accelerometer and gyroscope data.
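A minimal smartphone style fall detector of this kind can be sketched as a threshold check: a near free-fall dip in the accelerometer magnitude followed shortly by an impact spike. The thresholds below are illustrative, not tuned values from the cited papers:

```python
import math

def detect_fall(samples, free_fall=3.0, impact=25.0, window=5):
    """Flag a fall when a near free-fall accelerometer magnitude (m/s^2)
    is followed within `window` samples by a large impact spike."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < free_fall:  # phone briefly in free fall
            if any(m2 > impact for m2 in mags[i + 1:i + 1 + window]):
                return True  # free fall followed by impact
    return False

# Synthetic (x, y, z) traces: steady walking vs. a fall (dip then spike).
walking = [(0.1, 0.2, 9.8)] * 10
fall = ([(0.1, 0.2, 9.8)] * 3
        + [(0.1, 0.1, 1.0), (0.2, 0.1, 30.0)]
        + [(0.0, 0.0, 9.8)] * 3)
```

Real deployments combine this with gyroscope readings and machine learning classifiers to reduce false alarms, as the cited works do.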
There are many challenges and issues with regard to smart home applications [81]. The
most important is security and privacy [82] since all the data about the events taking
place in the home is being recorded. If the security and trustworthiness of the system are
not guaranteed, an intruder may attack the system and may make the system behave
maliciously. Smart home systems are supposed to notify the owners in case they detect
such abnormalities. This is possible using AI and machine learning algorithms, and
researchers have already started working in this direction [83]. Reliability is also an issue
since there is no system administrator to monitor the system.
9.2. Smart Cities

9.2.1. Smart Transport

Smart transport applications can manage daily traffic in cities using sensors and
intelligent information processing systems. The main aim of intelligent transport systems
is to minimize traffic congestion, ensure easy and hassle-free parking, and avoid
accidents by properly routing traffic and spotting drunk drivers. The sensor technologies
governing these types of applications are GPS sensors for location, accelerometers for
speed, gyroscopes for direction, RFIDs for vehicle identification, infrared sensors for
counting passengers and vehicles, and cameras for recording vehicle movement and
traffic. There are many types of applications in this area (refer to [84]):

(1) Traffic surveillance and management applications: vehicles are connected by a network to each other, to the cloud, and to a host of IoT devices such as GPS sensors, RFID devices, and cameras. These devices can estimate traffic conditions in different parts of the city. Custom applications can analyze traffic patterns so that future traffic conditions can be estimated. Yu et al. [85] implement a vehicle tracking system for traffic surveillance using video sequences captured on the roads. Traffic congestion detection can also be implemented using smartphone sensors such as accelerometers [86] and GPS sensors. These applications can detect movement patterns of the vehicle while the user is driving. This kind of information is already collected by Google Maps, and users are using it to route around potentially congested areas of the city.

(2) Applications to ensure safety: smart transport does not only imply managing traffic conditions. It also includes the safety of people travelling in their vehicles, which until now was mainly in the hands of drivers. Many IoT applications have been developed to help drivers drive more safely. Such applications monitor driving behavior and detect when drivers are feeling drowsy or tired, helping them cope with it or suggesting rest [87, 88]. Technologies used in such applications are face detection, eye movement detection, and pressure detection on the steering wheel (to measure the grip of the driver's hands). A smartphone application that estimates driving behavior using smartphone sensors such as the accelerometer, gyroscope, GPS, and camera has been proposed by Eren et al. [89]. It can decide whether the driving is safe or rash by analyzing the sensor data.

(3) Intelligent parking management (see Figure 9): in a smart transportation system, parking is completely hassle free, as one can easily check on the Internet to find out which parking lot has free spaces. Such lots use sensors to detect whether the slots are free or occupied by vehicles. This data is then uploaded to a central server.

(4) Smart traffic lights: traffic lights equipped with sensing, processing, and communication capabilities are called smart traffic lights. These lights sense the congestion at the intersection and the amount of traffic going each way. This information can be analyzed and then sent to neighboring traffic lights or a central controller. It is possible to use this information creatively. For example, in an emergency the traffic lights can preferentially give way to an ambulance: when a smart traffic light senses an ambulance coming, it clears the path for it and informs neighboring lights. Technologies used in these lights are cameras, communication technologies, and data analysis modules. Such systems have already been deployed in Rio de Janeiro.

(5) Accident detection applications: a smartphone application designed by White et al. [90] detects the occurrence of an accident with the help of accelerometer and acoustic data. It immediately sends this information, along with the location, to the nearest hospital. Additional situational information, such as on-site photographs, is also sent so that the first responders know about the whole scenario and the degree of medical help required.
Figure 9: Block diagram of a smart parking system.

9.2.2. Smart Water Systems


Given the prevalence of water scarcity in most parts of the world, it is very
important to manage our water resources efficiently. As a result, many cities are opting for
smart solutions that place meters on water supply lines and storm drains. A good
reference in this area is the paper by Hauber-Davidson and Idris [91]. They describe
various designs for smart water meters. These meters can be used to measure the degree
of water inflow and outflow and to identify possible leaks. Smart water metering systems
are also used in conjunction with data from weather satellites and river water sensors.
They can also help us predict flooding.
9.2.3. Examples of Smart Cities

Barcelona and Stockholm stand out in the list of smart cities. Barcelona has
a CityOS project, where it aims to create a single virtualized OS for all the smart devices
and services offered within the city. Barcelona has mainly focused on smart
transportation (as discussed in Section 9.2.1) and smart water. Smart transportation is
implemented using a network of sensors, centralized analysis, and smart traffic lights. Along
similar lines, Barcelona has sensors on most of its storm drains, water storage tanks, and
water supply lines. This information is integrated with weather and usage information.
The result of all of this is a centralized water planning strategy. The city is able to
estimate the water requirements in terms of domestic usage and industrial usage and for
activities such as landscaping and gardening.
Stockholm started as early as 1994, and its first step in this direction was to install an extensive
fiber optic system. Subsequently, the city added thousands of sensors for smart traffic and smart
water management applications. Stockholm was one of the first cities to
implement congestion charging. Users were charged money, when they drove into congested
areas. This was enabled by smart traffic technologies. Since the city has a solid network
backbone, it is very easy to deploy sensors and applications. For example, recently the city
created a smart parking system, where it is possible to easily locate parking spots nearby.
Parking lots have sensors, which let a server know about their usage. Once a driver queries the
server with her/his GPS location, she/he is guided to the nearest parking lot with free slots.
Similar innovations have taken place in the city’s smart buildings, snow clearance, and political
announcement systems.
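The parking lookup described above can be sketched as a nearest-free-lot query over the occupancy data the lot sensors report to the server. The lot IDs and coordinates below are invented:

```python
import math

def nearest_free_lot(driver, lots):
    """Return the id of the closest lot with a free slot. `driver` is a
    (lat, lon) pair; `lots` maps lot id -> ((lat, lon), free_slots).
    Distances use the haversine great-circle formula (kilometres)."""
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # Earth radius ~6371 km

    free = {lid: pos for lid, (pos, slots) in lots.items() if slots > 0}
    return min(free, key=lambda lid: haversine(driver, free[lid])) if free else None

# Hypothetical occupancy data uploaded by the lot sensors.
lots = {
    "P1": ((59.332, 18.064), 0),    # full
    "P2": ((59.330, 18.070), 4),
    "P3": ((59.350, 18.100), 12),
}
best = nearest_free_lot((59.331, 18.066), lots)  # driver's GPS position
```

The server-side query is that simple once the sensors keep the occupancy counts current; the hard part in practice is reliable slot sensing.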
9.3. Social Life and Entertainment

Social life and entertainment play an important role in an individual’s life. Many applications
have been developed, which keep track of such human activities. The term “opportunistic IoT”
[92] refers to information sharing among opportunistic devices (devices that seek to make
contact with other devices) based on movement and availability of contacts in the vicinity.
Personal devices such as tablets, wearables, and mobile phones have sensing and short range
communication capabilities. People can find and interact with each other when there is a
common purpose.
Circle Sense [93] is an application, which detects social activities of a person with the help of
various types of sensor data. It identifies the social circle of a person by analyzing the patterns of
social activities and the people present in those activities. Various types of social activities and
the set of people participating in those activities are identified. It uses location sensors to find out
where the person is and Bluetooth to detect people nearby. The system has built-in
machine learning algorithms, and it gradually improves its behavior with learning.
Affective computing [94] is a technology that recognizes, understands, simulates, and
responds to the emotions of human beings. There are many parameters, which are considered
while dealing with human affects such as facial expressions, speech, body gestures, hand
movements, and sleep patterns. These are analyzed to figure out how a person is feeling. The
utterance of emotional keywords is identified by voice recognition and the quality of voice by
looking at acoustic features of speech.
One of the applications of affective computing is Camy, an artificial pet dog [95], which is
designed to interact with human beings and show affection and emotions. Many sensors and
actuators are embedded in it. It provides emotional support to the owner, encourages playful and
active behavior, recognizes its owner, and shows love for her and increases the owner’s
communication with other people. Based on the owner’s mood, Camy interacts with the owner
and gives her suggestions.
Logmusic [96] is an entertainment application, which recommends music on the basis of the
context, such as the weather, temperature, time, and location.
9.4. Health and Fitness

IoT appliances have proven really beneficial in the health and wellness domains. Many wearable
devices are being developed, which monitor a person’s health condition (see Figure 10).
Figure 10: Block diagram of a smart healthcare system.

Health applications make independent living possible for the elderly and patients with serious
health conditions. Currently, IoT sensors are being used to continuously monitor and record their
health conditions and transmit warnings in case any abnormal indicators are found. If there is a
minor problem, the IoT application itself may suggest a prescription to the patient.

IoT applications can be used in creating an Electronic Health Record (EHR), which is a record of
all the medical details of a person. It is maintained by the health system. An EHR can be used to
record allergies and surges in blood sugar and blood pressure.

Stress recognition applications are also fairly popular [97]. They can be realized using
smartphone sensors. Wang et al. describe an application [30], which measures the stress level of
a college student and is installed on the student’s smartphone. It senses the locations the student
visits during the whole day, the amount of physical activity, amount of sleep and rest, and her/his
interaction and relationships with other people (audio data and calls). In addition, it also conducts
surveys with the student by randomly popping up a question in the smartphone. Using all of this
data and analyzing it intelligently, the level of stress and academic performance can be
measured.
In the fitness sector, we have applications that monitor how fit we are based on our daily activity
level. Smartphone accelerometer data can be used for activity detection by applying complex
algorithms. For example, we can measure the number of steps taken and the amount of exercise
done by using fitness trackers. Fitness trackers are available in the market as wearables to
monitor the fitness level of an individual. In addition, gym apparatus can be fitted with sensors to
count the number of times an exercise is performed. For example, a smart mat [98] can count the
number of exercise steps performed on it. This is implemented using pressure sensors on the mat
and then analyzing the patterns of pressure and the shape of the contact area.
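Step counting from accelerometer data can be sketched as simple peak detection on the acceleration magnitude. The threshold and synthetic trace below are illustrative, not values from any particular fitness tracker:

```python
def count_steps(magnitudes, threshold=11.0):
    """Count steps as local peaks of the accelerometer magnitude (m/s^2)
    that rise above a threshold; a simplified peak-detection sketch."""
    steps = 0
    for prev, cur, nxt in zip(magnitudes, magnitudes[1:], magnitudes[2:]):
        if cur > threshold and cur > prev and cur >= nxt:
            steps += 1  # local maximum above the threshold -> one step
    return steps

# Synthetic trace: baseline gravity (~9.8 m/s^2) with four step spikes.
trace = [9.8, 9.9, 12.5, 9.7, 9.8, 13.1, 9.6, 9.8, 12.2, 9.9, 12.8, 9.8]
```

Commercial trackers additionally low-pass filter the signal and reject isolated spikes, but peak detection on the magnitude is the common core.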
9.5. Smart Environment and Agriculture

Environmental parameters such as temperature and humidity are important for agricultural
production. Sensors are used by farmers in the field to measure such parameters and this data can
be used for efficient production. One application is automated irrigation according to weather
conditions.

Production using greenhouses [99] is one of the main applications of IoT in agriculture.
Environmental parameters such as temperature, soil condition, and humidity are measured in
real time and sent to a server for analysis. The results are then used to improve crop
quality and yield.
Pesticide residues in crop production are detected using an acetylcholinesterase biosensor [100].
This data is saved and analyzed for extracting useful information such as the sample size, time,
location, and amount of residues. We can thus maintain the quality of the crop. Moreover, a QR
code can be used to uniquely identify a carton of farm produce. Consumers can scan the QR code
and check the amount of pesticides in it (via a centralized database) online before buying.
Air pollution is an important concern today because it is changing the climate of the earth and
degrading air quality. Vehicles cause a lot of air pollution. An IoT application proposed by
Manna et al. [39] monitors air pollution on the roads. It also tracks vehicles that cause an undue
amount of pollution. Electrochemical toxic gas sensors can also be used to measure air pollution.
Vehicles are identified by RFID tags. RFID readers are placed on both sides of the road along
with the gas sensors. With this approach it is possible to identify and take action against
polluting vehicles.
9.6. Supply Chain and Logistics

IoT tries to simplify real world processes in business and information systems [101]. The goods
in the supply chain can be tracked easily from the place of manufacture to the final places of
distribution using sensor technologies such as RFID and NFC. Real time information is recorded
and analyzed for tracking. Information about the quality and usability of the product can also be
saved in RFID tags attached to the shipments.
Bo and Guangwen [102] explain an information transmission system for supply chain
management, which is based on the Internet of Things. RFID tags uniquely identify a product
automatically and a product information network is created to transmit this information in real
time along with location information. This system helps in automatic collection and analysis of
all the information related to supply chain management, which may help examine past demand
and come up with a forecast of future demand. Supply chain components can get access to real
time data and all of this information can be analyzed to get useful insights. This will in the long
run improve the performance of supply chain systems.
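Shipment tracking of this kind can be sketched as recording RFID scan events per tag and reconstructing each product's route in time order. The tag and site names are invented:

```python
from datetime import datetime

class ShipmentTracker:
    """Toy tracker: records RFID scan events per product tag and
    reconstructs the product's route through the supply chain."""
    def __init__(self):
        self.events = {}  # tag -> list of (timestamp, location)

    def scan(self, tag, location, timestamp):
        # Called whenever an RFID reader at `location` sees `tag`.
        self.events.setdefault(tag, []).append((timestamp, location))

    def route(self, tag):
        # Locations visited by the tagged product, in chronological order.
        return [loc for _, loc in sorted(self.events.get(tag, []))]

tracker = ShipmentTracker()
tracker.scan("TAG42", "factory", datetime(2024, 1, 3))
tracker.scan("TAG42", "warehouse", datetime(2024, 1, 5))
tracker.scan("TAG42", "retailer", datetime(2024, 1, 9))
```

Aggregating such per-tag histories across products is what enables the demand analysis and forecasting described above.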
9.7. Energy Conservation

The smart grid is an information and communication technology (ICT) enabled modern system for
electricity generation, transmission, distribution, and consumption [103].
To make electric power generation, transmission, and distribution smart, the concept of smart
grids adds intelligence at each step and also allows the two-way flow of power (back from the
consumer to the supplier). This can save a lot of energy and help consumers better understand
the flow of power and dynamic pricing. In a smart grid, power generation is distributed. There
are sensors deployed throughout the system to monitor everything. It is actually a distributed
network of microgrids [104]. Microgrids generate power to meet demands of local sites and
transmit back the surplus energy to the central grid. Microgrids can also demand energy from the
central grid in case of a shortfall.

Two-way flow of power also benefits consumers, who are also using their own generated energy
occasionally (say, solar, or wind power); the surplus power can be transmitted back so that it is
not wasted. The user will also get paid for that power.

Some of the IoT applications in a smart grid are online monitoring of transmission lines for
disaster prevention and efficient use of power in smart homes by having a smart meter for
monitoring energy consumption [105].

Smart meters read and analyze consumption patterns of power at regular and peak load times.
This information is then sent to the server and also made available to the user. The generation is
then set according to the consumption patterns. In addition, the user can adjust her/his use so as
to reduce costs. Smart power appliances can leverage this information and operate when the
prices are low.
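A smart appliance choosing when to run under dynamic pricing can be sketched as a search for the cheapest contiguous window in the hourly tariff. The prices below are invented:

```python
def cheapest_window(prices, duration):
    """Given hourly prices (index = hour of day) and a run length in hours,
    return the start hour that minimizes the total cost of the run."""
    costs = [sum(prices[h:h + duration])
             for h in range(len(prices) - duration + 1)]
    return costs.index(min(costs))

# 24 hourly tariffs from the smart meter: cheap overnight, an evening peak.
prices = [3, 2, 2, 1, 1, 2, 4, 6, 7, 7, 6, 5,
          5, 5, 6, 7, 8, 9, 9, 8, 6, 5, 4, 3]
start = cheapest_window(prices, 3)  # schedule a 3-hour appliance cycle
```

With these tariffs the cheapest 3-hour window starts in the early morning, which is exactly the behavior the text describes for smart power appliances.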

10. Design Considerations in an IoT System

Now, that we have profiled most of the IoT technologies, let us look at some of the design
considerations for designing a practical IoT network.

The first consideration is the design of the sensors. Even though there might not be much of a
choice regarding the sensors, there is definitely a lot of choice regarding the processing and
networking capabilities that are bundled along with the sensors. Our choices range from small
sub-mW boards meant for sensor motes to Arduino or Atom boards that consume 300–500 mW
of power. This choice depends on the degree of analytics and data preprocessing that we want to
perform at the sensor itself. Second, there is also an issue of logistics. To create a sub-mW
board, we need board design expertise, which might not be readily available. Hence, it might
be advisable to bundle a sensor with commercially available embedded processor kits.

The next important consideration is communication. In IoT nodes, power is the most dominant
issue. The power required to transmit and receive messages is a major fraction of the overall
power, and as a result a choice of the networking technology is vital. The important factors that
we need to consider are the distance between the sender and the receiver, the nature of obstacles,
signal distortion, ambient noise, and governmental regulations. Based on these key factors, we
need to choose a given wireless networking protocol. For example, if we just need to
communicate inside a small building, we can use Zigbee, whereas if we need communication in
a smart city, we should choose Sigfox or LoRaWAN. In addition, there are often significant
constraints on the frequency and the power that can be spent in transmission. These limitations
are mainly imposed by government agencies. An apt decision needs to be made by taking all of
these factors into account.

Let us then come to the middleware. The first choice that needs to be made is to choose between
an open source middleware such as FiWare or a proprietary solution. There are pros and cons of
both. It is true that open source middleware is in theory more flexible; however, they may have
limited support for IoT devices. We ideally want a middleware solution to interoperate with all
kinds of communication protocols and devices; however, that might not be the case. Hence, if we
need strict compatibility with certain devices and protocols, a proprietary solution is better.
Nevertheless, open source offerings have cost advantages and are sometimes easier to deploy.
We also need to choose the communication protocol and ensure that it is compatible with the
firewalls of the organizations involved. In general, a protocol based on HTTP is best
from this point of view. We also need to choose between TCP and UDP. UDP is always better
from the point of view of power consumption. Along with these considerations, we also need to
look at options to store sensor data streams, querying languages, and support for generating
dynamic alerts.
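The TCP versus UDP tradeoff can be illustrated with a minimal UDP sender/receiver pair: a sensor reading goes out as a single datagram with no connection handshake, which is what makes UDP attractive for power constrained nodes. This sketch runs both ends on localhost, and the sensor name is invented:

```python
import json
import socket

# Receiver (e.g., a gateway): bind a UDP socket; port 0 lets the OS pick one.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender (e.g., a sensor node): one datagram, no connection setup or teardown.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
reading = {"sensor": "temp-07", "celsius": 21.4}
sender.sendto(json.dumps(reading).encode(), ("127.0.0.1", port))

# The gateway receives and decodes the reading.
data, _ = receiver.recvfrom(1024)
received = json.loads(data.decode())

sender.close()
receiver.close()
```

The price of this efficiency is that UDP offers no delivery guarantee, so applications that cannot tolerate lost readings must add acknowledgements or fall back to TCP.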

Finally, let us consider the application layer. Most IoT frameworks provide a significant amount of
support for creating the application layer. This includes data mining, data processing, and
visualization APIs. Creating mashups and dashboards of data is nowadays very easy to do given
the extensive support provided by IoT frameworks. Nevertheless, here the tradeoff is between
the features provided and the resources that are required. We do not need a very heavy
framework if we do not need many features. This call must be made by the application
developers.

Smart manufacturing and the IoT are driving the next industrial revolution

Manufacturing is on the cusp of a revolution – the Internet of Things (IoT) revolution! IDC
estimated that in 2016 the manufacturing segment invested $178 billion in IoT spending, twice as
much as the transportation segment, the second largest IoT vertical market.

According to Markets and Markets Research, the smart factory market is projected to reach
USD 205.42 billion by 2022, growing at a CAGR of 9.3% between 2017 and 2022.

In this fiercely competitive market, IoT-enabled smart manufacturing provides full visibility of
assets, processes, resources, and products. This, in turn, supports streamlined business
operations, optimized productivity and improved ROI. The key to success is connecting
equipment, integrating diverse industrial data, and securing industrial systems for the entire
lifespan of equipment.

For two decades, Gemalto has been a trusted partner, helping customers Connect, Secure and
Monetize their enterprise operations with IoT technology. In this web dossier, we'd like to share
some of the best practices we've gathered to help companies interested in making the leap to
"Industry 4.0."

What is smart manufacturing and how is it related to the IoT?

Smart manufacturing allows factory managers to automatically collect and analyze data to make
better-informed decisions and optimize production. The data from sensors and machines is
communicated to the Cloud by IoT connectivity solutions deployed in the factory. That data is
analyzed and combined with contextual information and then shared with authorized
stakeholders. IoT technology, leveraging both wired and wireless connectivity, enables this flow
of data, providing the ability to remotely monitor and manage processes and change production
plans quickly, in real time when needed. It greatly improves manufacturing outcomes,
reducing waste, speeding production, and improving the yield and quality of goods produced.
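As an illustration of this remote-monitoring loop, the sketch below flags sensor readings that deviate sharply from recent history, the kind of rule a smart-factory analytics layer might apply before alerting a stakeholder. It is a minimal illustration in Python, not from any vendor's toolkit; the window size and threshold are arbitrary assumptions.

```python
import statistics

def detect_anomalies(readings, window=5, threshold=2.0):
    """Flag readings that deviate from the rolling mean of the previous
    `window` samples by more than `threshold` standard deviations
    (a toy condition-monitoring rule)."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if stdev and abs(readings[i] - mean) > threshold * stdev:
            anomalies.append(i)
    return anomalies

# Simulated vibration-sensor stream: stable values, then a spike.
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1, 0.95, 5.0, 1.0]
print(detect_anomalies(stream))  # prints [7], the index of the spike
```

In a real deployment the readings would arrive over an IoT connectivity layer and the alert would be pushed to authorized stakeholders rather than printed.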

Replacing the hierarchical structure that has historically defined the "shop floor" with an open,
flatter, fully-interconnected model that links R&D processes with supply chain management has
many benefits, including the optimization of global manufacturing processes related to
performance, quality, cost, and resource management. It also enables the manufactured products
themselves to play a key role in development and design of the manufacturing process. This is
because connected smart products are able to feed information back to the factory so that quality
issues can be detected and fixed during the manufacturing stage by adjusting product design
and/or the manufacturing processes. Smart products can also provide insights on how they are
actually used by consumers, providing the opportunity to adapt features to better meet the real
needs of the marketplace.

How is the manufacturing marketplace evolving?

The manufacturing sector is being fundamentally reshaped by the unstoppable progress of the
4th Industrial Revolution, powered by the IoT. The changes to this segment are made possible by
technological breakthroughs that are occurring at an unprecedented pace. Just as the steam
engine ushered in massive changes in the late 18th century and the advent of the digital age
rocked the world in the second half of the 20th century, today's technological innovations are
forcing decision makers to reimagine how products are designed and produced. In addition to the
IoT, consider how Artificial Intelligence (AI), machine learning, and Virtual Reality (VR) will
impact manufacturing.

This IoT revolution is expected to profoundly increase productivity and value. This is why the
world's largest manufacturing economies, China, the US and Europe, have launched dedicated initiatives to
bolster their own manufacturing sector. In essence, these manufacturing leaders are engaged in a
global battle for smart manufacturing competitiveness.

The expectation is that all types of manufacturing have something to gain from the 4th industrial
revolution and the IoT. For instance, discrete manufacturing is the production of distinct items
that can be individually touched and counted and are typically associated with assembly lines.
This includes items such as cars, furniture, and airplanes, that are increasingly connected. Smart
processes will play a prominent role in balancing supply and demand, improving product design,
optimizing manufacturing efficiency and greatly reducing waste. Similarly, process
manufacturing, where goods are produced in bulk using carefully crafted recipes, gains from the
IoT revolution in terms of improved plant monitoring, a streamlined supply chain, and quality
improvements in track and trace and distribution processes.

Why is security a huge concern in smart manufacturing?

Today, the manufacturing sector is the leading victim of infrastructure cybercrime, accounting
for one-third of all attacks. That's because most conventional manufacturing plants were not
designed with cybersecurity in mind, and because hacking technology has become increasingly
sophisticated.

As manufacturers migrate from traditional factories to IoT-connected, IP-based systems, new
vulnerabilities emerge. Inherent in connecting processes and elements of smart manufacturing is
an expansion of the cyber attack surface. Each point of connection becomes an added risk of
attacks and cybercrimes that can lead to interference, remote access, theft of intellectual
property, and data loss or alteration. Although many tried-and-true security tools remain
effective, they are not always planned into systems from the beginning. To assure adequate
security, manufacturers must adapt by building defensive measures into legacy equipment and
systems that are now connected. And they must consider security architecture from the beginning
for new, state-of-the-art manufacturing centers.

Security challenges have also slowed the pace of adoption of new IoT technologies,
organizational changes, and business models that could immensely improve processes, enhance
competitiveness and bring new services to customers. Unfortunately, enterprises that do not keep
pace will find it more difficult to compete with their more forward-thinking counterparts who are
tackling the challenge head-on.

What should industry players consider as they transform manufacturing to smart manufacturing?

To stay competitive, manufacturers need to partner with manufacturing automation vendors and
systems integrators that provide solutions to upgrade factories or build new systems from
scratch. Strong automation partners are adding security architecture to the value chain since they
know this is a major concern for manufacturers and a key to competitive advantage in the
marketplace.

Manufacturers should work with experienced integrators, developers and technology partners
who have already exhibited excellence and longevity
in connecting, securing and monetizing smart manufacturing systems. Experienced partners can
provide the direction needed to develop the best system to meet business needs.

For instance, manufacturing processes can be connected by hard wiring, WiFi, Bluetooth, RFID,
Low-Power Wide-Area Networks including LoRa and LTE M, and even IoT Terminals that
work out of the box and connect via flexible industrial interfaces. Each has different strengths
and ideal use cases and a strong partner with experience in connecting smart manufacturing
systems can help decide which solution is best for individual use cases.

They must also consider security and how to protect smart manufacturing systems from intrusion
or error. For instance, Gemalto Secure Elements and Hardware Security Modules (HSM) are
used to secure product manufacturing systems. SEs and HSMs allow manufacturers to generate
and distribute IDs and certificates for devices and they authenticate devices, users, and
applications that interact with devices. They also help secure communication and protect data at
rest.
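A minimal sketch of the device-authentication idea described above, not Gemalto's actual protocol: the server issues a random challenge, and the device proves possession of its provisioned secret by returning a keyed hash. The key material and names here are illustrative only.

```python
import hmac
import hashlib
import secrets

def respond(device_key: bytes, challenge: bytes) -> str:
    # Device side: prove possession of the key without revealing it.
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()

def verify(device_key: bytes, challenge: bytes, response: str) -> bool:
    # Server side: recompute the expected response and compare in
    # constant time to resist timing attacks.
    expected = hmac.new(device_key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

key = b"per-device-secret-provisioned-at-manufacture"  # hypothetical secret
challenge = secrets.token_bytes(16)   # server sends a fresh nonce
response = respond(key, challenge)    # device answers
print(verify(key, challenge, response))  # prints True
```

In practice the secret would live inside a Secure Element or HSM rather than in application memory; the fresh nonce per session prevents replaying an old response.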

Similarly, Trusted Key Manager (TKM) plays a key role in security. TKM manages credentials
for LoRa devices and networks as well as IoT devices not connected to a cellular network,
historically a challenge for manufacturers. The TKM solution allows manufacturers to decouple
these credentials from the production process, making the business scalable and preserving trust
between manufacturers and customers.

Another area for industry players to consider is successful software monetization. Licensing and
IP protection is an important component in manufacturing industrial devices, which includes
more and increasingly complex software, trade secrets, and pricing models based on usage and
variable feature sets.

UNIT V

CLOUD SERVICES AND FILE SYSTEMS


Industry 4.0 Cheat Sheet – As a result of huge developments in the modern world, manufacturing
is changing. Here we explain key terms: Industry 4.0, Cloud Computing, Smart Factory and the
Internet of Things (IoT).
With the use of computers, automation and cloud technology, factories are becoming
increasingly efficient and “smart”.
Industry 4.0 is the latest phase for the manufacturing sector which has come about because of the
Internet of Things and the accessibility of data. Factories that are known as “Smart Factories” are
becoming more prominent, particularly in Europe.
What can we learn from our European friends? We can learn how and why they are connecting
manufacturing machinery within their production lines, their administration systems and external
suppliers and in some cases, the entire lifecycle of the product. This connectivity between each
stage of the manufacturing process, enables a more streamlined automated production
environment. Fundamentally, it is also driving product innovation that will give entrepreneurs a
competitive edge in the market.
Industry 4.0 Cheat Sheet Key terms explained:

Industry 4.0
Industry 4.0 (the ‘fourth industrial revolution’) refers to the current trend of improved
automation, machine-to-machine and human-to-machine communication, artificial intelligence,
continued technological improvements and digitalisation in manufacturing.

Industry 4.0 Cheat Sheet – Industry 4.0 explained


Industry 4.0 has been driven by four disruptors:

 a rise in data volumes


 computational power and connectivity
 emergence of analytics and business intelligence capabilities – e.g. new forms of human-
machine interaction such as touch interfaces and augmented-reality systems
 improvements in transferring digital instructions to the physical world such as advanced
robotics and 3D printing.

Smart Factory (Another term for Industry 4.0)


The terms “Smart Factory,” “Smart Manufacturing,” “Intelligent Factory” and “Factory of the
Future” all describe a vision of what industrial production will look like in the future.
In this vision, the Smart Factory will be much more intelligent, flexible and dynamic.
Manufacturing processes will be organised differently, with entire production chains – from
suppliers to logistics to the life cycle management of a product – closely connected across
corporate boundaries.
Individual production steps will be seamlessly connected.
The processes impacted will include:

 Factory and production planning.


 Product development.
 Logistics.
 Enterprise resource planning (ERP).
 Manufacturing execution systems (MES).
 Control technologies.
 Individual sensors and actuators in the field.

In a Smart Factory, machinery and equipment will have the ability to improve processes through
self-optimisation and autonomous decision-making. This is in stark contrast to running fixed
program operations, as is the case today.

Internet of Things (IoT) within Industry 4.0


IoT is the concept of connecting any device with an on/off switch to the Internet (and/or to each
other). This includes everything from cellphones, coffee makers, washing machines, headphones,
lamps and wearable devices to almost anything else you can think of. It also applies to
components of machines, for example the jet engine of an airplane or the drill of an oil rig. If it
has an on/off switch, chances are it can be part of the IoT.

Cloud Technology / Cloud Computing within Industry 4.0


Cloud computing means storing and accessing data and programs over the Internet instead of
your computer’s hard drive. The cloud is just a metaphor for the Internet.
Industry 4.0 – An example of a Smart Factory

A pilot facility, developed by The German Research Centre for Artificial Intelligence (DFKI) in
Kaiserslautern, Germany, is demonstrating how a “smart” factory can operate.
This pilot facility uses soap bottles to show how products and manufacturing machines can
communicate with one another. Empty soap bottles have RFID tags attached to them, and these
tags inform machines whether the bottles should be given a black or a white cap. A product that
is in the process of being manufactured carries a digital product memory with it from the
beginning and can communicate with its environment via radio signals. This product becomes a
cyber-physical system that enables the real world and the virtual world to merge.
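The tag-driven routing described in the DFKI pilot can be sketched in a few lines. The dict-based "digital product memory" and its field names below are hypothetical stand-ins for the data an RFID tag would actually carry.

```python
def capping_station(bottle):
    """Read the bottle's digital product memory and apply the cap it
    requests, then record the step in the product's own history."""
    cap = bottle["memory"].get("cap_colour", "white")  # default if unset
    bottle["cap"] = cap
    bottle["memory"]["history"].append("capped:" + cap)
    return bottle

# Each "bottle" carries its own product memory, as in the pilot facility.
bottles = [
    {"tag": "RFID-001", "memory": {"cap_colour": "black", "history": []}},
    {"tag": "RFID-002", "memory": {"cap_colour": "white", "history": []}},
]
for b in bottles:
    capping_station(b)
print([b["cap"] for b in bottles])  # prints ['black', 'white']
```

The point of the pattern is that the machine holds no per-product configuration: the product itself tells each station what to do, which is what makes the line flexible.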
Industry 4.0 – Why should you care about all this?

Within the manufacturing industry today, there are said to be two groups: the traditional, first
generation who may be struggling in the Australian market due to a lack of desire to invest in
technology, and the innovators, who are finding themselves more success in a tough climate
because they are open to adopting new ways.
Annaliese Kloe, Managing Director of Headland Machinery explains that “Industry 4.0 is being
spoken about everywhere. In particular, it was widely reflected at EuroBLECH 2016. It will
widely change the approach to the way that manufacturers work, so if you aren’t looking into
this now then you’ll be left behind. It will revolutionise your business, so it is vital to get on
board.”
With an increasingly digital future ahead of us, this new era for manufacturing looks set to
transform businesses worldwide. It is imperative for manufacturers to consider new technologies
arising and explore how they can adapt their processes to comply with the expectations of the
modern world.
1 Introduction

This text provides basic information about Cloud Computing, a new and fast-growing field. It is
structured into seven chapters for better orientation and easy understanding. The first chapter
covers the basics, such as the definition, key attributes and history.

1.1 Definition

Cloud Computing became a buzzword around 2010, and many experts disagree on its exact
definition. The most widely used definitions include the notion of web‐based services that are
available on demand from an optimized and highly scalable service provider. Given this
disagreement, a working definition is provided here to aid understanding: the cloud is IT as a
service, delivered by IT resources that are independent of location. It is a style of computing in
which dynamically scalable and often virtualized resources are provided as a service over the
Internet, where end‐users have no knowledge of, expertise in, or control over the technology
infrastructure (the cloud) that supports them. [1]

1.2 Attributes

Before the attributes are defined, the term cloud itself should be explained. A cloud symbol has
long been used in IT, particularly in network diagrams, to represent a kind of black box whose
interfaces are well known but whose internal routing and processing are not visible to network
users. Key attributes of cloud computing:

 Service‐Based: Consumer concerns are abstracted from provider concerns through


service interfaces that are well‐defined. The interfaces hide the implementation details
and enable a completely automated response by the service provider. The service could
be considered "ready to use" or "off the shelf" because it is designed to serve the specific
needs of a set of consumers, and the technologies are tailored to that need rather than the
service being tailored to how the technology works. The articulation of the service feature
is based on service levels and IT outcomes such as availability, response time,
performance versus price, and clear and predefined operational processes, rather than
technology and its capabilities. In other words, what the service needs to do is more
important than how the technologies are used to implement the solution.
 Scalable and Elastic: The service can scale capacity up or down as the consumer
demands at the speed of full automation (from seconds for some services to hours for
others). Elasticity is a trait of shared pools of resources. Scalability is a feature of the
underlying infrastructure and software platforms. Elasticity is associated with not only
scale but also an economic model that enables scaling in both directions in an automated
fashion. This means that services scale on demand to add or remove resources as needed.

 Shared: Services share a pool of resources to build economies of scale and IT resources
are used with maximum efficiency. The underlying infrastructure, software or platforms
are shared among the consumers of the service (usually unknown to the consumers). This
enables unused resources to serve multiple needs for multiple consumers, all working at
the same time.

 Metered by Use: Services are tracked with usage metrics to enable multiple payment
models. The service provider has a usage accounting model for measuring the use of the
services, which could then be used to create different pricing plans and models. These
may include pay‐as‐you‐go plans, subscriptions, fixed plans and even free plans. The
implied payment plans will be based on usage, not on the cost of the equipment. These
plans are based on the amount of the service used by the consumers, which may be in
terms of hours, data transfers or other use‐based attributes delivered.

 Uses Internet Technologies: The service is delivered using Internet identifiers, formats
and protocols, such as URLs, HTTP, IP and representational state transfer Web‐oriented
architecture. Many examples of Web technology exist as the foundation for
Internet‐based services. Google's Gmail, Amazon.com's book buying and eBay's auctions all
exhibit the use of Internet and Web technologies and protocols. More details and examples are
given in chapter four, Integration. [2]
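To make the "Metered by Use" attribute above concrete, here is a toy billing sketch. The plan shapes, rates and field names are invented for illustration; real providers use far richer accounting models.

```python
def monthly_bill(usage_hours, plan):
    """Compute a charge under different payment models (toy example)."""
    if plan["type"] == "pay_as_you_go":
        # Pure usage-based pricing: every hour is billed.
        return usage_hours * plan["rate_per_hour"]
    if plan["type"] == "subscription":
        # Flat fee with an included allowance, plus overage charges.
        overage = max(0, usage_hours - plan["included_hours"])
        return plan["flat_fee"] + overage * plan["overage_rate"]
    if plan["type"] == "free":
        return 0.0
    raise ValueError("unknown plan type")

payg = {"type": "pay_as_you_go", "rate_per_hour": 0.12}
sub = {"type": "subscription", "flat_fee": 50.0,
       "included_hours": 500, "overage_rate": 0.10}
print(monthly_bill(400, payg))  # prints 48.0
print(monthly_bill(600, sub))   # prints 60.0
```

The same usage metrics feed all of the pricing plans; only the pricing function differs, which is why metering is treated as a core attribute rather than a billing detail.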

1.3 History

The history of Cloud Computing surprisingly began almost 50 years ago. The father of the idea
is considered to be John McCarthy, a professor at MIT in the US, who in 1961 first presented the
idea of sharing computer technology in the same way as, for example, electricity is shared. Many
households and firms possess a variety of electrical appliances but no power plant of their own;
one power plant serves many customers. In this analogy, the power plant is the service provider,
the distribution network is the Internet, and the households and firms are the computers. [3]

Since that time, Cloud computing has evolved through a number of phases which include grid
and utility computing, application service provision (ASP), and Software as a Service (SaaS).
One of the first milestones was the arrival of Salesforce.com in 1999, which pioneered the
concept of delivering enterprise applications via a simple website. The next development was
Amazon Web Services in 2002, which provided a suite of cloud‐based services including
storage, computation and even human intelligence. Another big milestone came in 2009 as
Google and others started to offer browser‐based enterprise applications through services such as
Google Apps. [4]

2 Architecture

Basic information about the architecture is provided in this chapter, together with explanations
of relevant terms such as virtualization, front end/back end and middleware.

 Virtualization is best described as essentially designating one computer to do the job of


multiple computers by sharing the resources of that single computer across multiple
environments. Virtual servers and virtual desktops allow you to host multiple operating
systems and multiple applications locally and in remote locations, freeing your business
from physical and geographical limitations. [5]

The Cloud Computing architecture can be divided into two sections, the front end and the back
end, connected through a network, usually the Internet. The front end includes the client's
computer and the application required to access the cloud computing system. Not all cloud
computing systems have the same user interface. Services like Web‐based e‐mail programs
leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique
applications that provide network access to clients.

The Back End of the system is represented by various computers, servers and data storage
systems that create the "cloud" of computing services. Practically, Cloud Computing system
could include any program, from data processing to video games and each application will have
its own server.

A central server administers the system, monitoring traffic and client demands to ensure
everything runs smoothly. It follows a set of rules called protocols and uses a special kind of
software called Middleware. Middleware allows networked computers to communicate with
each other. [6]
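The middleware role described above can be illustrated with a toy in-process dispatcher. This is a conceptual sketch, not a real middleware product; the service names and payloads are invented.

```python
class Middleware:
    """Routes front-end requests to back-end handlers by service name,
    so clients never need to know which server implements what."""

    def __init__(self):
        self.servers = {}

    def register(self, service, handler):
        # A back-end "server" announces that it handles this service.
        self.servers[service] = handler

    def request(self, service, payload):
        # A front-end client asks for a service; the middleware routes it.
        if service not in self.servers:
            return {"status": 404, "body": "no such service"}
        return {"status": 200, "body": self.servers[service](payload)}

mw = Middleware()
mw.register("mail", lambda p: "you have %d new messages" % p["inbox"])
mw.register("storage", lambda p: "stored %s" % p["file"])

print(mw.request("mail", {"inbox": 3})["body"])  # prints you have 3 new messages
print(mw.request("video", {})["status"])         # prints 404
```

In a real cloud the registry and routing run over the network with protocols such as HTTP, but the separation is the same: clients speak to the middleware, never directly to a particular back-end machine.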

Public Cloud (external cloud) is a model where services, such as applications and storage, are
available from a provider over the Internet. Free Public Cloud services are available, as well as
pay‐per‐usage and other monetized models. Private Cloud (internal cloud/corporate cloud) is a
computing architecture providing hosted services to a limited number of people behind a
company's protective firewall. It sometimes attracts criticism because firms still have to buy,
build and manage some resources, and thus do not benefit from lower up‐front capital costs and
less hands‐on management, the core promise of Cloud Computing. [7]

Private/Public cloud

Source:
http://www.technologyevaluation.com/research/articles/i-want-my-private-cloud-21964/

3 Cloud computing categories

There are three main categories in CC, Infrastructure as a Service (IaaS), Software as a Service
(SaaS) and Platform as a Service (PaaS). All of them are described below in more details.

 Infrastructure as a Service is a provision model in which an organization outsources the


equipment used to support operations, including storage, hardware, servers and
networking components. The service provider owns the equipment and is responsible for
housing, running and maintaining it. [8]

 Software as a Service is a software distribution model in which applications are hosted


by a vendor or service provider and made available to customers over a network,
typically the Internet. It is becoming an increasingly prevalent delivery model as
underlying technologies that support Web services and service‐oriented architecture
become increasingly available. [9]

 Platform as a Service is an outgrowth of Software as a Service (SaaS). It is a way to rent


hardware, operating systems, storage and network capacity over the Internet. The service
delivery model allows the customer to rent virtualized servers and associated services for
running existing applications or developing and testing new ones. [10]

4 Integration

Once the definition, categories and components needed for the user's solution have been
identified, the next challenge is to determine how to put them all together. This chapter provides
information about Cloud Computing design and integrability, and gives some examples.

4.1 End to end design - definition

It is a major feature of the Internet. The intelligence and functions in an Internet‐based


application reside at both ends of the network (client side and server side), not within the Internet
backbone. The Internet acts as a transport between these two.

 Technical design – in its simplest form, the end‐to‐end design will include the end‐user
device, user connectivity, Internet, cloud connectivity, and the cloud itself.

At a minimum, most organizations will have users who connect to the cloud service remotely
(from home or while travelling) and through the internal network. In addition to connectivity at
the network level, the interfaces at the application layer need to be compatible and it will be
necessary to ensure this connectivity is reliable and secure.

 Devices – cloud services should be device agnostic. They should work with traditional
desktops, mobile devices and thin clients. Unfortunately, this is much easier said than done.
Regression testing on five or ten client platforms can be challenging. A good start is to
bundle the sets of supported devices into separate services. With Microsoft Exchange
2007 you have the option of supporting Windows platforms through HTTP (Outlook Web
Access) and using RPC over HTTP. You can also support Windows Mobile, as well as
Symbian, iPhone and Blackberry devices, using ActiveSync. That list of platforms is just
the beginning. You would also want to take an inventory of existing systems to determine
the actual operating platforms, which might range from Mac OS and Linux to Google
Chrome, Android, Symbian, RIM Blackberry and iPhones.

 Connectivity – in order to assess the connectivity demands you need to identify all
required connections. At high level the connections will include categories such as:

o Enterprise to cloud

o Remote to cloud

o Remote to enterprise

o Cloud to cloud

o Cloud to enterprise

Once you put these together into a high level connectivity diagram you can then proceed to the
next step of identifying and selecting connectivity options. Unless the systems are connected
they cannot operate, at least not for any extended period of time. In the case of cloud computing,
data and processing are both highly distributed, making reliable, efficient and secure
connectivity all the more critical.

 Management – generally, for each component in the design we need to investigate how
we will manage it. This includes all the end‐user devices, the connectivity, and legacy
infrastructure and all the applications involved. The challenge of splitting management
components will be that you may have policies that need to be kept synchronized.
Imagine for example, that you have a minimum password length of 8 characters which is
increased to 10. If you have only two management servers and this is not a frequent type
of occurrence then you can easily apply the change manually. However, if you are dealing
with hundreds of management servers and you receive minor policy changes on a weekly
basis you can imagine how cumbersome and error‐prone the task will become.

 Security – the impact of Cloud Computing on security is profound. There are some
benefits and unfortunately some hurdles to overcome. One challenge in trying to evaluate
security is that it tends to relate to all aspects of IT, and Cloud Computing's impact is
similarly pervasive. Security domains:

 Access control – provides mechanism to protect critical resources from unauthorized


access and modification while facilitating access to authorized users

 Cryptography ‐ presents various methods for taking legible, readable data, and
transforming it into unreadable data for the purpose of secure transmission, and then
using a key to transform it back into readable data when it reaches its destination. [11]
 Operations security – includes procedures for back‐ups and change control
management.
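To illustrate the cryptography domain above, the toy sketch below XORs data with a repeating key, so the same function both scrambles data for transmission and restores it at the destination. This is purely pedagogical: real deployments use vetted ciphers such as AES, never an XOR scheme like this.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric transform: XOR each byte with a repeating key.
    Because XOR is its own inverse, the same call both 'encrypts'
    and 'decrypts'. Illustrative only, not secure."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"transfer 100 units to line 4"
key = b"shared-secret"  # hypothetical key known to both ends

ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                   # unreadable in transit
assert xor_cipher(ciphertext, key) == plaintext  # the key restores it
print("round trip ok")
```

The example captures the essential shape of symmetric cryptography from the bullet above: legible data is transformed into unreadable data for transmission, and the shared key transforms it back at the destination.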

The Cloud Computing Manifesto is a manifesto containing a "public declaration of principles


and intentions" for cloud computing providers and vendors, annotated as "a call to action for the
worldwide cloud community" and "dedicated belief that the cloud should be open". It follows the
earlier development of the Cloud Computing Bill of Rights, which addresses similar issues from
the users' point of view. [12]

4.2 Examples

The most commonly known public examples of a Cloud are Google Apps. This service provides
a number of on‐line applications: a word processor, an application for creating and editing
presentations, document storage and sharing, email functions with connections to MS Outlook
or MS Exchange services, account and contact sharing, instant messenger functions, and more,
all provided by Google. Other Cloud examples include CloudX Technology Group, Yahoo, eBay,
Facebook, Citrix XenApp, AJAX, etc.

Devices using Cloud Computing

 Chromebook ‐ a mobile device running Google Chrome OS. The first two devices for
sale are by Samsung and Acer Inc. and are slated for release on June 15, 2011. [14]
The Chromebook CR‐48 is Google's prototype model. These machines boot up very
quickly and offer basic tools for Internet communication, such as 3G/4G and WiFi
connectivity, a web cam and microphone, a mobile processor and enough RAM for web
browsing; they work on‐line only, and a basic hard drive is optional.
Chromebook by Acer

Source: http://gearburn.com/2011/05/chromebook-awesome-if-it-wasn%E2%80%99t-from-
google/

5 Pros and Cons

Cloud Computing is no exception and has both pros and cons. Some of them are described in
more detail in this chapter.

5.1 Pros

 Lower costs ‐ the principle of sharing resources (HW, SW, infrastructure...) also gives
the customer the benefit of sharing their costs. The customer does not have to buy expensive
hardware, such as powerful workstations, large server solutions and software applications.
The customer needs only an Internet connection and a basic PC with modest requirements;
a simple laptop, netbook or mobile phone is enough. The customer also pays only for actual
usage, whether of services, hardware resources, infrastructure or a combination of these.

 Less IT employees - the customer also does not need to employ an IT department of
such wide scope. It is only necessary to provide a secure connection and a PC with a
web browser. Everything else, including technical support such as back-ups, recovery,
virus protection, updates, software and hardware stability and functionality, helpdesk and
support, is maintained by the provider of the service.
 No special knowledge - the client (customer) also does not need deep knowledge of
hardware or complex software applications at all. The client simply uses the service
through a web browser. Hardware resources can be shared between all clients and
managed according to usage or their requirements.

 Easy to upgrade - a massive increase in performance (such as speed or storage size) is
provided immediately after a simple order and applied with "a few clicks". A data centre
can provide higher performance than a common desktop PC or, on the other hand, can be
very efficient and deliver just what the customer needs at the moment (low performance),
and thus again save resources and money. This approach also saves time and the costs of
new hardware and transport, is power (energy) efficient, and as a result helps the
environment, which is a much discussed issue these days.

 Instant access anywhere - one of the most important benefits is the availability of a
service anywhere. All that is needed to access the service is a computer connected to the
Internet. There is no dependence on platform (PC, Mac, mobile phone, car etc.).

 Security - is a much discussed issue in Cloud Computing service provision and could
be placed in both pros and cons, as you will see shortly. The service is protected by
authorization: users identify themselves using an ID (username) and password (or more
sophisticated methods such as a chip, fingerprint or face detection can be used).
Communication between the client and the provider's servers is secured. The data centre is
protected by firewalls and housed in secured buildings, so generally there is a very low risk
of attack by third parties. On the other hand, a problem could be that the client (customer)
keeps all the data outside their own computer, solely on the provider's servers. This means
the client entrusts the data to the provider (provider company) and has in fact no physical
control over it.

 Requirements - the technology the customer needs is very simple: just a terminal such
as a laptop, desktop, mobile phone or netbook with a web browser, an Internet connection,
and usually an account created with the service provider.

5.2 Cons

 Legal differences – as already mentioned, one particular example can be described.
US companies are obliged to follow the PATRIOT Act (2001), which states that
companies can be monitored and have to provide information and data about clients when
asked to under anti‐terrorist policy.

 Dependence on provider – if a company starts using a Cloud Computing service and
replaces its previous information system or changes its IT structure, it becomes dependent
on its service provider. Risks connected with such a dependency include sudden changes
in prices or contract conditions. The provider could go bankrupt and end its business
activities. Functions and applications might be changed against the will of the customer,
and if the provider suffers technical problems, all the customers are out of service, which
means they are without their data.

 Reputation – Cloud Computing is a very new type of service. Not many companies have
experience with this kind of service and application outsourcing, and many users are still
worried about the security of data transmitted over the Internet.

 Migration costs – in some cases there can be higher start-up costs. A company may have
to invest in user training and in amendments that allow the provider's service to
communicate with the company's current software; in some cases, switching to Cloud
Computing could even lead to a change of business processes.

 Fewer functions – solutions targeted at a wide range of companies cannot provide
company-specific functions and are therefore less flexible.

 Dependence on internet connection – all Cloud Computing applications can be used
online only, so any connection failure could be fatal.

6 Operation

After reading through this chapter you will understand terms such as administration, support
and monitoring.

6.1 Service management

 Service strategy relates very closely to strategic impact. Service providers have only
limited resources and usually receive more requests for services and functionality than they
can provide within their budget. In order to maximize their impact they must therefore
prioritize these services, so the IT organization must determine the value of potential internal
and external services.

 Service design covers all elements relevant to the service delivery including service
catalogue management, service level management, capacity management, availability
management, IT service continuity management, information security management and
supplier management. A key aspect of this design is the definition of service levels in
terms of key performance indicators (KPIs). The key challenge is not to derive a number
of KPIs, but to select a few that are critical to the overall strategy.
Example of KPIs

Source: http://mkhairul.sembangprogramming.com/2008/04/24/key-performance-indicators-kpi-
for-software-development/

 Service transition represents the intersection between project and service management.
In a cloud-based solution this covers not only the initial implementation of cloud
services but also any updates to them, launches of new services, and the retirement and
migration of existing services.

 Service operation is the core of the ITIL model. Its focus is on the day-to-day operations
that are required in order to deliver service to its users at the agreed levels of availability,
reliability and performance. It includes concepts such as event management, incident
management, problem management, access management, request fulfillment and service
desk.
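As an illustration of the KPI idea mentioned under service design, the sketch below computes an availability KPI from outage records and checks it against a service-level target. The function names and the 99.9% target are illustrative assumptions, not taken from ITIL or any specific tool.

```python
# Hypothetical sketch: computing a simple availability KPI.
# Names and the 99.9% target are illustrative assumptions.

def availability_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Availability = (agreed service time - downtime) / agreed service time."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def meets_sla(availability: float, target: float = 99.9) -> bool:
    """Check the KPI against an agreed service-level target."""
    return availability >= target

# A 30-day month has 43,200 minutes; suppose 40 minutes of downtime:
month = 30 * 24 * 60
avail = availability_pct(month, 40)
print(round(avail, 3))   # → 99.907
print(meets_sla(avail))  # → True
```

In keeping with the point above, an organization would track only a handful of such KPIs, each tied directly to a strategic goal, rather than deriving dozens of them.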

6.2 Administration
Since Cloud Computing is primarily web-based, the logical interface for administration is a
portal. It can offer facilities such as billing, analytics, account management, service management,
package installation, configuration, instance flexing, and tracing of problems and incidents.

The boundary between simple service requests and more extensive change management is not
always obvious and depends to a large extent on the organization involved. However, in all
companies there are likely to be services that are too critical for automated change requests.

One major recurring change is the need to perform upgrades to increase functionality, solve
problems and sometimes improve performance. New versions can disrupt services because they
may drop functions, implement them differently or contain undiscovered bugs. It is therefore
important to understand whether they will have any impact on business processes before rolling
them out live. One approach is to stage all services locally and test them with on-premise
equipment before overwriting the production services.

Long-term capacity management is less critical for on-demand services. Elasticity of resources
means that enterprises can scale up and down as demand dictates without the need for extensive
planning. It's also a good idea to verify that your service provider will actually be in a position
to deliver all the resource requirements that you anticipate. Several aspects of capacity planning
have to be evaluated in parallel.
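The elasticity described above can be sketched as a simple scaling rule: add capacity when utilization is high, release it when utilization is low, within fixed bounds. The thresholds and bounds below are invented for illustration; real providers expose far richer auto-scaling policies.

```python
# Illustrative sketch of an elastic scaling rule. All thresholds and
# bounds here are assumptions, not any provider's actual defaults.

def desired_instances(current: int, avg_util: float,
                      high: float = 0.75, low: float = 0.25,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Add capacity when busy, release it when idle, within fixed bounds."""
    if avg_util > high:
        return min(current + 1, maximum)
    if avg_util < low:
        return max(current - 1, minimum)
    return current

print(desired_instances(4, 0.90))  # → 5 (scale up under load)
print(desired_instances(4, 0.10))  # → 3 (scale down when idle)
print(desired_instances(4, 0.50))  # → 4 (steady state)
```

The bounds are what make the rule safe: even a runaway workload cannot request more than the agreed maximum, which is exactly the kind of resource ceiling worth verifying with the provider.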

Managing identities and access control for enterprise applications remains one of the greatest
challenges facing IT today. While an enterprise may be able to leverage several Cloud
Computing services without a good identity and access management strategy, in the long run
extending an organization’s identity services into the cloud is a necessary precursor towards
strategic use of on-demand computing services. Supporting today’s aggressive adoption of an
admittedly immature cloud ecosystem requires an honest assessment of an organization’s
readiness to conduct cloud-based Identity and Access Management (IAM), as well as
understanding the capabilities of that organization’s Cloud Computing providers.
Identity and Access Management Model

Source: http://radio-
weblogs.com/0100367/stories/2002/05/11/enterpriseIdentityAndAccessManagement.html

We will discuss the following major IAM functions that are essential for successful and effective
management of identities in the cloud:

 Identity provisioning/deprovisioning

 Authentication

 Federation

 Authorization & user profile management [15]
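A minimal sketch of these IAM functions follows, using a toy in-memory user store. A real deployment would delegate each step to a directory service or identity provider; every name here is illustrative.

```python
# Toy sketch of provisioning/deprovisioning, authentication and
# authorization against an in-memory store. Illustrative only.
import hashlib

users = {}  # username -> (password_hash, set_of_roles)

def provision(username, password, roles):
    """Create an identity with a hashed credential and a set of roles."""
    users[username] = (hashlib.sha256(password.encode()).hexdigest(), set(roles))

def deprovision(username):
    """Remove the identity entirely; all access is revoked at once."""
    users.pop(username, None)

def authenticate(username, password):
    """Verify the presented credential against the stored hash."""
    entry = users.get(username)
    return entry is not None and entry[0] == hashlib.sha256(password.encode()).hexdigest()

def authorize(username, role):
    """Check whether an authenticated identity holds a given role."""
    entry = users.get(username)
    return entry is not None and role in entry[1]

provision("alice", "s3cret", ["billing", "reports"])
print(authenticate("alice", "s3cret"))  # → True
print(authorize("alice", "billing"))    # → True
print(authorize("alice", "admin"))      # → False
deprovision("alice")
print(authenticate("alice", "s3cret"))  # → False
```

Federation, the remaining function in the list, is what replaces this local store in the cloud: the provider trusts assertions issued by the organization's own identity service instead of holding credentials itself.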

6.3 Monitoring

Part of the incentive of moving to a public cloud is to reduce the amount of internal operational
activity. Much of the internal infrastructure is local, such as printers, scanners and local
equipment. End-user desktops and mobile devices are also closer to on-site operations personnel.
One area that is of particular concern to business continuity is backup. Backups are required for a
variety of reasons including:

 End user access to data that has been removed


 End user access to historical data

 Audits, troubleshooting, IP retention

 Legal requirements for eDiscovery

Problem management refers to tracking and resolving unknown causes of incidents. It is closely
related to Incident management but focuses on solving root causes for a set of incidents rather
than applying what may be a temporary fix to an incident.
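The distinction can be illustrated in a few lines: incidents are individual events, and a problem record is raised when several incidents share the same suspected root cause. The threshold of three and the cause labels below are arbitrary assumptions.

```python
# Sketch of incident vs. problem management: group incidents by
# suspected cause and flag recurring causes as problems.
from collections import Counter

incidents = [
    {"id": 1, "cause": "disk-full"},
    {"id": 2, "cause": "disk-full"},
    {"id": 3, "cause": "bad-password"},
    {"id": 4, "cause": "disk-full"},
]

def recurring_problems(incidents, threshold=3):
    """Causes seen at least `threshold` times warrant a problem record."""
    counts = Counter(i["cause"] for i in incidents)
    return [cause for cause, n in counts.items() if n >= threshold]

print(recurring_problems(incidents))  # → ['disk-full']
```

Each individual incident still gets its temporary fix; the problem record is what drives the permanent resolution of the shared root cause.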

6.4 Support

There is some diversity in the user roles that may require assistance in a cloud solution. There are
two types: end-user and IT support. End-user support should progress in tiers that successively
address more difficult and less common problems. It begins with simple documentation and on-
line help to orient the user and clarify any obvious points of confusion. A self-service portal can
then help to trigger automated processes that fulfill common requests.
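The tiered flow above can be sketched as a simple routing rule, assuming a self-service catalogue: requests the catalogue covers are fulfilled automatically, and everything else escalates to a human tier. The catalogue entries are invented for illustration.

```python
# Sketch of tiered end-user support routing. The self-service
# catalogue contents are illustrative assumptions.

SELF_SERVICE = {"password-reset", "mailbox-quota", "vpn-access"}

def route_request(request: str) -> str:
    """Fulfill catalogued requests automatically; escalate the rest."""
    if request in SELF_SERVICE:
        return "portal: fulfilled automatically"
    return "escalated to tier-1 support"

print(route_request("password-reset"))       # → portal: fulfilled automatically
print(route_request("database-corruption"))  # → escalated to tier-1 support
```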

In addition to end users, there is also a requirement for IT and business users to receive assistance
from the service provider. There must be mechanisms in place for obtaining and sharing
documentation and training on all cloud services and technologies. Vendor architecture diagrams
and specifications for all technical interfaces can help IT staff.

6.5 Control

Most of the legal provisions that relate to cloud computing fall into one of three categories:

 Data privacy

 Electronic discovery

 Notification

There are also connected threats such as data leakage, data loss, non-compliance, loss of service
and impairment of service.

7 Conclusion

From the text and information above, you should now have a basic understanding of what
Cloud Computing is, along with its history, features and architecture. To summarize, Cloud Computing
is a very new and modern technology based on sharing resources (especially software, hardware
and infrastructure). It helps companies, but also individuals, save costs on IT resources. All
data are stored out of the company at the provider's site, which brings both advantages and
disadvantages, especially the problematic issues of security and data privacy. One of the most
common cloud services you may come across as a user is Google Apps.
A File System in the Cloud vs. A Cloud File System

There are two types of cloud-based file systems. One is designed to extend cloud storage into the
organization. The other is designed to allow organizations to run applications in the cloud while
using more traditional file protocols like NFS and SMB. Both have their roles, but an organization
needs to make sure it is picking the right tool for the job.

A Cloud File System

A cloud file system is a file system that creates a hub and spoke method of distributing data. The
“hub” is the central storage area, typically located at a public cloud provider like Amazon AWS,
Microsoft Azure or Google Cloud. The “spokes” are the organization’s local locations (data
centers, branch offices, remote offices). At each spoke a software or hardware appliance is
installed and it acts as a cache for that location’s most active data.

It is important to note that not all vendors claiming to have a cloud file system are created equal.
The distribution of data, while critical, is just the first step. Organizations need to examine how
accurate the vendor’s caching algorithms are and how efficiently they can transfer data to and
from the cloud. Beyond the critical first step of data distribution, organizations need to look for
solutions that provide high performance access, global file locking, granular scaling of capacity
and intelligent archiving.
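The spoke appliance described above behaves much like a least-recently-used cache: hot files stay local, and a miss falls back to the hub. A toy sketch, with invented file names and an arbitrary capacity:

```python
# Toy sketch of a "spoke" cache in a hub-and-spoke cloud file system.
# Capacity and file names are illustrative.
from collections import OrderedDict

class SpokeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.files = OrderedDict()  # filename -> contents, oldest first

    def read(self, name, fetch_from_hub):
        if name in self.files:
            self.files.move_to_end(name)      # hit: mark most recently used
            return self.files[name]
        data = fetch_from_hub(name)           # miss: go to the hub
        self.files[name] = data
        if len(self.files) > self.capacity:
            self.files.popitem(last=False)    # evict least recently used
        return data

hub = {"a.txt": "alpha", "b.txt": "beta", "c.txt": "gamma"}
cache = SpokeCache(capacity=2)
cache.read("a.txt", hub.get)
cache.read("b.txt", hub.get)
cache.read("c.txt", hub.get)      # capacity exceeded, a.txt is evicted
print(list(cache.files))          # → ['b.txt', 'c.txt']
```

How well a vendor predicts which files to keep in this cache (and how cheaply it moves data to and from the hub) is precisely the caching-algorithm quality the paragraph above says organizations should examine.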

A File System in the Cloud

A file system in the cloud is exactly what it sounds like. The vendor creates a file system that
offers traditional file protocols like NFS or SMB to cloud-hosted applications. Essentially, the
vendor provides an instance of their file system and the organization implements it in the cloud
provider of their choice. The organization then allocates the appropriate compute performance
and storage IO.

The goal of these file systems is to speed the migration of applications to the cloud. By using a
file system in the cloud, the organization does not need to re-write the storage IO components of
the application.
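The point in miniature: an application written against the ordinary file API works unchanged whether a path is backed by local disk or by an NFS/SMB mount served from a cloud file system. The example below uses a temporary directory to stand in for the mount point; the function and file names are illustrative.

```python
# The same open()/write()/read() calls work regardless of whether the
# directory is local disk or a mounted cloud file system. The temporary
# directory here stands in for such a mount point.
import os
import tempfile

def save_report(directory: str, text: str) -> str:
    """Write a report using only the standard file API."""
    path = os.path.join(directory, "report.txt")
    with open(path, "w") as f:
        f.write(text)
    return path

with tempfile.TemporaryDirectory() as mount_point:
    path = save_report(mount_point, "quarterly totals")
    with open(path) as f:
        print(f.read())  # → quarterly totals
```

This is the migration saving in practice: only the mount configuration changes, not the application's storage code.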
AWS vs Azure vs Google: Cloud Services Comparison
The most defining cloud battle of the present time is AWS vs Azure vs Google. Choosing one
public cloud from AWS, Azure or Google is a difficult task for anyone who wants to
enter and grow in the cloud world. This section will help you make the right decision.

With the growing importance of Cloud Computing, public cloud services are nowadays in huge
demand. This increasing demand for the public cloud is thus opening the doors to more growth and
opportunities for cloud service providers. In order to grow in the cloud market, cloud companies
are focused on increasing their services while reducing prices in order to lead the public cloud
market.

According to a Gartner survey report, the market for the public cloud is predicted to grow from
$260 billion in 2017 to around $411 billion in 2020.

AWS, Google, and Azure are multi-tenant cloud services based on the cloud computing model,
where the cloud service provider supplies resources like databases, applications, and storage
over the internet.

Public Cloud consists of a wide range of cloud services and products including

 Software as a Service (SaaS),


 Infrastructure as a Service (IaaS), and
 Platform as a Service (PaaS).
Out of these three cloud services, IaaS is the best-known and most-demanded service in the cloud
market, which is expected to cross $45 billion by the end of 2018.


AWS Vs Azure Vs Google Comparison

The public cloud market is governed by the top three public clouds – AWS, Google, and Azure.
The competition among these three is so strong that it cannot be matched by any additional public
cloud provider in the near future.
Amazon Web Services has dominated the public cloud over Microsoft Azure and Google since
2006, when it started offering services. Microsoft Azure and Google are far behind in the race but
are growing continuously to reach the top.

On the basis of features and solutions, the AWS vs Azure vs Google features comparison is shown in the table below:
AWS vs Azure vs Google Features Comparison

Cloud Services Comparison: AWS Vs Azure Vs Google

Cloud Service Providers (CSPs) offer high-quality services with multiple capabilities, excellent
availability, good performance, high security, and customer support. The cloud market is
governed by top three cloud services providers – Amazon Web Services (AWS), Microsoft
Azure, and Google Cloud platform.

Each cloud service provider supplies multiple products over the internet according to user
requirements. The most commonly used cloud services include:

 Compute
 Storage
 Database
 Networking and Content Delivery
 Management tools
 Development Tools
 Security

Let's study the AWS vs Azure vs Google cloud services comparison on the basis of the services offered.
AWS Vs Azure Vs Google: Compute
Compute is a computer's fundamental role; this domain contains services related to compute workloads.
An effective cloud provider has the ability to scale to thousands of nodes in just a few minutes.
Amazon EC2 provides core computing services to configure VMs (virtual machines) with the
use of custom or pre-configured AMIs, while Azure provides a VHD (Virtual Hard Disk), which
is similar to Amazon's AMI, to configure a VM. Google provides the Google Compute Engine as
its core cloud computing service.

On the basis of services provided by the compute domain, the difference between the top three
public clouds is given by the following table.

AWS Vs Azure Vs Google: Compute


AWS Vs Azure Vs Google: Storage

Storage is one of the key functions of cloud services; the services offered by the storage domain are
related to data storage. AWS provides long-running storage services, while the storage services
provided by Microsoft Azure and Google Cloud Platform are also reliable and respectable
options. On the basis of services provided by the storage domain, the difference between the top
three public clouds is given by the table below.

AWS vs Azure vs Google: Storage Services

Both Amazon and Microsoft have been named leaders in Gartner's Infrastructure as a Service
Magic Quadrant 2017; Google and IBM are among those following the leaders.

AWS Vs Azure Vs Google: Database

The database domain provides services related to database workloads. It is worth noting here that
Azure supports big data as well as both NoSQL and relational databases. On the basis of services
provided by the database domain, the difference between the top three public clouds is given by
the following table.
AWS vs Azure vs Google: Database Services

AWS Vs Azure Vs Google: Networking and Content Delivery

Each cloud service provider offers different networks. Amazon’s network is the Virtual Private
Cloud, Azure’s network is the Virtual Network, and Google’s network is the Subnet. On the
basis of services provided by networking & content delivery domain, the difference between the
top three public clouds is given by the table below.

AWS Vs Azure Vs Google: Networking and Content Delivery


AWS Vs Azure Vs Google: Management and Monitoring

Each of the top three public cloud providers offers a range of monitoring and management
services. These services provide visibility into the performance, health, and utilization of
applications, workloads, and infrastructure. On the basis of services provided by the management
and monitoring domain, the difference between the top three public clouds is given by the
following table.

AWS Vs Azure Vs Google: Management and Monitoring


AWS Vs Azure Vs Google: Development Tools

Development tools are used to build, diagnose, debug, deploy and manage multiplatform
scalable applications and services. On the basis of services provided by the development tools
domain, the difference between the top three public clouds is given in the following table.
AWS Vs Azure Vs Google: Development Tools

AWS Vs Azure Vs Google: Security

Amazon provides top-rated cloud security services. Fortinet in Amazon Web Services (AWS)
provides security features to the Amazon Virtual Private Cloud (VPC) in many availability zones
on demand, while in Microsoft Azure, Fortinet supplies optimized security for data and
applications and removes extra security expenditures during migration.

The FortiGate Next-Generation Firewall provides advanced security and critical firewalling for
Google Cloud Platform (GCP). On the basis of services provided by the security domain, the
difference between the top three public clouds is given by the table below.

AWS Vs Azure Vs Google: Security

What is AWS Vs Azure Vs Google Market Share?

In revenue terms, Amazon Web Services (AWS) outshines the cloud market with a 62% market
share, which is more than three times the market share of Azure and five times that of Google
Cloud Platform; Microsoft Azure and Google Cloud Platform have only 20% and 12% market
shares respectively.

The revenue, and thus the market share, of Azure and Google is showing considerable growth
over time, making Google and Microsoft Azure the other two giants of the cloud market after
AWS. These two have all the technology, power, wealth, and marketing needed to attract
enterprises and individual customers to their services.

According to KeyBanc, Amazon lost 6% of its share, while Microsoft Azure moved from 16% to
20% and Google grew its share from 10% to 12% in the cloud business.

What is AWS Vs Azure Vs Google Pricing Structure?

Whether the customer is an individual or an enterprise, all of these cloud services come with a
pricing model. This is a key characteristic of these cloud service providers, as you do not need to
buy a complete cloud solution outright. The pricing model these clouds follow is pay as you go,
meaning you pay on the basis of usage. Comparing AWS vs Azure vs Google, Amazon charges
on an hourly basis while Azure and Google charge on a per-minute basis. One can choose to
make advance payments (prepaid) or monthly payments (postpaid).
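The billing-granularity difference can be made concrete with a small calculation; the $0.10/hour rate is invented. A 90-minute run is billed as two full hours under hourly billing, but as exactly 1.5 hours' worth under per-minute billing.

```python
# Sketch of the billing-granularity difference: hourly billing rounds
# usage up to whole hours; per-minute billing charges fractional hours.
# The rate is an invented example.
import math

def hourly_cost(minutes_used: int, rate_per_hour: float) -> float:
    """Round usage up to whole hours, then charge per hour."""
    return math.ceil(minutes_used / 60) * rate_per_hour

def per_minute_cost(minutes_used: int, rate_per_hour: float) -> float:
    """Charge each minute at 1/60 of the hourly rate."""
    return minutes_used * rate_per_hour / 60

# 90 minutes of a $0.10/hour instance:
print(hourly_cost(90, 0.10))                 # → 0.2 (billed as 2 full hours)
print(round(per_minute_cost(90, 0.10), 2))   # → 0.15
```

For short-lived or bursty workloads this granularity matters; for long-running instances the two models converge.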
