
Unit-2

IoT and M2M: Machine-to-Machine (M2M), Difference between M2M and IoT, SDN (Software Defined Networking) and NFV (Network Function Virtualization) for IoT, Data Storage in IoT, IoT Cloud Based Services.
M2M---

Machine to machine (M2M) is a broad label that can be used to describe any technology that enables networked devices to exchange information and perform actions without the manual assistance of humans.
The first M2M applications were based on communication networks and used for machine monitoring, control, and alarms. M2M is nowadays used for a great number of applications connecting any type of mechanical, electrical, and electronic machine over a wide variety of networks.
A few typical examples are:--

• Remote data collection (state of machines)
• Remote control (management)
• Telemetry (sensors, measurements in real time)
• Remote payment (home banking)
• Wireless in healthcare
• Telematics (intelligent transport, navigation)
• Remote monitoring (access control, management of alarms, emergency services)

M2M forms the basis for a concept known as the Internet of Things (IoT).

Key components of an M2M system include---

• sensors, RFID,
• a Wi-Fi or cellular communications link, and
• autonomic computing software programmed to help a networked device interpret data and make decisions (as sketched below).
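As a rough illustration of that last component, here is a minimal, hypothetical sketch of the "interpret data and make decisions" loop. The sensor source, thresholds, and actions are all assumptions for illustration, not part of any real M2M product.

```python
# Minimal sketch of an M2M decision loop: read a sensor value,
# interpret it against a rule, and act without human assistance.
# read_temperature() and the set point are hypothetical stand-ins.
import random
import time

def read_temperature() -> float:
    """Stand-in for a real sensor read (e.g., over a serial or radio link)."""
    return random.uniform(15.0, 30.0)

HEATING_ON_BELOW_C = 20.0  # assumed set point

def control_loop() -> None:
    for _ in range(5):                    # a few iterations for the demo
        temp_c = read_temperature()
        if temp_c < HEATING_ON_BELOW_C:   # interpret the data...
            print(f"{temp_c:.1f} C -> switching heating ON")  # ...and act
        else:
            print(f"{temp_c:.1f} C -> heating stays OFF")
        time.sleep(1)

if __name__ == "__main__":
    control_loop()
```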
M2M technology is all around us. It’s in our homes and on the commute to work--
• Commuting: if your train is cancelled due to poor weather, a smart alarm clock would determine the extra time you’ll need to take a different route, and wake you up early enough so that you’re not late for work.
• Smart homes: a connected thermostat can automatically switch the heating on when room temperature falls below a certain point. You might also have a remote-locking system enabling you to open the door to a visitor via your smartphone if you’re not at home.
• Health and fitness: wearable devices can track the number of steps you take in a day, monitor your heartbeat, and count calories to determine dietary patterns and work out whether you’re missing vital nutrients.
• Shopping: based on your location, previous shopping experiences, and personal preferences, your local supermarket could ping you a voucher for your favourite groceries when you’re in the area.
M2M in business ---
M2M drives considerable benefits for businesses too. Connected devices collect information about every point of business – from product development, manufacturing, and the supply chain right through to the point of sale – which can be used to identify and eliminate points of inefficiency.
Here are some examples:
• Smart asset tracking: embedded sensors and GPS capabilities keep track of your assets. A fleet of connected delivery trucks could share their location, contents, and state of repair.
• Predictive maintenance: sensors on your equipment detect faults, order replacement components, and schedule repair before the equipment breaks and causes costly downtime (see the sketch after this list).
• Product development: with M2M technology, product development can continue beyond the point of sale. A connected product could feed back information about its state of repair, and how it responds to continued usage, identifying strengths and weaknesses to help influence future production.
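As a rough, hypothetical sketch of the predictive-maintenance idea above: monitor a sensor reading, and when it drifts out of bounds, raise maintenance actions before failure. The vibration limit and the ordering/scheduling actions are assumptions for illustration, not taken from a real system.

```python
# Hypothetical predictive-maintenance rule: flag a machine for service
# when its vibration reading exceeds a warning limit.
from dataclasses import dataclass

VIBRATION_WARN_MM_S = 7.1  # assumed warning threshold (mm/s RMS)

@dataclass
class Reading:
    machine_id: str
    vibration_mm_s: float

def check(reading: Reading) -> None:
    if reading.vibration_mm_s > VIBRATION_WARN_MM_S:
        # In a real deployment these would call out to an ERP/ticketing API.
        print(f"{reading.machine_id}: ordering replacement part")
        print(f"{reading.machine_id}: scheduling repair window")
    else:
        print(f"{reading.machine_id}: within normal limits")

if __name__ == "__main__":
    check(Reading("press-04", 5.2))
    check(Reading("press-07", 9.8))  # triggers the maintenance actions
```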
M2M in city infrastructure---

• Adaptive traffic management: connected cars can sense their location on the road, understand proximity to obstacles or other vehicles, and even share data about available parking spaces with other vehicles and traffic management teams. Sensor nodes placed in each parking space could send data to a real-time application in drivers’ cars via the cloud, letting drivers know where to find available spaces – easing congestion, and saving time and fuel.
• Connected weather insights: the Personal Weather Station Network, part of IBM’s Weather Company solutions, provides hyperlocal forecasts to millions of people around the world. Multiple sensors detect barometric pressure, temperature, wind speed, and humidity, in order to help governments and communities anticipate and act on developing weather conditions before it’s too late.
• Connected buildings: smart buildings capture information such as which sections of a building are most frequently occupied, helping to determine where energy use (lighting and heating, for example) can be reduced without adversely affecting the building’s occupants.
Basis of comparison: IoT vs M2M

• Abbreviation: IoT stands for Internet of Things; M2M stands for Machine to Machine.
• Intelligence: in IoT, devices have objects that are responsible for decision making; in M2M, some degree of intelligence is observed.
• Connection type used: IoT connects via a network and uses various communication types; M2M connections are point to point.
• Communication protocol used: IoT uses Internet protocols such as HTTP, FTP, and Telnet; M2M uses traditional protocols and communication technology techniques.
• Data sharing: in IoT, data is shared between other applications that are used to improve the end-user experience; in M2M, data is shared with only the communicating parties.
• Internet: IoT requires an Internet connection for communication; M2M devices are not dependent on the Internet.
• Type of communication: IoT supports cloud communication; M2M supports point-to-point communication.
• Computer system: IoT involves the usage of both hardware and software; M2M is mostly hardware-based technology.
• Scope: IoT connects a large number of devices and its scope is large; M2M has limited scope for devices.
• Business type used: IoT serves Business to Business (B2B) and Business to Consumer (B2C); M2M serves Business to Business (B2B).
• Open API support: IoT supports Open API integrations; M2M has no support for Open APIs.
• It requires: IoT requires generic commodity devices; M2M requires specialized device solutions.
• Centric: IoT is information and service centric; M2M is communication and device centric.
SDN
Software-defined networking (SDN) is an umbrella term encompassing several kinds of network technology aimed at making the network as agile and flexible as the virtualized server and storage infrastructure of the modern data center.

Software-Defined Networking (SDN) is an emerging architecture that is dynamic, manageable, cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of today’s applications. This architecture decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services.
Software-Defined Networking (SDN) is an approach to networking that uses software-based controllers or application programming interfaces (APIs) to communicate with underlying hardware infrastructure and direct traffic on a network.
• This model differs from that of traditional networks, which use dedicated hardware devices (i.e., routers and switches) to control network traffic. SDN can create and control a virtual network – or control traditional hardware – via software.
• While network virtualization allows organizations to segment different virtual networks within a single physical network, or to connect devices on different physical networks to create a single virtual network, software-defined networking enables a new way of controlling the routing of data packets through a centralized server.
Software-Defined Networking is important--
SDN represents a substantial step forward from traditional networking, in that it enables the following:
• Increased control with greater speed and flexibility: instead of manually programming multiple vendor-specific hardware devices, developers can control the flow of traffic over a network simply by programming an open, standards-based software controller. Networking administrators also have more flexibility in choosing networking equipment, since they can choose a single protocol to communicate with any number of hardware devices through a central controller.
• Customizable network infrastructure: with a software-defined network, administrators can configure network services and allocate virtual resources to change the network infrastructure in real time through one centralized location. This allows network administrators to optimize the flow of data through the network and prioritize applications that require more availability.
• Robust security: a software-defined network delivers visibility into the entire network, providing a more holistic view of security threats. With the proliferation of smart devices that connect to the internet, SDN offers clear advantages over traditional networking. Operators can create separate zones for devices that require different levels of security, or immediately quarantine compromised devices so that they cannot infect the rest of the network.
Software-Defined WSN Prototype
For more information:
https://cse.iitkgp.ac.in/~smisra/theme_pages/sdn/index.html
Difference between SDN and traditional networking--
The key difference between SDN and traditional networking is
infrastructure: SDN is software-based, while traditional
networking is hardware-based. Because the control plane is
software-based, SDN is much more flexible than traditional
networking. It allows administrators to control the network,
change configuration settings, provision resources, and increase
network capacity — all from a centralized user interface,
without the need for more hardware.
There are also security differences between SDN and traditional
networking. Thanks to greater visibility and the ability to define
secure pathways, SDN offers better security in many ways.
However, because software-defined networks use a centralized
controller, securing the controller is crucial to maintaining a
secure network.
Software-Defined Networking (SDN) working

In SDN (like anything virtualized), the software is decoupled from the hardware. SDN moves the control plane that determines where to send traffic into software, and leaves the data plane that actually forwards the traffic in the hardware. This allows network administrators who use software-defined networking to program and control the entire network via a single pane of glass instead of on a device-by-device basis.

There are three parts to a typical SDN architecture, which may be located in different physical locations:
• Applications, which communicate resource requests or information about the network as a whole
• Controllers, which use the information from applications to decide how to route a data packet
• Networking devices, which receive information from the controller about where to move the data
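To make the controller/device split concrete, here is a minimal, hypothetical sketch of the idea: a logically centralized controller computes where traffic should go and pushes matching flow entries down to the switches, which then only forward. The topology, FlowEntry fields, and install_flow() call are illustrative assumptions, not a real SDN controller API.

```python
# Toy illustration of SDN's control/data-plane split: the controller
# (control plane) decides routes; switches (data plane) just match
# packets against the flow table the controller installed.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowEntry:
    match_dst: str  # destination host this rule matches
    out_port: int   # port the switch should forward on

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table: list[FlowEntry] = []

    def install_flow(self, entry: FlowEntry) -> None:
        """Called by the controller; the switch itself makes no decisions."""
        self.flow_table.append(entry)

    def forward(self, dst: str):
        for entry in self.flow_table:
            if entry.match_dst == dst:
                return entry.out_port
        return None  # table miss -> would be sent up to the controller

class Controller:
    """Logically centralized control plane with a global network view."""
    def __init__(self, topology: dict):
        self.topology = topology  # switch name -> {dst host: out port}

    def provision(self, switches: list) -> None:
        for sw in switches:
            for dst, port in self.topology[sw.name].items():
                sw.install_flow(FlowEntry(dst, port))

s1, s2 = Switch("s1"), Switch("s2")
Controller({"s1": {"h2": 2}, "s2": {"h2": 1}}).provision([s1, s2])
print(s1.forward("h2"))  # -> 2: forwarding decided by the controller's rule
```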
SDN----

The physical separation of the network control plane from the forwarding plane, where a control plane controls several devices.
Different models of SDN--
• Open SDN: Network administrators use a protocol like OpenFlow to
control the behavior of virtual and physical switches at the data plane
level.
• SDN by APIs: Instead of using an open protocol, application
programming interfaces control how data moves through the network
on each device.
• SDN Overlay Model: Another type of software-defined networking runs
a virtual network on top of an existing hardware infrastructure, creating
dynamic tunnels to different on-premise and remote data centers. The
virtual network allocates bandwidth over a variety of channels and
assigns devices to each channel, leaving the physical network
untouched.
• Hybrid SDN: This model combines software-defined networking with
traditional networking protocols in one environment to support different
functions on a network. Standard networking protocols continue to
direct some traffic, while SDN takes on responsibility for other traffic,
allowing network administrators to introduce SDN in stages to a legacy
environment.
The SDN Architecture is:
DIRECTLY PROGRAMMABLE
Network control is directly programmable because it is decoupled from forwarding functions.
AGILE
Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
CENTRALLY MANAGED
Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
PROGRAMMATICALLY CONFIGURED
SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
OPEN STANDARDS-BASED AND VENDOR-NEUTRAL
When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
NFV
Network functions virtualization (NFV) allows service providers and operators to abstract network services, such as firewalling and load balancing, into software that runs on basic servers.

NFV virtualizes network services and applications that once ran on hardware appliances. In fact, network functions virtualization could replace many network devices with more flexible software running on bare-metal servers, enabling a new kind of service chaining.

Network functions virtualization (NFV) is the replacement of network appliance hardware with virtual machines. The virtual machines use a hypervisor to run networking software and processes such as routing and load balancing.
Network functions virtualization and software-defined networking are two closely related technologies that often exist together, but not always.
NFV and SDN are both moves toward network virtualization and automation, but the two technologies have different goals.
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing.

Hypervisors make it possible to use more of a system’s available resources and provide greater IT mobility, since the guest VMs are independent of the host hardware. This means they can be easily moved between different servers. Because multiple virtual machines can run off of one physical server with a hypervisor, a hypervisor reduces:
• Space
• Energy
• Maintenance requirements
Benefits of hypervisors

• Speed: hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This makes it easier to provision resources as needed for dynamic workloads.
• Efficiency: hypervisors that run several virtual machines on one physical machine’s resources also allow for more efficient utilization of one physical server. It is more cost- and energy-efficient to run several virtual machines on one physical machine than to run multiple underutilized physical machines for the same task.
• Flexibility: bare-metal hypervisors allow operating systems and their associated applications to run on a variety of hardware types because the hypervisor separates the OS from the underlying hardware, so the software no longer relies on specific hardware devices or drivers.
• Portability: hypervisors allow multiple operating systems to reside on the same physical server (host machine).
NFV allows for the separation of communication services from dedicated hardware, such as routers and firewalls. This separation means network operators can provide new services dynamically and without installing new hardware. Deploying network components with network functions virtualization takes hours instead of the months it takes with traditional networking. Also, the virtualized services can run on less expensive, generic servers instead of proprietary hardware.
Reasons to use network functions virtualization include:---
• Pay-as-you-go: pay-as-you-go NFV models can reduce costs because businesses pay only for what they need.
• Fewer appliances: because NFV runs on virtual machines instead of physical machines, fewer appliances are necessary and operational costs are lower.
• Scalability: scaling the network architecture with virtual machines is faster and easier, and it does not require purchasing additional hardware.
Network Functions Virtualization Works as---

• Network functions virtualization replaces the functionality provided by individual hardware networking components.
• This means that virtual machines run software that accomplishes the same networking functions as the traditional hardware.
• Load balancing, routing, and firewall security are all performed by software instead of hardware components (see the sketch after this list).
• A hypervisor or software-defined networking controller allows network engineers to program all of the different segments of the virtual network.
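As a rough illustration of "load balancing performed by software", here is a minimal, hypothetical sketch of a round-robin load balancer implemented as a plain program: exactly the kind of function NFV moves off a dedicated appliance and onto a generic server. The backend addresses are illustrative placeholders.

```python
# Minimal sketch of a virtual network function: a round-robin load
# balancer running as ordinary software instead of a hardware appliance.
# Backend addresses are made-up placeholders.
import itertools

class RoundRobinLoadBalancer:
    def __init__(self, backends: list):
        self._cycle = itertools.cycle(backends)

    def pick_backend(self) -> str:
        """Return the backend that should receive the next request."""
        return next(self._cycle)

lb = RoundRobinLoadBalancer(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])
for request_id in range(5):
    print(f"request {request_id} -> {lb.pick_backend()}")
```

Because a function like this runs as ordinary software, it can be deployed, scaled, or moved between servers in hours rather than waiting on new hardware.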
Advantages of NFV:--

• Reduced space needed for network hardware
• Reduced network power consumption
• Reduced network maintenance costs
• Easier network upgrades
• Longer life cycles for network hardware
• Reduced maintenance and hardware costs
NFV Architecture:

There are seven key segments of NFV architecture:
• Virtual Network Function (VNF)
• Element Management (EM)
• VNF Manager
• Network Function Virtualization Infrastructure (NFVI)
• Virtualized Infrastructure Manager (VIM)
• NFV Orchestrator
• Operation Support System/Business Support System (OSS/BSS)
Advantages of Network Functions Virtualization---
• Reduced hardware needs – by virtualizing your infrastructure you minimize the amount of hardware you need to purchase and maintain.
• Saving space and power – one of the issues with hardware is that it takes up space and needs to be powered and cooled in order to stay operational. This isn’t the same for virtual services, which can be managed entirely with software.
• Lower time to release services – one can deploy networking services at a faster rate than is possible with hardware. Every time the requirements of your enterprise change, you can make a change and keep up quickly.
• Scalability – being able to upscale and downscale services on demand provides you with the long-term capacity potential that you may need to be successful in the future.
Network Functions Virtualization (NFV) has these important features:

• Provides a new management system for out-of-date hardware
• Steps toward virtualization of networks
• Suitable for assisting in the creation of a hybrid network
• Imposes new functions, such as firewalls, on existing network devices, such as routers
• Enables the network to be remapped virtually without moving physical devices
• Virtual device functions can be replicated on a new site
• Creates extra devices out of existing hardware, like VM-to-server mapping
• Enables easier upgrades of device operational software
NFV vs SDN

NFV – NFV is used to optimize network services by taking network functions away from hardware. Network functions run at the software level so that provisioning can take place more efficiently.
SDN – SDN separates the control plane from the forwarding plane and provides a top-down perspective of the network infrastructure. This allows the user to provision network services as they are needed.
Both of these technologies turn legacy networks on their head in favor of a software-based networking approach. Virtualizing networking services allows resources to be provisioned faster and more efficiently in a way that supports scalability.
These two don’t need to be used together, but they complement each other in a number of ways--

For instance, with SDN you can enable network automation to determine where network traffic is sent. NFV can complement this by allowing you to manage routing controls at the software level.

Combining the two allows you to mix automation with software-level routing to create the most efficient service across the network.
SDN, NFV, Network Virtualization (NV) & White box networking (bare metal switching) are all complementary approaches. They each offer services:
SDN: separates the network’s control (brains) and forwarding (muscle) planes and provides a centralized view of the distributed network for more efficient orchestration and automation of network services.
NFV: focuses on optimizing the network services themselves. NFV decouples the network functions, such as DNS, caching, etc., from proprietary hardware appliances, so they can run in software to accelerate service innovation and provisioning, particularly within service provider environments.
NV: ensures the network can integrate with and support the demands of virtualized architectures, particularly those with multi-tenancy requirements.
White Box: uses network devices, such as switches and routers, that are based on “generic” merchant silicon networking chipsets available for anyone to buy, as opposed to proprietary silicon chips designed by and for a single networking vendor.
1. The Basic Idea:
SDN separates control and data and centralizes control and programmability of the network.
NFV transfers network functions from dedicated appliances to generic servers.
2. Areas of Operation:
SDN operates in a campus, data centre and/or cloud environment.
NFV targets the service provider network.
3. Initial Application Target:
SDN software targets cloud orchestration and networking.
NFV software targets routers, firewalls, gateways, WAN, CDN, accelerators and SLA assurance.
4. Protocols:
SDN – OpenFlow; NFV – None.
5. Supporting organization:
SDN: Open Networking Foundation (ONF)
NFV: ETSI NFV Working Group
With SDN, NFV, and NV, network functions and services can finally be decoupled from the hardware, so that they are no longer constrained by the box that delivers them. This is the reason why SDN and NFV have become the key to building networks that can:
Enable Innovation: enabling organizations to create new types of applications, services and business models.
Offer New Services: creating new revenue-generating services.
Reduce CapEx: allowing network functions to run on off-the-shelf hardware.
Reduce OpEx: supporting automation and algorithmic control through increased programmability of network elements to make it simple to design, deploy, manage and scale networks.
Deliver Agility and Flexibility: helping organizations rapidly deploy new applications, services and infrastructure to quickly meet their changing requirements.
SDN or NFV or Both?

Software-defined networking (SDN), network functions virtualization (NFV), and network virtualization (NV) are all complementary approaches. They each offer a new way to design, deploy, and manage the network and its services, as already summarized in the SDN, NFV, NV, and White Box descriptions above.
Data Storage in IoT

Consider the volume of data created in transportation alone. Every autonomous car on the road by 2020 is expected to generate 2 petabytes (PB) of data each year, and a single airplane will generate 40 TB of data daily.

Cars and planes cannot push all this data into the cloud in real time. Even if 5G data networks were ubiquitous, there may not be enough bandwidth to handle that kind of data transfer in real time. As a result, connected cars and planes today are using on-board local storage to capture and cache data till they can move it to the cloud via a high-speed network.
Data storage: this layer stores data collected from sensors and devices at the edge or cloud for long-term or short-term applications. The edge gateway provides functionalities such as sensor data aggregation, pre-processing of the data, and securing connectivity to the cloud. In the cloud, there are various database management systems built for IoT applications. These systems can store and manage those enormous amounts of data for further applications.
IoT deals with any device or sensor sending some details. Those details are then analyzed and processed to provide some feature or functionality based on what happened with that device or system, or to predict something for the system based on the data.
For example, consider telematics.

In the automotive industry, there is a device fitted onto the vehicle which can collect data from the vehicle, including vehicle stats as well as positional data (PVT data).

It can also sense driver behavior in the form of harsh acceleration, harsh braking, and rash turning; this data can also be sent to the system, which will analyze the driver behavior based on the pattern.
So, in a similar way, each IoT device in some way or other is sending data to some system that needs to consume it, process it, and give a predictive or analytical output.
If we take the automotive example again, the device sends this data in the form of raw packets.
Raw packets are nothing but strings that contain the values of various parameters and usually follow some protocol. The raw data is typically sent from the vehicle at a frequency of 1 packet per 5 seconds for normal positional data, and a packet is sent almost immediately when there is an emergency or any alert is generated (a hypothetical packet format is sketched below).
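To make the "string following some protocol" idea concrete, here is a minimal sketch that parses a made-up, comma-separated telematics packet. The field layout, names, and values are invented for illustration and do not correspond to any real telematics protocol.

```python
# Hypothetical raw telematics packet: comma-separated fields inside
# $...# framing. The layout below is invented for illustration only.
#   $<device_id>,<timestamp>,<lat>,<lon>,<speed_kmh>,<alert_flag>#
def parse_packet(raw: str) -> dict:
    body = raw.strip().lstrip("$").rstrip("#")
    device_id, ts, lat, lon, speed, alert = body.split(",")
    return {
        "device_id": device_id,
        "timestamp": int(ts),
        "lat": float(lat),
        "lon": float(lon),
        "speed_kmh": float(speed),
        "alert": alert == "1",  # e.g., harsh braking detected
    }

packet = "$VTS-1042,1697040000,18.5204,73.8567,46.5,0#"
print(parse_packet(packet))
```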

This you would think is normal, as the device sends data at this frequency to the server and the data is to be processed. But in reality, there are thousands of such vehicles plying all over, and each vehicle has one device fitted on it. Now just multiply the frequency by this many vehicles. The amount of data that the microservices have to analyse and process is huge. Typically, the data would be in terabytes for even a single day.
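A rough back-of-the-envelope calculation shows how quickly this adds up; the fleet size and packet size below are assumptions chosen only for illustration.

```python
# Back-of-the-envelope data-volume estimate (all inputs are assumptions).
vehicles = 100_000             # assumed fleet size
packet_bytes = 200             # assumed raw packet size
packets_per_day = 86_400 // 5  # one packet every 5 seconds

daily_bytes = vehicles * packets_per_day * packet_bytes
print(f"{daily_bytes / 1e9:.1f} GB/day")  # ~345.6 GB/day of raw packets alone
```

With larger fleets, richer payloads, or images and video in the mix, this quickly crosses into terabytes per day.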
So far we have considered devices sending raw packets, but IoT devices can send images and recorded audio and video too, which consume much more space than raw data.
Here is where storage plays a crucial role.
The storage is where the raw data, or the IoT data, would be heading, and this data would then interact with microservices or APIs, where the features of the product would be served (either predictive or analytical).
The things that would be expected from storage in the IoT realm are:
• Cloud-based, as the IoT device can access the public cloud easily and send the data.
• Scalable and massive storage is expected.
• Saving the data in a way that it can be accessed fast; this is important, especially for analytics. Edge storage would mean lower latency and real-time analysis (see the sketch after this list).
• Data stored securely, because this data cannot be recreated in most cases.
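As one hypothetical illustration of "saving the data so it can be accessed fast": time-series IoT data is often keyed by device and time, so that the common query (one device over a time range) becomes a cheap range scan. The in-memory sorted list below stands in for whatever database is actually used; the key layout is the point, not the storage engine.

```python
# Sketch of a time-series key layout: (device_id, timestamp) as a sorted
# key makes "one device over a time range" a contiguous range scan.
# A sorted list stands in here for a real time-series/NoSQL store.
import bisect

store = []  # (device_id, ts, value) tuples, kept sorted

def put(device_id: str, ts: int, value: float) -> None:
    bisect.insort(store, (device_id, ts, value))

def query(device_id: str, t_start: int, t_end: int) -> list:
    lo = bisect.bisect_left(store, (device_id, t_start, float("-inf")))
    hi = bisect.bisect_right(store, (device_id, t_end, float("inf")))
    return [v for (_, _, v) in store[lo:hi]]

put("VTS-1042", 100, 46.5)
put("VTS-1042", 105, 48.0)
put("VTS-2000", 100, 30.2)
print(query("VTS-1042", 100, 110))  # -> [46.5, 48.0]
```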
So, is the cloud really the best place to keep all that data?
The not-so-simple answer is: not always.

One can’t perform real-time analytics on cloud-based data. By the time it gets into the cloud, the data is historical. If a sensor in a factory detects an out-of-bounds condition on a shop-floor device, real-time information is critical for a timely response (a sketch of this edge-first pattern follows).
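Here is a minimal, hypothetical sketch of that edge-first pattern: react to an out-of-bounds reading locally and immediately, and only batch everything for later upload to the cloud. The threshold, trip_alarm(), and upload function are illustrative assumptions.

```python
# Edge-first pattern: act on out-of-bounds readings locally (real time),
# buffer everything for a later bulk upload to the cloud.
# Threshold and actions are made up for illustration.
PRESSURE_MAX_KPA = 850.0
cloud_buffer = []

def trip_alarm(reading: float) -> None:
    print(f"ALARM: {reading} kPa out of bounds -> shutting valve NOW")

def on_reading(reading: float) -> None:
    if reading > PRESSURE_MAX_KPA:
        trip_alarm(reading)       # handled at the edge, no cloud round-trip
    cloud_buffer.append(reading)  # historical copy still goes to the cloud

def flush_to_cloud() -> None:
    # Stand-in for a real batched upload over a high-speed link.
    print(f"uploading {len(cloud_buffer)} readings for offline analytics")
    cloud_buffer.clear()

for r in [830.0, 842.5, 861.3, 828.9]:
    on_reading(r)
flush_to_cloud()
```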
Challenges of IoT include big data and data analysis for the enterprise

Big data and the internet of things are creating unique challenges for the network. The challenges of the internet of things and big data mean that IT needs to be prepared before jumping in to make a major change.

IoT allows new measurement and control capabilities as well as new distributed applications. For big data, it's the ability to aggregate massive amounts of data via technologies such as Hadoop and distributed storage systems. The enormous quantities of data generated by the growth of IoT feed the need for big data capabilities.

IoT will generate an incredible amount of data. Unfortunately, as IoT has progressed, it is not always easy to know which data will be important.
Think of traffic cameras on busy streets and the hundreds if not thousands of images they capture. While much of that might never be used, a few single frames might prove critical in understanding an accident or a traffic violation.

Better data intelligence and pattern recognition tools are becoming increasingly necessary across all areas of IoT. This will give a deeper understanding of the inherent value in captured data—whether in the cloud or at the edge.
IoT devices normally upload data to a collector inside the
data center -- or at least inside the internal network.
It's impractical to think that one million distributed consumer
IoT devices will enter the network from a single entry point.
There's not a one-size-fits-all solution to the problem of data
collection.
A significant portion of the design is dependent on where
the data analysis will take place. In most big data examples,
the information is centrally located and processed as part of
a centralized application.
IoT Cloud (Salesforce IoT Cloud)

IoT Cloud is a platform from Salesforce.com that is designed to store and process Internet of Things (IoT) data. The IoT Cloud is powered by Thunder, described as a "massively scalable real-time event processing engine."
The platform is built to take in the massive volumes of data generated by devices, sensors, websites, applications, customers and partners, and initiate actions for real-time responses.
For example, wind turbines could adjust their behavior based on current weather data; airline passengers whose connecting flights are delayed or cancelled could be rebooked before the planes they are on have landed.
In another context, IoT Cloud can provide business users with a much more comprehensive and integrated perspective on customers, without requiring technical expertise or the services of a data analyst.
The platform can take in billions of events a day, and users can build rules that specify which events to act on and what actions to take (a toy version of such event rules is sketched below). IoT Cloud is data-format- and product-agnostic; output connectors allow communication with third-party services.
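To illustrate the "rules that specify events to act on" idea in general terms, here is a tiny, hypothetical event-rule matcher. This is not the Salesforce IoT Cloud API; the rule structure, event names, and actions are invented for illustration.

```python
# Toy event-rule engine: each rule names an event type, a condition,
# and an action. Purely illustrative; not any vendor's actual API.

rules = [
    ("flight.delayed",
     lambda e: e["delay_min"] > 45,
     lambda e: print(f"rebooking passenger {e['passenger_id']}")),
    ("turbine.wind",
     lambda e: e["gust_ms"] > 25,
     lambda e: print(f"feathering blades on turbine {e['turbine_id']}")),
]

def handle(event_type: str, event: dict) -> None:
    for rtype, condition, action in rules:
        if rtype == event_type and condition(event):
            action(event)

handle("flight.delayed", {"passenger_id": "P123", "delay_min": 60})
handle("turbine.wind", {"turbine_id": "T7", "gust_ms": 31.0})
```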
10 Best Internet of Things (IoT) Cloud Platforms
At the moment, the internet is run mainly by humans. The majority of communication, messages, and data happens between people, through desktops, laptops, and smartphones. This is changing. A whole new category of devices is starting to take over the internet. These devices aren’t run by people and don’t send messages to people either. They are machines that talk to other machines, and they’ve been given the simple name ‘Things’.
Some examples of ‘Things’ include----
• Temperature sensors
• Traffic sensors
• Flow rate sensors
• Energy usage monitors
CLOUD PLATFORMS-AS-A-SERVICE
As these devices start to become connected, we need a place to send, store, and process all of the information. Setting up your own in-house system isn’t practical anymore. The cost of maintaining, upgrading and securing a system is just too high, and there are some great services available.
1. AMAZON WEB SERVICES IOT PLATFORM

Amazon dominates the consumer cloud market. They were the first to really turn cloud computing into a commodity, way back in 2004.
It’s an extremely scalable platform, claiming to be able to support billions of devices and trillions of interactions between them.
Each IoT interaction can be thought of as a message between a device and a server (a sketch of such a message follows the service list below).
AWS IoT provides---
• Amazon S3
• Amazon DynamoDB
• AWS Lambda
• Amazon Kinesis
• Amazon SNS
• Amazon SQS
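As a small sketch of one such device-to-server message: AWS IoT Core speaks MQTT over TLS, so a device can publish a JSON reading to a topic. The endpoint, topic name, and certificate file names below are placeholders you would replace with your own, and this uses the widely available paho-mqtt client library (1.x-style constructor).

```python
# Minimal device-to-cloud message over MQTT/TLS (paho-mqtt, 1.x-style API).
# Endpoint, topic, and certificate paths are placeholder assumptions.
import json
import paho.mqtt.client as mqtt

ENDPOINT = "your-endpoint-ats.iot.us-east-1.amazonaws.com"  # placeholder

client = mqtt.Client(client_id="sensor-01")
# AWS IoT authenticates devices with mutual TLS (X.509 device certs).
client.tls_set(ca_certs="root-ca.pem",
               certfile="device.pem.crt",
               keyfile="device.private.key")
client.connect(ENDPOINT, port=8883)
client.loop_start()

reading = {"deviceId": "sensor-01", "tempC": 22.5}
client.publish("sensors/temperature", json.dumps(reading), qos=1)
client.loop_stop()
client.disconnect()
```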
2. MICROSOFT AZURE IOT HUB

Microsoft has cloud storage, machine learning, and IoT services, and has even developed its own operating system for IoT devices. This means it intends to be a complete IoT solution provider.
3. IBM WATSON IOT PLATFORM

IBM is another IT giant trying to set itself up as an Internet of Things platform authority. They try to make their cloud services as accessible as possible to beginners, with easy apps and interfaces.
4. GOOGLE CLOUD PLATFORM

Google claims that “Cloud Platform is the best place to build IoT initiatives, taking advantage of Google’s heritage of web-scale processing, analytics, and machine intelligence”. Their focus is on making things easy and fast for your business, where instant information is expected.
5. ORACLE

Oracle is a platform-as-a-service provider that seems to be focusing on manufacturing and logistics operations. They want to help you get your products to market faster.
6. SALESFORCE
Salesforce specializes in customer relations management. Their cloud platform is powered by Thunder, which is focused on high-speed, real-time decision making in the cloud. The idea is to create more meaningful customer interactions. Their easy point-and-click UI is designed to connect you with your customers.
7. BOSCH

Bosch is a German IT company that has recently launched its own cloud IoT services to compete with the likes of Amazon. They focus on security and efficiency. Their IoT platform is flexible and based on open standards and open source.
The CEO, Volkmar Denner, says: “As of today, we offer all the ace cards for the connected world from a single source. The Bosch IoT Cloud is the final piece of the puzzle that completes our software expertise. Bosch is now a full-service provider for connectivity and the internet of things.”
8. CISCO IOT CLOUD CONNECT
Cisco is a global leader in IT services, helping companies
“seize the opportunities of tomorrow”. They strongly
believe that the opportunities of tomorrow lie in the cloud,
and have developed a new ‘mobility-cloud-based software
suite’.
Their goal is to strengthen your relationship with your
customers. What’s more, they actually say their focus is to
help you “find new ways to make money”.
9. GENERAL ELECTRIC PREDIX

General Electric has decided to get into the platform-as-a-service game. They are focused on the industrial market, offering connectivity and analytics at scale for mainstream sectors like aviation.
10. SAP

The SAP homepage reads like a buzzword dictionary for the last couple of years. Here is the title of a press release: “SAP Cloud Platform extends its capabilities for IoT, Big Data, Machine Learning, and Artificial Intelligence”.
